Building a Viable Ethic
An Example of Practical Philosophy
Contents
- Why Ethics?
- What Is a Core Value?
- How Can a Value Be both Constant and Adaptable?
- How Does an Ethic Distinguish between Good and Bad Behavior?
- Are Rational Ethics Too Complicated for Practical Use?
- How Can We Be Sure Ethical Theory Works?
Why Ethics?
We already have conscience, morality, and law, so why do we need ethics?
Conscience is very handy, but it's often
vague and inconsistent, relying on conditioned,
subjective feelings, which differ from one individual to the
next, and even within the same individual, varying with mood and over time. In contrast, both
morality and law
tend to be rigid and can't cover all eventualities.
Consider, for example: Is truth an absolute value never to be
violated, or are there circumstances under which dishonesty is
permissible in order to avert a greater evil—say, lying in
order to prevent the deaths of innocents? We can view
ethics as a way to fill in the gaps in more traditional
methods of behavior control, making them simultaneously both
more adaptable and more consistent. Ethics is a way to
adapt morality to new, unusual, or complicated situations,
particularly where values come into conflict, or where
innovation gives rise to a kind of injustice previously
unknown (e.g., "software piracy," which didn't exist before
computers became common).
Perhaps without even realizing it, most
thinking people
fall back on some form of ethics when confronting situations
in which conscience, law, and traditional morality either
don't seem to apply or else are in conflict with each other. But even
here, ready-made ethics, such as ethics of business,
journalism, or medicine, are mostly applicable only to certain
situations, and are difficult or impossible to adapt to other
contexts.
Thus, it would be nice to have a general-purpose ethic that could be called
upon whenever the need arises, anchored upon some constant
core value, yet adjustable to the particulars of virtually
any conceivable situation.
Let's see how such an ethic might be
developed. As an example, I'll use an ethic upon which I
myself rely (though you might choose something else), an ethic
that I suspect many people nowadays share in general principle, but
which relatively few have taken the pains to think through in
any detail.
What Is a Core Value?
A core value is the value designated as the central
focus of an ethic. It's the one value considered supremely important, more important than any other. It's what commands our attention as the
ultimate guide and tie-breaker in any ethical problem. (Now, I
must beg the reader's pardon. We must briefly indulge in a
little abstract philosophy here, but only to make a point that should
make the following discussion easier to understand.) It's
important to understand that the words
importance and value are relational terms, not things unto
themselves. They're mind-dependent relationships, which express
how necessary or desirable something is to someone.
Without a someone (a subject with a perceiving and
judging mind) to attribute importance or value to something (a
perceived object), importance and value simply do not exist in
any meaningful sense, and any statement about importance or value in
the absence of such referents is likewise meaningless. To put it another way,
importance and value (and similar terms, such as duty and
purpose) acquire meaning only with respect to both a subject
(mind) and an object (thing, person, action, or idea).
Different people have different notions of what's most
important—family, community, love, security, health, wealth, power,
freedom, justice, the environment, pleasing God, and so forth.
Value does not exist in any of these objects as a measurable
substance. Rather, we as subjects attribute varying degrees of
value to each object, according to the degree of benefit each of us
feels he or she derives from them, or which he or she would lose if
the objects were to be lost. One person feels that justice is
most valuable to him, another that family is most valuable
to her, and others that pleasing God is most important to them.
Now, "pleasing God" might seem an exception to some,
because they hold God to be innately important. Being moral is
how we please God, they say, and only God can know what the true
importance of morality is. However, while we may concede that
the pleasure of God is important to God, it's also important to the
believer, though for a different reason. Cynical though it might
sound, the believer's true motive
is, at least in part, that his own afterlife happiness depends on how pleasing his behavior
is to God. In other words, the believer wants to please God, not
only for the sake of pleasing God, but also (and probably mainly) to earn a pleasant
afterlife, or at least to avoid an unpleasant one. So we see
that, even where the religious believer is concerned, the personal importance of
morality actually distills to its importance to the subject himself,
because of what he hopes to gain by complying with it, or fears to lose
by violating
it.1
Although it might have a less than noble or pious ring, it's most honest to admit that whatever we hold to be of ultimate importance is unquestionably important to us, whether or not anyone else attaches the same importance to that object or to any other. Thus, the ultimate
value for each person (at least for each person who has the courage
and integrity to be flatly honest about the matter) must be
interpreted in terms of "importance to me."
So, what is ultimately important to me?
It's a question we must answer before we can begin to build a
meaningful ethic, and each of us might answer it in different terms.
But if we were to distill all these individual answers, we'd
probably find the final product to be something akin to
happiness—my own happiness.2 This doesn't sound
very noble either; in fact, we might be inclined to dismiss it as
self-indulgent hedonism, base and
unworthy. Still, if we're honest, we have to admit that personal
happiness (coupled with avoidance of pain) is a very effective motivator
for most people, and probably for ourselves. And if we study the
matter at all, we soon realize that hedonism is only the shallowest
sense of happiness.
As most of us learn through experience, real happiness is much more than that, involving such things as
love, health, prosperity, security, trust, social acceptance, and salvation or
vindication; and these require some investment of planning, effort,
and even self-discipline, to achieve and maintain. So, our
notion of happiness—or whatever else we determine our ultimate value
to be—evidently needs to be fleshed out considerably. Once we've
figured out the whole picture, we'll have both a clear goal and a
personal motive to achieve it—our core ethical value—around which we
can build an ethic, which we should then be able to apply to any problem or decision
having a moral aspect. But we obviously aren't there yet;
we've more figuring to do, concerning all the important factors that
affect happiness.
1. Albert Einstein commented, "If
people are good only because they fear punishment and hope for reward,
then we are a sorry lot!" But even Einstein would have to count
himself a member of this "sorry lot," since he was no less motivated
by hope for reward—albeit in intensely personal terms of "unbounded
admiration for the structure of the world so far as our science can
reveal it," not in terms of wealth, power, or salvation.
2. In communally oriented Eastern
traditions, happiness is customarily interpreted in terms of the honorable
reputation
of one's family and community, whereas in Western society happiness is
usually seen in terms of individual success. Although we'll
focus here on a Western view, we're sure to find some overlap if we
broaden our perspective even a little.
How Can a Value Be both Constant and
Adaptable?
Every rational ethic has a core value, which serves as
its compass. An effective ethic's core value must be firm enough
to provide reliable guidance, but also flexible enough to adapt to an
immense variety of situations. All human values are arbitrary,
in that they express human sentiments and interests, to which the rest of the universe is
utterly indifferent. However, we can limit their arbitrariness
by clearly understanding our own natures and needs, and expressing our
values in terms of these. To make an
ethic's core value both reliably firm and sufficiently flexible, we
must define it both consistently and comprehensively, with respect
to who and what we are, and to our legitimate interests. Let's
consider how someone might realistically outline the scope of his
or her concerns against this background reference:
- I am a human being. As a self-aware being, I have an interest in my own happiness.
- Humans are social creatures. They derive benefit from interacting with a social unit1 (which is to say: self, family, friends, coworkers, clients, community, nation, species, and planet).
- Insofar as my happiness is affected by this benefit, I have an interest in the well-being of my social unit.
- So, my behavior should at least not detract from, and should preferably contribute to, the well-being of my social unit.
- In addition, considering that the well-being of each social unit and the happiness of each person derive in part from the results of decisions and behavior of previous generations, it behooves me to consider any effects of my own decisions and behavior on future generations.
As applied to a social unit, the term well-being includes whatever factors enable it to yield the desired benefit
to its members. Among society's benefits are prosperity,
safety, and justice. Among the factors that make such benefits
possible are a just social order and competent leadership.
As the foregoing outline shows, and as any thinking
adult knows from experience, one's own happiness actually entails a
number of responsibilities to the social unit, and the appropriate
context of one's social unit depends on the particular issue in
question. Indeed, we might come to see real, enduring happiness
as synonymous with personal well-being, to include not only pleasure,
but also such essentials as health, security,
education, trade, and ready access to necessities—not to mention justice, domestic tranquility,
common defense, general welfare, liberty, and other benefits we've
come to take for granted since the late eighteenth century.2
One would begin the ethical assessment of any particular behavior with its possible consequences at the highest social level at which the behavior would have a significant impact. For scientific
research, it might be the "planet" or "species" level;
for public service, it's the "nation" and "community" levels;
in the case of marriage, it's the "family" level.
From there, one works one's way down, step-by-step, to the level of
individual self-interest, giving priority to the interests of the
higher relevant levels. Such a "telescoping" range of human
well-being allows us to adjust the scope of our core value to the
social context of the particular issue at hand.
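For readers who like to see a process spelled out mechanically, here is a minimal Python sketch of that telescoping assessment. It is only an illustration under my own assumptions: the level ordering, the weighting scheme, and every name in it are mine, not part of the ethic itself, and the numbers stand in for judgments that in practice must be made thoughtfully, not computed.

SOCIAL_LEVELS = [  # ordered from broadest to narrowest scope
    "planet", "species", "nation", "community",
    "clients", "coworkers", "friends", "family", "self",
]

def assess(effects, highest_relevant_level):
    """Work downward from the highest level at which the behavior has a
    significant impact, giving the broader levels priority.

    effects: dict mapping level name -> estimated effect on well-being
             (positive promotes it, negative harms it, absent = no effect).
    """
    start = SOCIAL_LEVELS.index(highest_relevant_level)
    total = 0.0
    for priority, level in enumerate(SOCIAL_LEVELS[start:]):
        # Broader levels count for more; the rate of decay is an assumption.
        total += effects.get(level, 0.0) / (priority + 1)
    if total > 0:
        return "generally good"
    if total < 0:
        return "generally bad"
    return "ethically neutral"

# Example: a research decision assessed from the "species" level downward.
print(assess({"species": +2.0, "community": +0.5, "self": -1.0}, "species"))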
In addition, the breadth of an adaptable ethic can be
expanded to include areas of particular interest, such as business,
politics, religion, or science. Someone especially interested in
environmental conservation, for example, might incorporate ideas on
resource conservation and renewal into the "planet" level of his or her
ethical
outlook. A Christian would want to include salvation as
a factor in reckonings at the "self" and "family" levels, whereas a
Buddhist would likely consider enlightenment a priority. An
artist or an athlete might prize personal accomplishment, while an
architect or an engineer might value the melding of conceptual form
and practical function as additional factors in a personally
meaningful consideration of well-being. In other words, pursuit
of human well-being needn't be narrowly aimed solely at self- and
species-preservation, but is capable of embracing both the dreams of
individuals and the aspirations of mankind—so long as we don't attempt
to impose our personal or group preferences on others, just as we
expect the same sort of forbearance from those others.
Thus, we can adjust the scope of an ethic to the level
of society appropriate to the issue at hand, and can also take a
broader range of issues into account, all the while being reliably
guided by the ethic's core value (human well-being, in the case of our
example). Next, we'll take a closer look at the assessment
process.
1. Even a hermit is not entirely
self-sufficient. He might raise and prepare his own food, and
make and mend his own clothes. Or he might cut his own timber,
mine and smelt his own minerals, and craft his own tools and weapons.
He might even write his own books and learn a little practical science
on his own, through painful trial and error. But he'd need more
hours than there are in a day to do all of these, all by himself, in
enough quantity to enjoy 24 hours of even a desperately ascetic
standard of living. Specialization, division of labor, trade,
and prosperity are made possible by social interaction. Social
interaction is facilitated by social order. And social order is
made possible by a tacit agreement of individuals to renounce enough of their
personal freedom to abide by common rules and standards, in the
interest of mutual benefit. (Philosopher Thomas Hobbes called
such an agreement a "social contract.")
2. Informed readers will note that the last few of these benefits are
explicitly named in the Preamble of the U.S. Constitution. It's
probably worth mentioning here that socially provided benefits are not
free; they come at a cost, and must be paid for somehow, if not
through charitable contributions or communal sharing, then through
either user fees or general taxation. The benefit of society is
its ability to secure such benefits at greater efficiency, more
equitable uniformity, and lower cost, than individuals could achieve
acting alone.
How Does an Ethic Distinguish between Good and Bad
Behavior?
In earlier times, morality was usually framed in terms
of personal virtue, obedience to God's will as revealed by the priests, and duty to one's king.
But even the ancients had a tough time explaining exactly what virtue
is. They "knew it when they saw it," and they wrote
and argued about
it a lot; but they couldn't seem to reach enduring agreement on its specifics. Then
along came the printing press; people got literate, started reading
scripture and philosophy for themselves, and began drawing their own conclusions.
Nowadays, virtue is more subjective than ever, not everyone agrees on whose God (if any) is in charge, and
in the developed world the Divine Right of Kings has been rendered obsolete by democratic law.
In recent times, ethics have come to center on the consequences
of people's decisions and actions, whether to oneself
(egoism), to others (altruism), to the greatest number
(utilitarianism), or to the government (statism1).
Each of these consequentialist ethics focuses on humanity from a particular viewpoint, and
thus each is arguably more appropriate in some contexts than in others.
Another sort of consequentialist ethic (one which I'm
not aware has a name, but which I'm inclined to call
context-governed ethics2) attempts to address moral problems in a
more dynamically balanced way, adjusting its focus to the social context of
the problem at hand. In general, consequentialist ethics rely extensively on
thoughtful consideration of issues, behaviors, and effects, using
reason, rather than one-size-fits-all answers from
one rulebook or another, to solve problems. Context-governed
ethics, the formulation of whose core value we considered in the
previous article, takes this a step further, in an effort to make
the solution a better fit for any problem at hand, whether it concerns
just a couple of individuals or all of humankind.
Rational ethics acquire their flexibility from the
active use of disciplined brainpower to evaluate information and to make decisions. They
require time and effort, both to formulate initially and to resolve
unusual situations as they arise. Thus, in this form, they are neither well
suited to situations urgently requiring instantaneous response, nor
appealing to the intellectually lazy who prefer textbook answers to
everything. But for people who think of thinking as a normal (or
even obligatory) part of their decision-making, rational ethics
offers ways to make the best (or at least to minimize the worst) of
real-world situations that aren't textbook-simple.
Any morality can deal passably with the simple
opposition of good and
evil, or with the greater of two goods, or even with the lesser of two
evils. But rational ethics can also deal with issues where none
of the available options is all good or all bad, but where each choice
entails a mixture of both benefit and harm. Rational thinking
allows us to weigh options, and to select the solution that yields the
greatest benefit and the least harm in a particular situation.
A consequentialist ethic's rational evaluative process centers on the idea that
a behavior can have any of three kinds of effects with regard to our ethic's core value:
if the behavior
promotes the core value, it's beneficial or "good;" if the behavior works against the value,
it's harmful or "bad;" or if the behavior has no effect related to the
value, it's ethically neutral. In some cases, the effects might be mixed, or depend on particular
circumstances. In such cases we must weigh the individual
effects, as well as the overall effect, to decide whether under the
circumstances the bad effects are negligible or tolerable in view of
the good.
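To make that three-way classification concrete, here is a small sketch in the same illustrative spirit, assuming (a large assumption) that each distinct effect can be expressed as a signed magnitude; the names and the tolerance threshold are mine.

def classify(effects, tolerance=0.1):
    """effects: signed numbers, one per distinct effect on the core value.
    Positive promotes the value, negative works against it, zero is neutral."""
    good = sum(e for e in effects if e > 0)
    bad = -sum(e for e in effects if e < 0)
    if good == 0 and bad == 0:
        return "ethically neutral"
    if bad == 0:
        return "good"
    if good == 0:
        return "bad"
    # Mixed effects: weigh them; small harms may be negligible in view of the good.
    if bad <= tolerance * good:
        return "good (bad effects negligible)"
    if good > bad:
        return "good on balance"
    if bad > good:
        return "bad on balance"
    return "too close to call without more context"

# Example: a mostly beneficial behavior with one minor drawback.
print(classify([+3.0, +1.0, -0.2]))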
To illustrate how the process works, let's use
a relatively simple, common example of behavior: honesty.
We begin by asking a non-leading question of the general form, "How
does this behavior affect our core ethical value?" Specifically,
in this instance, we ask:
How does honesty affect our core value of human well-being?
As we've already noted, we humans are social creatures; our standard
of living is vastly enhanced by interaction with others of our
species. Social interaction permits specialization, division of
labor, production of surpluses, and trade. But these don't just
happen. They depend on a
degree of social order—organized mutual cooperation—to establish the
customary structure to make these things possible. A key
element of this order is reliable information, without which we find
it difficult to make good choices and to avoid errors and
misunderstandings. Moreover, developing the habit of being
honest cultivates one's reputation of being trustworthy, and thus
makes the honest person a preferred business partner or employee. If this reputation is damaged, then the person finds
it more difficult to interact, because others don't trust him. Thus, it's generally to
everyone's advantage—that is, it enhances both a person's own well-being and that
of his associates and clients—to be consistently honest, even if doing so
occasionally works against his short-term interest. In addition,
setting a positive example for others (and getting good results) inspires them to do likewise;
this increases society's overall level of trust, hence its efficiency,
hence its potential benefit. So, as the saying goes, we see that
honesty is generally the best policy.
But is this true in every case? There's a
timeworn World War II example, in which an occupying Nazi officer
questions a Dutch citizen about where the local Jews are hiding.
If the citizen knows the answer, but suspects that innocents will come
to serious harm if he tells the truth, is he morally obligated to give
a truthful answer anyway? Let's frame the question in terms of our core
ethical value: Would honesty in this instance tend to benefit human
well-being, or to harm it? It would seem in this case
that the effect of an honest answer would very likely be harmful
to human well-being, if not at the citizen's or the officer's "self"
level (assuming the Dutchman can lie convincingly), then certainly at
the "community" level with respect to the citizen's Jewish neighbors.
So, although honesty is generally the best policy, evidently there can be
particular exceptions to that rule. And an exception needn't be
so dire; it might simply be a "white lie" to spare someone's feelings
when no good can come of the truth. For example, we could
be kind to old Aunt Zola by telling her that she looks especially nice
today, when even at her best she resembles a painted cadaver.
Telling her the truth would improve neither her well-being nor ours,
and the gentle lie harms no one.
Thus, we can incorporate the behavior of honesty into
our ethic as generally "good" (promoting well-being), but with the
awareness that honesty under some circumstances might be
"bad" (harmful to well-being). Still, honesty is more
beneficial than harmful in such a majority of cases that we'd
encourage it as a rule, even if we don't anticipate any personal
benefit from it in some instances. Cultivating the habit of
being honest prompts others to trust us, which offers the direct
advantage of making our interactions with others more fluid and less
stressful. But it also sets a positive example for others, some
of whom might emulate it. The cumulative effect of habitual
honesty works toward the well-being of society, enhancing the benefits
it can provide us, and thus indirectly benefits each of us in the long
term.
We'll likely get similar
results for many other common behaviors. Loyalty, for example,
we find generally beneficial to the stability and cohesion of the
social order; but it can lead to stagnation and corruption if indulged
when not deserved. Courage is generally a virtue if applied with
knowledgeable intent to the benefit of one's family, community, or
nation; but it's a pointless, self-destructive vice if indulged simply
to show off. Toleration is generally good when it promotes
acceptance of diversity, but bad if it condones or encourages destructive behavior.
On the other hand, piety is assumed good by those who prize
unquestioning faith (at least in what they themselves happen to
believe), but of dubious value by those who see it as a barrier to
critical inquiry into mind-controlling doctrines. Thus, whether
a behavior is evaluated as good or bad sometimes depends on the
situation, or occasionally on the perspective of the evaluator.
Now let's discuss a different kind of behavior:
stealing. Again, though we might have some preconceived
notions about stealing, we begin what we hope to be an unbiased
evaluation of it by asking the non-leading question:
How does stealing affect our core value of human well-being?
Again, we must go back to the matter of social order,
which makes possible the benefits that society has to offer to
individuals. One of the major features of social order is that
it allows each person to concentrate his or her labor upon what he or
she does best. This improves efficiency and enables the
production of surpluses, which can then be traded for goods and
services one needs or wants but doesn't have the time, talent, or training to
produce. For instance, a tailor doesn't produce his own food, but he
can produce
far more clothing than he needs for himself. He can thus trade
some of the surplus clothing to a farmer for meat and vegetables, and
to a baker for bread, and to a brewer for beer. Or, he can sell
his surplus clothing for money, which he can then exchange for
whatever he needs.
Stealing disrupts this flow of earned wealth, by
transferring something of value from an earner to a non-earner,
without the earner's consent, and without the non-earner's
contribution of anything of value in return. This sort of
behavior has several effects.
- In the short term, unless he's caught, the thief's well-being is obviously increased, since his wealth has increased without his having to earn it by engaging in productive labor.
- The victim's well-being is diminished, because he has less to show for his labor.
- Instead of enjoying his wealth, the victim feels compelled to invest in security and insurance to protect what's left.
- In the long term, if stealing becomes widespread, earners eventually lose incentive to produce, because there's less chance they'll be able to enjoy the fruits of their labor.3
- Consequently, production and supplies decrease, and market prices for goods and services increase, even beyond whatever's being spent on additional security.
When allowed to go this far unchecked, these effects
lead to others. On the economic side, we'd see inflation,
diminished value of currency and savings, and increased insurance
rates; on the human side, we'd expect to see low morale and public
unrest, increasing demands on law enforcement, and resulting erosions
of liberty.
The parasitic practice of stealing introduces a
variety of inefficiencies into the socio-economic system. These
inefficiencies reduce the per-person benefit that society can offer.
In other words, the well-being of society is reduced, and the
well-being of individuals correspondingly suffers, on the whole to a greater degree
than the thief's is enhanced. Indeed, even the thief's future
well-being is adversely affected by society's diminished benefit to
him. So, we see that stealing is generally a bad thing with
respect to our core value.
But is stealing always bad? Suppose we find
ourselves in a tight spot: the economy is in a deep recession.
We've been laid off from work for an extended period, our unemployment
benefits have run out, our savings are used up, and the rent is due.
We can't borrow from family and friends, because they're experiencing
similar hard times, and even the local charities don't have enough
resources to go around. In such a case, we might justify
stealing food to keep our family from starving. Here,
we can't really say that stealing is good, but rational ethics nonetheless
enables us to choose stealing as the lesser of two evils in some
situations. (It also obligates us to compensate the victim of
our misdeed when we're able.)
So, we see good reason to rule some
behaviors generally good and others generally bad. Yet we must
acknowledge instances in which rules can be legitimately overridden by reason,
under circumstances that would otherwise result in greater detriment
to human well-being.
1. In theory, statism is concern for
the well-being of the state—the governing body, its institutions, and
its resources. In practice, however, statism
hasn't worked out very well. At the national level, its motives
have tended to degenerate into
individual ambition for power on the one hand, and fear of being
sacked (or worse) on the other. In extreme cases, it becomes
oligarchy (as in the People's Republic of China) or autocracy (as in
the Soviet Union under Stalin), while the genuine interests of a
healthy and beneficent state are ignored or trampled along with civic
interests of the people. Statism is listed here, not because there're
any current working examples, but simply to round out the spectrum of
theories (some of which actually do work).
2. While working on my own brand of
humanism in the 2000s, I came up with the idea of context-governed
ethics. It often turns out that concepts I've thought
of are not new, but have already been worked on by others for years.
So far, though, I haven't encountered any other work on this or a
parallel line of thought, so perhaps it's original.
3. This is analogous to what happened
in the 13th century, when many productive societies were conquered,
one by one, by Mongol warlord Genghis Khan. Deprived of the
fruits of their labor, the frustrated victims gave up producing.
And when the stock of goods that could be extorted from them had been
depleted, the invaders then had to move on to conquer
and plunder another productive civilization in order to survive.
Thievery in any form is essentially parasitic; it consumes without
producing anything of value, and thus imposes a burden on society.
Are Rational Ethics
Too Complicated for Practical Use?
The adaptability of rational ethics comes at a price:
it's clearly thought-intensive, in contrast to rule-based morality, which
requires only unquestioning obedience to memorized or documented lists
of dos and don'ts. Rational ethics requires an
investment in preparation, discipline, and effort, which not everyone
is prepared to make. But for those who are, the superior results
more than compensate for the time and effort expended, and are thus
well worth the cost.
Once the start-up effort is out of the way, the
day-to-day process can be streamlined. To keep rational ethics from becoming perpetually bogged down in
commonplace situations, we can make it a dual-approach operation, analogous
to the "act" and "rule" versions of utilitarianism.1
When developing a new ethic, or when using an existing ethic to
address a new problem, we must use the "act" approach. The first
few times we encounter a certain kind of problem, this more intensive
method empowers us to analyze it and find what seems a consistently
optimum general response. Once satisfied with the results, we can
tentatively consider
it a "rule" to apply the ethic in this tested way, so long as there're no complications. We may then set aside the more
labor-intensive "act" version to address special cases, such as when
we find two or more rule-based principles in conflict.
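The workflow can be pictured as a simple cache: run the labor-intensive "act" analysis only for new or complicated problems, and reuse its settled result as a "rule" thereafter. The sketch below is merely that picture in Python; the names and the shape of Problem are my own assumptions, not a prescription.

from dataclasses import dataclass

@dataclass
class Problem:
    kind: str                      # e.g., "honesty", "loyalty", "stealing"
    has_complications: bool = False

rules = {}  # kind -> response settled by an earlier "act"-level analysis

def act_analysis(problem):
    # Stand-in for the full, case-by-case weighing of effects on the core value.
    return "considered response for " + problem.kind

def decide(problem):
    if problem.kind in rules and not problem.has_complications:
        return rules[problem.kind]        # "rule" approach: routine case
    response = act_analysis(problem)      # "act" approach: new or special case
    if not problem.has_complications:
        rules[problem.kind] = response    # tentatively adopt it as a rule
    return response

# First encounter uses the "act" approach; later routine cases reuse the rule,
# while a complicated case falls back to the full analysis.
print(decide(Problem("honesty")))
print(decide(Problem("honesty")))
print(decide(Problem("honesty", has_complications=True)))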
To recap our results so far: As a rule, honesty
is the wisest choice in most cases; but occasional misadventures
arising from irrational behavior can sometimes be thwarted by a
prudent falsehood or evasion. As a rule, loyalty and courage in
defense of principles of liberty and justice are laudable; but
overripe allegiance to a corrupt regime stands in the way of needed
change, and foolish bravado is not to be confused with genuine
courage. As a rule, toleration of diversity enriches the social
environment and gives rise to innovation; but acceptance of
ill-founded prejudice gives rise to friction and scapegoating, and
turns society against itself. As a rule, stealing is a parasitic
practice, which burdens not only the victim, but also the rest of
society; however, stealing might be justifiable in desperate
circumstances, if it is the least bad option available.
Once we've evaluated these and many other behaviors
as generally good or generally bad, what we've compiled is a rough equivalent
of a rule-based morality. The difference is that rules developed
in this way are backed by reason and (in many cases) evidence.
They don't just assert that a behavior is good or bad; they
explain why and under what conditions it's good or bad.
We need no longer take morality on blind faith or authority. We can
figure it out. We can understand it. And if we find
something about it that doesn't work properly in today's world—e.g.,
it leads to an unfairness or fails to address an injustice—we can
fix it!
Rules aren't meant to be broken, but conflict with an
ethic's core value occasionally dictates that the rules be subjugated
to the higher priority. (Either that, or the core value itself
needs reexamining.) Even well intended and well crafted rules
aren't perfect; they can't foresee every eventuality. When their
imperfections threaten the very values they're intended to support,
then rule of reason must prevail over rule of thumb. This is
precisely why rational ethics is necessary, if not for every member of
society, then at least for policymakers.
Rational ethics presumes rational thinking, which
means a working knowledge of logic is a necessity, not merely an
option. Ethics practiced by irrational minds isn't just
inferior; it's a travesty, and often a supremely ugly one at that.
1. Utilitarian ethics is divided
into "act" and "rule" approaches. The first
is necessary for identifying and responding intelligently to new or
complex problems. The second relies on experience, and is
simpler and more expedient when dealing with familiar problems with no
complications. There's no reason that the same duality of
approaches couldn't be adapted to any consequentialist ethic.
How Can We Be Sure Ethical Theory Works?
Behavioral standards are important, beyond doubt.
Indeed, we could safely say they're essential to maintaining a society
stable enough to provide the benefits we need and have come to expect.
So they'd better work, because if they don't, we're in serious
trouble. How can we be sure that ethics will deliver as
expected? The traditional view of morality is that it's purely
conceptual; it's not an object, and can't be observed or measured in
any way. Thus, it's entirely outside the realm of scientific
investigation, and has been considered the rightful province of law,
tradition, religion, and philosophy.
But this view is now changing.
Consider that, in the natural sciences, there are things, such as
radio waves and x-rays, which can't be perceived directly, but whose
effects can nonetheless be observed, measured, and controlled.
Work in the social sciences has shown that, while we might be unable
to observe the processes of the human mind directly, we can to some
extent observe both the influences upon mind and the mind's resulting
effects on behavior. The human mind is extraordinarily complex
and highly individualistic; even so, we can observe general patterns
of influence and effect that apply to most normal people, as well as
patterns characteristic of groups afflicted with specific kinds of
psychological aberrations. (Moreover, we've been able to
understand many of these aberrations in physiological terms, and thus
have been able to develop effective treatments for them.) In
sizable populations, careful observation and control of variables can
yield meaningful statistical data showing the correspondence of
behavior to specific stimuli. Although as individuals humans
aren't very predictable, as groups their behavior exhibits statistical
patterns which are highly predictive of the general behavior of larger
populations.
So, why couldn't similar methods be developed to study
and guide ethics? The answer is that there seems no reason they
couldn't, on the condition that all relevant factors are identified
with enough precision to permit reliable independent verification and
rational analysis. With adequate control and honest accounting
for discrepancies, we should be able to identify relationships of
behavior, not only to the things that influence it, but also to its
effects upon clearly defined ethical values. After all, once
concepts of "good" and "bad" have been rescued from the miasma of
vagueness and confusion, and are defined instead in terms of some
specific and relevant value (as our core ethical value is), we then
have a clear reference by which to mark behavioral effects, whether
good or bad.
A problem that ethical experimentation presents is the
very control that must be imposed in order to conduct it in a way that
makes the results reliably consistent with reality. In the
natural sciences, where the objects of study are insensate matter,
energy, and cells, this isn't a problem. But in the social
sciences, where the objects of study are human beings and their
behavior, there arises the question of whether it's ethical to study a
person, or even to expose him or her to potentially hazardous or
embarrassing situations, without his or her informed consent.
However, if we inform the person beforehand, then he or she will
likely behave differently, and thus the results of the experiment
won't reflect "normal" behavior.
The field of psychology has already judged it
unethical to experiment upon people, or to expose them to potential
harm, without their informed consent. However, it has devised
methods of testing capable of obtaining realistic results without
compromising the subject's well-being. For example, a subject
may be told that she'll be observed at random for a total of sixty
minutes over a two-week period, but is not told which sixty
minutes will be observed, or even whether it will be a single hour or
a number of shorter increments. Or he may be told that he might
be exposed to a non-lethal virus, but not told what effects to expect
from it. He or she can then choose whether or not to engage in
the experiment based on the frank but non-specific information
offered. When necessary, the field of ethics might adopt similar
strategies for experimentation. For instance, prospective
subjects could be informed that they might, as part of the test, be
instructed to perform a morally questionable act, or that they might
be the target of such an act, and they could decide whether or not to
participate in the experiment on the basis of this limited
information.
Fortunately for us (though unfortunately for the
victims), examples of detrimental influences and behaviors we might
want to study can often be found both in current events and in
history. Records of crime, genocide, injustice, negligence,
oppression, persecution, slavery, wars, and the like yield a
gut-wrenching abundance of information about misfortune, unsavory
behavior, and their consequences, from ancient times to the present.
In many cases, enough reliable data could be gleaned from historical and
current accounts to provide working hypotheses, or sometimes even an
adequate equivalent of observational results, provided that the biases
of the perpetrators and of those who did the recording can be
ascertained, and adjusted for if necessary.
As a powerful example of how historical events provide
evidence to guide human action, let's compare the outcomes of the
First and Second World Wars. After World War I, harshly punitive economic terms were imposed on defeated Germany. The resulting economic hardship fed social unrest, which helped bring radical fascist regimes to power in Germany and Italy; these regimes assumed dictatorial power and instituted militaristic policies. And in only a couple
of decades, Europe once again plunged into war. Following World
War II, events took a different turn, when the United States
introduced the Marshall Plan to rebuild not only its allies, but also its vanquished enemies.1 The result of this
grand "experiment" was astoundingly positive. In the ensuing
prosperity, nations that had traditionally warred among themselves for
centuries discovered that they had far more to gain through peaceful
cooperation than through military aggression, and there hasn't been a
significant outbreak of international hostility within Western Europe
(or in Japan, which was treated similarly) since. Now, although
it might be a stretch to apply the lessons from this multinational
series of events to personal ethics, it illustrates the general value
of consulting experience and history for evidence to guide human
behavior away from harmfully obsolete traditions into demonstrably
more beneficial channels.
To return to the topic of active research, in cases where the only
potential effects are favorable
or neutral, harm ceases to be a factor in ethics
experimentation. Subjects may ethically be observed in public
without their knowledge, provided that details of any such
observations are kept strictly confidential and the subjects are not
identified without their explicit consent. Still, as in science,
professional experimenters must take care to select samples, conduct tests, collect
evidence, and account for anomalies, in ways compatible with scientific
method if they're to be taken seriously.2
However, people for whom ethics is primarily a
personal matter needn't worry so much about whether their results can
withstand the rigors of scientific method; for them, reliability in
practice is what's important. For their own purposes, rational
adults who are well intentioned and well informed may find it adequate
to use whatever sources are readily available to them, including personal
experience, word of mouth, factual news media, and historical record, as
evidence for or against hypotheses of personal ethics. The
use of such non-systematically gathered evidence is expedient for
day-to-day decision-making, even if it doesn't meet the standards for
scientific investigation. Even a quasi-scientific approach to
ethics, using evidence, reason, and cool-headed reflection, should in
most cases turn out to be more consistently reliable than the odd brew
of Bronze-Age doctrine, tradition, gut emotion, and guesswork that
typically passes for morality and conscience.
1. An ulterior motive of the Marshall
Plan was to lure devastated and disoriented countries into alignment
with the West and away from alliance with Stalinist Russia. But
although this was arguably a case of doing the right thing for the
wrong reason, it provided strong evidence to justify the reevaluation
and replacement of obsolete norms, rather than complacent reliance on
tradition.
2. As far as I'm aware, no peer review
network yet exists for scientific experimentation in ethics.
This would probably have to be developed at the academic level,
perhaps in conjunction with psychological research, since support for
such a project from many leaders of government, private business, and
other institutions where the topic of ethics is routinely viewed with
contempt, might be anemic to non-existent.
To Be Continued
This webpage is a distillation of ideas that I've been
developing and integrating for several years, in my leisure moments. It's a work in progress, and might well
remain so for some time to come. Nevertheless, I think that
even in an incomplete state the work might have value to others. So,
I've decided to publish it, and update it as things progress.
As
always, reader feedback, whether complimentary or critical, is welcome
and earnestly solicited. (While compliments are encouraging, I find criticism
far more useful. It calls my attention to things I've overlooked
or misunderstood, and thus helps me do a better job.)
=SAJ=