Monday, July 25, 2011

Flying at Night

Above us, stars. Beneath us, constellations.
Five billion miles away, a galaxy dies
like a snowflake falling on water. Below us,
some farmer, feeling the chill of that distant death,
snaps on his yard light, drawing his sheds and barn
back into the little system of his care.
All night, the cities, like shimmering novas,
tug with bright streets at lonely lights like his.

Ted Kooser

Solitary effort for the social activity



Solitary effort for the social activity, 2011
wood, acrylic, fabric
47 x 15 x 20 cm

Saturday, July 23, 2011

Le Nouvelliste and the Defense of Religion


Le Nouvelliste, 19 December 1898

Religious journal, 1879-1889, Lyon.

All Hail ... Analog?

On Dec. 30, Dwayne's Photo in Parsons, Kan., stopped processing Kodachrome film and the world passed an important if little-heralded milestone: the end of Kodachrome, a beautifully saturated color transparency film that was immortalized by Paul Simon in his 1973 song ("Mama don't take my Kodachrome away"). Kodak had long since ceased its manufacture and the lab shutdown was yet another stage in the slow death of chemical, film-based photography.

Visual and audio reproduction have undergone massive changes as their underlying technologies shifted from analog to digital over the past two decades. It's clear that it is far more convenient to snap photos with a digital point-and-shoot or listen to music on an iPod. But whether the quality of images or music has improved is a highly debatable proposition, one that is contested by legions of enthusiasts who have continued to cling to older technologies not out of Luddite resistance to change, but because they believe the shift to 1's and 0's is actually making things worse.

Photography and music have been hobbies of mine ever since I was a child when I built Dynakits and had my own darkroom. I was introduced to high-end audio by the political theorist Allan Bloom, who back in the early 1980s had what seemed to me a crazily expensive Linn Sondek turntable and a collection of over 2,000 records. I started collecting historical Nikons when I inherited an F2A from my father, and these days I seem to spend as much time thinking about gear as I do analyzing politics for my day job.

Let's begin with how photography has changed. Ansel Adams's iconic images of the Sierras were taken with an 8-inch-by-10-inch view camera, a wooden contraption with bellows in which the photographer saw his subject upside-down and reversed under a black cloth. Joel Meyerowitz's stunning photographs of Cape Cod were taken with a similar mahogany Deardorff view camera manufactured in the 1930s. These cameras produce negatives that contain up to 100 times the amount of information produced by a contemporary top-of-the-line digital SLR like a Canon EOS 5D or a Nikon D3. View cameras allow photographers to shift and tilt the lens relative to the film plane, which is why they continue to be used by architectural photographers who want to avoid photos of buildings with the converging vertical lines caused by the upward tilt of the lens on a normal camera. And their lenses can be stopped down to f/64 or even f/96, which allows everything to be in crystalline focus from 3 inches away to infinity. (Ansel Adams, Edward Weston and Imogen Cunningham were part of a group called "f/64" in celebration of this characteristic.)
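To put the "up to 100 times" figure in rough numbers, the sketch below assumes an 8x10 negative scanned at about 4,000 pixels per inch and compares it with a 12-megapixel sensor of the Nikon D3 class; the scan resolution is an illustrative assumption, not a figure from the essay.

    # Back-of-envelope pixel count: 8x10-inch sheet film versus a 12-megapixel DSLR.
    # Assumption: ~4,000 pixels per inch is a plausible high-quality scan resolution.
    film_pixels = (8 * 4_000) * (10 * 4_000)   # about 1.28 billion pixels
    dslr_pixels = 12_000_000                   # Nikon D3-class sensor
    print(round(film_pixels / dslr_pixels))    # ~107, in line with the "up to 100 times" claim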

Perhaps the most important feature of these older film cameras was their lack of convenience. They had to be mounted on tripods; it took many minutes to shoot a single frame; and they were hardly inconspicuous. In contrast to contemporary digital photographers who snap a zillion photos of the same subject and hope that one will turn out well composed, view camera photography is a more painterly activity that forces the photographer to slow down and think ahead carefully about subject, light, framing, time of day, and the like. These skills are in short supply among digital photographers.

Older cameras were far better built. A few years ago I was given a Leica M3 once owned by my uncle, who joined the U.S. Army to get out of an internment camp for Japanese-Americans during World War II. He was sent to Germany where he acquired the Leica around the time I was born. This camera, with its f/2 Summicron, a classic, fast, tack-sharp lens, still takes beautiful pictures. How many digital cameras will still be functioning five years from now, much less 50? Where are you going to buy new batteries and the media to store your photos in 2061?

The digital revolution produced even worse consequences for music reproduction. When the digital compact disc was introduced in the early 1980s, the marketing types heralded this as "perfect sound forever." The only problem was that early CDs simply didn't sound good: They were thin, harsh and unpleasant to listen to. It turns out that old-fashioned vinyl records, like photographic film, are actually a pretty good way of storing information. Sound is inherently analog; converting sound waves to grooves on a record does not involve the same loss of information as their conversion to digital data.

If you don't believe this, you've probably never listened to a good-quality record on a high-end turntable. By high-end, we're talking about turntable-tonearm-needle combinations that cost upward of $10,000—that's right, five figures—from manufacturers like VPI, Basis or SME. The highest-priced table currently on the market is the Clearaudio Statement, which retails for $150,000. On such devices, there is hardly any background noise; sound is three-dimensional, detailed and has a liveness that most mass-market CD players or iPods cannot begin to match.

There is no inherent reason why digital music has to sound worse than analog; the problem was all in implementation and standards. About 10 years ago Sony introduced a new digital format, the Super Audio Compact Disc, that could finally hold its own against good quality vinyl records. But SACDs never caught on, and the mass market moved in exactly the opposite direction with the spread of MP3s and iPods. The MP3 is a digital compression technology that throws away a lot of information in order to reduce file size. It's quantity over quality, essentially. Listening to an iPod through a high-end audio system is a painful experience and a big step backward even from the Red Book CD standard by which all musical compact discs have been encoded since the 1980s.
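To make that trade-off concrete, the rough arithmetic below compares the Red Book data rate (44.1 kHz, 16-bit, two channels) with a 128 kbps MP3; the MP3 bit rate is a common value chosen here as an illustrative assumption, not a figure from the essay.

    # Rough data-rate comparison: Red Book CD audio versus a typical MP3 encode.
    # Assumption: 128 kbps is a common MP3 bit rate, not a figure from the essay.
    cd_bits_per_second = 44_100 * 16 * 2        # 44.1 kHz sampling, 16-bit samples, 2 channels
    mp3_bits_per_second = 128_000               # assumed MP3 bit rate
    print(round(cd_bits_per_second / mp3_bits_per_second, 1))  # ~11: the MP3 keeps less than a tenth of the data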


Today, I've made peace with the digital revolution. There is a company called GigaPan that makes a device that pans a digital camera and stitches together large numbers of images to produce monster photographs with far more resolution than a view camera. A couple of years ago I built myself a music server—a computer dedicated to playing music that currently stores more than 300 gigabytes of losslessly compressed music (that is, better than your average MP3s), which it outputs through a high-quality Benchmark digital-to-analog converter. I can flip through five versions of a Mahler symphony with a mouse click, and honestly find it a lot more convenient and often better-sounding than spinning vinyl on my Oracle turntable.

Still—while I can now download high-resolution digital music from a company called HDtracks, the experience doesn't begin to compare with shopping at Serenade Records in Washington, a music emporium owned by a pair of brothers from Turkey who knew everything there was to know about historical recordings of classical music. When they retired and Serenade closed, my quality of life took a nose dive.

Don't believe the marketing hype of the techie types who tell you that newer is always better. Sometimes in technology, as in politics, we regress. This point will be brought home to lots of people when their hard disks crash and they find they've lost all of their photos of baby Tiffany forever. Photos of my children, by contrast, are safely stored in the closet in boxes of Kodachrome slides.

Corrections & Amplifications

Dwayne's Photo in Parsons, Kan., ceased processing Kodachrome film on Dec. 30 but continues to provide other photo-processing services. An earlier version of this essay incorrectly said Dwayne's Photo closed down on Dec. 30.
—Mr. Fukuyama is a senior fellow at the Freeman Spogli Institute for International Studies at Stanford University and author of "The Origins of Political Order: From Prehuman Times to the French Revolution" (forthcoming from Farrar, Straus & Giroux).

By Francis Fukuyama
Source: The Wall Street Journal, Feb. 26, 2011

Sunday, July 17, 2011

After Dark

When the sun goes down
sails and feathers leave
the river. An empty planet
burns in the sky
as lights come on
in the city, and ragged lines
collect before the doors
open, but you don’t arrive
until there’s nothing left
but chrome stars
flashing from the mikes,
the guitars, the drum kit,
and you realize
that you don’t know
anyone, anymore
who has known you all your life.

George Amabile

Work Consume Be Silent Die!


Ana Viviane Minorelli

Source: www.moriapoetry.com

Saturday, July 16, 2011

August Peonies

Lallygagging on bent stems, late
this year because of the snow
in May, their rag-tag magenta
cluster-heads freshen the still heat
like a rush of wind in the leaves
or the cool brush of deep sea
crinolines as the ripple kiss
of a breeze opens their bunched petals
just enough to let them breathe
before they ease back
into light repose, poised
at the edge of time-lapse
attention, like us, who lose
momentum in the heavy air
rich with the scent of ripening
wheat that drifts in from the fields
over the slow-moving river
as the afternoon nods and lengthens
into shade, into thoughtfulness,
and the sky deploys an argosy
of softly tinted clouds, fresh
blooms without stems
that sail where we cannot
go, all the way to the edge
of everything where daylight looks
back, once, then disappears.

George Amabile, 2010.

"And This Button Annihilates the City"


A & H Appliance advertisement, 1965.

We've looked at many advertisements that use the push-button future as a way to position products as cutting edge or innovative. But when the Future is used in this ad from the August 19, 1965, Marion Sentinel, it just seems lazy.

Where is Father looking, and what --oh gawd, WHAT?-- will pushing those buttons do to that poor futuristic city? I think Daughter's been dipping into Mother's little helper, which would explain her crazy eyes, but doesn't explain why almost everyone is looking at a different point in space.
I guess the lesson here is that if you want to see the Future just look up and to your left. And leave your mouth slightly agape.
Oh, and shop at A & H Appliance.

Source: www.paleofuture.com

Monday, July 11, 2011

Dieter Rams: As Little Design as Possible



Pop artist Richard Hamilton once said of the work of Dieter Rams that it occupies “a place in my heart and consciousness that the Mont Sainte-Victoire did in Cézanne’s.” In thirty-five years as chief of design for German manufacturer Braun, Rams personally oversaw the development of more than five hundred products—primarily consumer electronics—that came to define the interior landscape of the late twentieth century. The black, stacked stereo console; the modular shelving system afloat on its slotted track; the unassuming electric razor (below) perfect to the hand: These were Rams’s gifts to modernity, imitated the world over, and by now so integral to our day-to-day lives that we scarcely notice them. Which effect was, as Phaidon’s new monograph reminds us, Rams’s very objective. As longtime company headman Erwin Braun was fond of saying, the firm’s appliances “should be humble servants, to be seen and heard as little as possible . . . like a valet in the old days.” It’s a quote that Rams was fond of citing, and Sophie Lovell, editor of Wallpaper*’s German edition, writes here that it’s an ethic embodied in the man himself: Rams’s “life, his thinking, and even his appearance” have been refined, like his designs, to an elegant quintessence.

It’s not easy to coax such a self-effacing personage out of the corner. Early in Dieter Rams, its subject asks, “Why on earth do we need another book about me?” Lovell makes a good case in favor. Tracing Rams’s development, beginning with his early architectural training in the postwar neo-Bauhaus milieu, through his call to the as-yet-unidentified trade of industrial designer, Lovell lays out the complexities and collaborations that have yielded the designer’s most successful gizmos and fixtures. She gets an assist from Apple’s Jonathan Ive, whose iconic iPod is a shameless plagiarism of Braun’s T3 radio, and from scholar Klaus Klemp, who attempts to insert Rams, against the academic grain, back into the modernist canon.

Rams’s unobtrusive excellence deserves to be celebrated, even if it blushes in the spotlight, especially as technology becomes more and more the armature of our lives. His true métier was never really functionalism, nor was it even interface (though that certainly will be his prime legacy to the digital age). Rather, the genius of Rams is located at the moment we stop using the product, step back from it. At that instant, his devices quietly withdraw from us, and we become once again integral to ourselves, free and serene.

Text by Ian Volner
Source: Bookforum, Summer 2011.

The Building that went to the Mountain


Andreas Angelidakis, The Building that went to the Mountain, 2011

On Tuesday, July 12th, XYZ Outlet will hold the talk The Building that went to the Mountain (Demo of a study for a dying building), about the Casino Mont Parnes (located on Mount Parnitha, 30 km north of Athens), based on an idea by, and organised by, the architect Andreas Angelidakis. The talk will include the artist Kostis Velonis and the architects Agapi Proimou, Memos Phillipides, Giorgos Tzirtzilakis and Aristide Antonas.

XYZ Outlet #15: Andreas Angelidakis // The Building that went to the Mountain (Demo of a study for a dying building) // Tuesday 12 July // 20:00

Thursday, July 7, 2011

Misery and Splendor

Summoned by conscious recollection, she
would be smiling, they might be in a kitchen talking,
before or after dinner. But they are in this other room,
the window has many small panes, and they are on a couch
embracing. He holds her as tightly
as he can, she buries herself in his body.
Morning, maybe it is evening, light
is flowing through the room. Outside,
the day is slowly succeeded by night,
succeeded by day. The process wobbles wildly
and accelerates: weeks, months, years. The light in the room
does not change, so it is plain what is happening.
They are trying to become one creature,
and something will not have it. They are tender
with each other, afraid
their brief, sharp cries will reconcile them to the moment
when they fall away again. So they rub against each other,
their mouths dry, then wet, then dry.
They feel themselves at the center of a powerful
and baffled will. They feel
they are an almost animal
washed up on the shore of a world--
or huddled up against the gate of a garden--
to which they can't admit they can never be admitted.

Robert Hass
from Human Wishes, 1989.

All Power to the Free Universities of the Future



Copenhagen Free University
ALL POWER TO THE FREE UNIVERSITIES OF THE FUTURE

The Copenhagen Free University was an attempt to reinvigorate the emancipatory aspect of research and learning, in the midst of an ongoing economization of all knowledge production in society. Seeing how education and research were being subsumed into an industry structured by a corporate way of thinking, we intended to bring the idea of the university back to life. By life, we mean the messy life people live within the contradictions of capitalism. We wanted to reconnect knowledge production, learning and skill sharing to the everyday within a self-organized institutional framework of a free university. Our intention was multi-layered and was of course partly utopian, but also practical and experimental. We turned our flat in Copenhagen into a university by the very simple act of declaring 'this is a university.' By this transformative speech act the domestic setting of our flat became a university. It didn't take any alterations to the architecture other than the small things needed in terms of having people in your home staying over, presenting thoughts, researching archival material, screening films, presenting documents and works of art. Our home became a public institution dedicated to the production process of communal knowledge and fluctuating desires.

The ethos of the CFU was critical and opinionated about the ideological nature of knowledge, which meant that we did not try to cover the institution in a cloud of dispassionate neutrality and transcendence as universities traditionally do. The Copenhagen Free University became a site of socialized and politicized research, developing knowledge and debate around certain fields of social practice. During its six years of existence, the CFU entered into five fields of research: feminist organization, art and economy, escape subjectivity, television/media activism and art history. The projects were initiated with the experience of the normative nature of mainstream knowledge production and research, allowing us to see how certain areas of critical practice were being excluded. Since we didn't want to replicate the structure of the formal universities, the way we developed the research was based on open calls to people who took an interest in our fields or in our perspective on knowledge production. Slowly the research projects were collectively constructed through the display of material, presentations, meetings, and spending time together. The nature of the process was sharing and mutual empowerment, not focusing on a final product or paper, but rather on the process of communization and redistribution of facts and feelings. Parallel to the development of the CFU, we started to see self-organized universities sprouting up everywhere. Over this time, the basic question we were constantly asking ourselves was, what kind of university do we need in relation to our everyday? This question could only be answered in the concrete material conditions of our lives. The multiplicity of self-organized universities that were starting in various places, and which took all kinds of structures and directions, reflected the diversity of these material conditions. This showed that the neoliberal university model was only one model among many, the only one given as a model to the students of capital.

As the strategy of self-institution focused on taking power and not accepting the dualism between the mainstream and the alternative, this in itself carried some contradictions. The CFU had, for us, become too fixed an identifier of a certain discourse relating to emancipatory education within academia and the art scene. Thus we decided to shut down the CFU in the winter of 2007 as a way of withdrawing the CFU from the landscape. We did this with the statement 'We Have Won' and shut the door of the CFU just before the New Year. During the six years of the CFU's existence, the knowledge economy had rapidly, and aggressively, become the norm around us in Copenhagen and in northern Europe. The rise of social networking, lifestyle and intellectual property as engines of valorization meant that the knowledge economy was expanding into the tiniest pores of our lives and social relations. The state had turned to a wholesale privatization of former public educational institutions, converting them into mines of raw material for industry in the shape of ideas, desires and human beings. But this normalizing process was somehow not powerful enough to silence all forms of critique and dissent; other measures were required.

In December 2010 we received a formal letter from the Ministry of Science, Technology and Innovation telling us that a new law had been passed in parliament that outlawed the existence of the Copenhagen Free University together with all other self-organized and free universities. The letter stated that they were fully aware of the fact that we do not exist any more, but just to make sure they wished to notify us that "In case the Copenhagen Free University should resume its educational activities it would be included under the prohibition in the university law §33". In 2010 the university law in Denmark was changed, and the term 'university' could only be used by institutions authorized by the state. We were told that this was to protect 'the students from being disappointed.' As we know numerous people who are disappointed by the structural changes to the educational sector in recent years, we have decided to contest this new clampdown by opening a new free university in Copenhagen. This forms part of our insistence that the emancipatory perspective of education should still be on the map. We demand the law be scrapped or altered, allowing self-organized and free universities to be a part of a critical debate around the production of knowledge now and in the society of the future.

We call for everybody to establish their own free universities in their homes or in the workplace, in the square or in the wilderness. All power to the free universities of the future.

—The Free U Resistance Committee of June 18, 2011.

Source: www.edu-factory.org

Tuesday, July 5, 2011

Seated Brancusi



Seated Brancusi, 2011
143 x 50 x 34 cm
Wood, acrylic

What Is a Good Life?

Morality and Happiness

Plato and Aristotle treated morality as a genre of interpretation. They tried to show the true character of each of the main moral and political virtues (such as honor, civic responsibility, and justice), first by relating each to the others, and then to the broad ethical ideals their translators summarize as personal “happiness.” Here I use the terms “ethical” and “moral” in what might seem a special way. Moral standards prescribe how we ought to treat others; ethical standards, how we ought to live ourselves. The happiness that Plato and Aristotle evoked was to be achieved by living ethically; and this meant living according to independent moral principles.

We can—many people do—use either “ethical” or “moral” or both in a broader sense that erases this distinction, so that morality includes what I call ethics, and vice versa. But we would then have to recognize the distinction I draw in some other form in order to ask whether our ethical desire to lead good lives for ourselves provides a justifying moral reason for our concern with what we owe to others. Any of these different forms of expression would allow us to pursue the interesting idea that moral principles should be interpreted so that being moral makes us happy in the sense Plato and Aristotle meant.

In my book Justice for Hedgehogs—from which this essay is adapted—I try to pursue that interpretive project. We aim to find some ethical standard—some conception of what it is to live well—that will guide us in our interpretation of moral concepts. But there is an apparent obstacle. This strategy seems to suppose that we should understand our moral responsibilities in whatever way is best for us, but that goal seems contrary to the spirit of morality, because morality should not depend on any benefit that being moral might bring. We might try to meet this objection through a familiar philosophical distinction: we might distinguish between the content of moral principles, which must be categorical, and the justification of those principles, which might consistently appeal to the long-term interests of people bound by those principles.


We might argue, for example, that it is in everyone’s long-term interests to accept a principle that forbids lying even in circumstances when lying would be in the liar’s immediate interests. Everyone benefits when people accept a self-denial of that kind rather than each person lying when that is in his immediate interest. However, this maneuver seems unsatisfactory, because we do not believe that our reasons for being moral depend on even our long-term interests. We are, most of us, drawn to the more austere view that the justification and definition of moral principle should both be independent of our interests, even in the long term. Virtue should be its own reward; we need assume no other benefit in doing our duty.

But that austere view would set a severe limit to how far we could press an interpretive account of morality: it would permit the first stage I distinguished in Plato’s and Aristotle’s arguments, but not the second. We could seek integration of the ethical and moral within our distinctly moral convictions. We could list the concrete moral duties, responsibilities, and virtues we recognize and then try to bring these convictions into interpretive order—into a mutually reinforcing network of ideas defining our moral responsibilities. Perhaps we could find very general moral principles, like the utilitarian principle, that justify and are in turn justified by these concrete requirements and ideals. Or we could proceed in the other direction: setting out very general moral principles that we find appealing, and then seeing whether we can match these with the concrete convictions—and actions—we find we can approve. But we could not set the entire interpretive construction into any larger web of value; we could not justify or test our moral convictions by asking how well these serve other, different purposes or ambitions that people including ourselves might or should have.

That would be disappointing, because we need to find authenticity as well as integrity in our morality, and authenticity requires that we break out of distinctly moral considerations to ask what form of moral integrity fits best with the ethical decision about how we want to conceive our personality and our life. The austere view blocks that question. Of course it is unlikely that we will ever achieve a full integration of our moral, political, and ethical values that feels authentic and right. That is why living responsibly is a continuing project and never a completed task. But the wider the network of ideas we can explore, the further we can push that project.

The austere view that virtue should be its own reward is disappointing in another way. Philosophers ask why people should be moral. If we accept the austere view, then we can only answer: because morality requires this. That is not an obviously illegitimate answer. The web of justification is always finally, at its limits, circular, and it is not viciously circular to say that morality provides its own only justification, that we must be moral simply because that is what morality demands. But it is nevertheless sad to be forced to say this. Philosophers have pressed the question “why be moral?” because it seems odd to think that morality, which is often burdensome, has the force it does in our lives just because it is there, like an arduous and unpleasant mountain we must constantly climb but that we might hope wasn’t there or would somehow crumble. We want to think that morality connects with human purposes and ambitions in some less negative way, that it is not all constraint, with no positive value.

I therefore propose a different understanding of the irresistible thought that morality is categorical. We cannot justify a moral principle just by showing that following that principle would promote someone’s or everyone’s desires in either the short or the long term. The fact of desire—even enlightened desire, even a universal desire supposedly embedded in human nature—cannot justify a moral duty. So understood, our sense that morality need not serve our interests is only another application of Hume’s principle that no amount of empirical discovery about the state of the world can establish conclusions about moral obligation. My understanding of a proposal for combining ethics and morality does not rule out tying them together in the way Plato and Aristotle did, and in the way our own project proposes, because that project takes ethics to be, not a matter of psychological fact about what people happen to or even inevitably want or take to be in their own interest, but itself a matter of ideal.

We need, then, a statement of what we should take our personal goals to be that fits with and justifies our sense of what obligations, duties, and responsibilities we have to others. We look for a conception of living well that can guide our interpretation of moral concepts. But we want, as part of the same project, a conception of morality that can guide our interpretation of living well.

True, people confronted with other people’s suffering do not normally ask whether helping those people will create a more ideal life for themselves. They may be moved by the suffering itself or by a sense of duty. Philosophers debate whether this makes a difference. Should people help a child because the child needs help or because it is their duty to help? In fact both motives might well be in play along with hosts of others that a sophisticated psychological analysis might reveal, and it might be difficult or impossible to say which dominates on any particular occasion.

Nothing important, I believe, turns on the answer: doing what you take to be your duty because it is your duty is hardly disreputable. Nor is it culpably self-regarding to worry about the impact of behaving badly on the character of one’s life; it is not narcissistic to think, as people often say, “I couldn’t live with myself if I did that.” In any case, however, these questions of psychology and character are not relevant to the question that I am posing here. Our question is the different one of whether, when we try to fix, criticize, and ground our own moral responsibilities, we can sensibly assume that our ideas about what morality requires and about the best human ambitions for ourselves should reinforce each other.

Hobbes and Hume can each be read as claiming not just a psychological but an ethical basis for familiar moral principles. Hobbes’s putative ethics—that self-interest and therefore survival are the greatest good—is unsatisfactory. At least for most of us, just achieving survival through a morality of self-interest is not a sufficient condition of living well. Hume’s sensibilities, translated into an ethics, are much more agreeable, but experience teaches us that even people who are sensitive to the needs of others cannot resolve moral, or ethical, issues—as Hume’s theory might suggest—simply by asking themselves what they are naturally inclined to feel or do. Nor does it help much to expand Hume’s ethics into a general utilitarian principle. The idea that each of us should treat his own interests as no more important than those of anyone else has seemed an attractive basis for morality to many philosophers. But as I shall shortly argue, it can hardly serve as a strategy for living well oneself.

Religion can provide a justifying ethics for people who are religious in the right way; we have ample illustration of this in the familiar moralizing interpretations of sacred texts. Such people understand living well to mean respecting or pleasing a god, and they can interpret their moral responsibilities by asking which view of those responsibilities would best respect or most please that god. But that structure of thought could be helpful, as a guide to integrating ethics and morality, only for people who treat a sacred text as an explicit and detailed moral rule book. People who think only that their god has commanded love for and charity to others, as I believe many religious people do, cannot find, just in that command, any specific answers to what morality requires. In any case, I shall not rely on the idea of any divine book of detailed moral instruction here.

The Good Life and Living Well

If we reject Hobbesian and Humean views of ethics and are not tempted by religious ones, yet still propose to unite morality and ethics, we must find some other account of what living well means. As I said, it cannot mean simply having whatever one in fact wants: having a good life is a matter of our interests when they are viewed critically—the interests we should have. It is therefore a matter of judgment and controversy to determine what a good life is. But is it plausible to suppose that being moral is the best way to make one’s own life a good one? It is wildly implausible if we hold to popular conceptions of what morality requires and what makes a life good. Morality may require someone to pass up a job in cigarette advertising that would rescue him from poverty. In most people’s view he would lead a better life if he took the job and prospered.

Of course an interpretive account would not be limited by such conventional understandings. We might be able to construct a conception of a good life such that an immoral or base act would always, or almost always, make the agent’s life finally a worse life to lead. But I suspect that any such attempt would fail. Any attractive conception of our moral responsibilities would sometimes demand great sacrifices—it might require us to risk, or perhaps even to sacrifice, our lives. It is hard to believe that someone who has suffered such terrible misfortunes has had a better life than he would have had if he had acted immorally and then prospered in every way, creatively, emotionally, and materially, in a long and peaceful life.

We can, however, pursue a somewhat different, and I believe more promising, idea. This requires a distinction within ethics that is familiar in morals: a distinction between duty and consequence, between the right and the good. We should distinguish between living well and having a good life. These two different achievements are connected and distinguished in this way: living well means striving to create a good life, but only subject to certain constraints essential to human dignity. These two concepts, of living well and of having a good life, are interpretive concepts. Our ethical responsibility includes trying to find appropriate conceptions of both of them.

Each of these fundamental ethical ideals needs the other. We cannot explain the importance of a good life except by noticing how creating a good life contributes to living well. We are self-conscious animals who have drives, instincts, tastes, and preferences. There is no mystery why we should want to satisfy those drives and serve those tastes. But it can seem mysterious why we should want a life that is good in a more critical sense: a life we can take pride in having lived when the drives are slaked or even if they are not. We can explain this ambition only when we recognize that we have a responsibility to live well and believe that living well means creating a life that is not simply pleasurable but good in that critical way.

You might ask: responsibility to whom? It is misleading to answer: responsibility to ourselves. People to whom responsibilities are owed can normally release those who are responsible, but we cannot release ourselves from our responsibility to live well. We must instead acknowledge an idea that I believe we almost all accept in the way we live but that is rarely explicitly formulated or acknowledged. We are charged to live well by the bare fact of our existence as self-conscious creatures with lives to lead. We are charged in the way we are charged by the value of anything entrusted to our care. It is important that we live well; not important just to us or to anyone else, but just important.

We have a responsibility to live well, and the importance of living well accounts for the value of having a critically good life. These are no doubt controversial ethical judgments. I also make controversial ethical judgments in any view I take about which lives are good or well-lived. In my own view, someone who leads a boring, conventional life without close friendships or challenges or achievements, marking time to his grave, has not had a good life, even if he thinks he has and even if he has thoroughly enjoyed the life he has had. If you agree, we cannot explain why he should regret this simply by calling attention to pleasures missed: there may have been no pleasures missed, and in any case there is nothing to miss now. We must suppose that he has failed at something: failed in his responsibilities for living.

What kind of value can living well have? The analogy between art and life has often been drawn and as often ridiculed. We should live our lives, the Romantics said, as a work of art.1 We distrust the analogy now because it sounds too Wildean, as if the qualities we value in a painting—fine sensibility or a complex formal organization or a subtle interpretation of art’s own history—were the values we should seek in life: the values of the aesthete. These may be poor values to seek in the way we live. But to condemn the analogy for that reason misses its point, which lies in the relation between the value of what is created and the value of the acts of creating it.

We value great art most fundamentally not because the art as product enhances our lives but because it embodies a performance, a rising to artistic challenge. We value human lives well lived not for the completed narrative, as if fiction would do as well, but because they too embody a performance: a rising to the challenge of having a life to lead. The final value of our lives is adverbial, not adjectival—a matter of how we actually lived, not of a label applied to the final result. It is the value of the performance, not anything that is left when the performance is subtracted. It is the value of a brilliant dance or dive when the memories have faded and the ripples died away.

We need another distinction. Something’s “product value” is the value it has just as an object, independently of the process through which it was created or of any other feature of its history. A painting may have product value, and this may be subjective or objective. Its formal arrangements may be beautiful, which gives it objective value, and it may give pleasure to viewers and be prized by collectors, which properties give it subjective value. A perfect mechanical replica of that painting has the same beauty. Whether it has the same subjective value depends largely on whether it is known to be a replica: it has as great subjective value as the original for those who think that it is the original. The original has a kind of objective value that the replica cannot have, however: it has the value of having been manufactured through a creative act that has performance value. It was created by an artist intending to create art. The object—the work of art—is wonderful because it is the upshot of a wonderful performance; it would not be as wonderful if it were a mechanical replica or if it had been created by some freakish accident.

It was once popular to laugh at abstract art by supposing that it could have been painted by a chimpanzee, and people once speculated whether one of billions of apes typing randomly might produce King Lear. If a chimpanzee by accident painted Blue Poles or typed the words of King Lear in the right order, these products would no doubt have very great subjective value. Many people would be desperate to own or anxious to see them. But they would have no value as performance at all. Performance value may exist independently of any object with which that performance value has been fused. There is no product value left when a great painting has been destroyed, but the fact of its creation remains and retains its full performance value. Uccello’s achievements are no less valuable because his paintings were gravely damaged in the Florence flood; Leonardo’s Last Supper might have perished, but the wonder of its creation would not have been diminished. A musical performance or a ballet may have enormous objective value, but if it has not been recorded or filmed, its product value immediately diminishes. Some performances—improvisational theater and unrecorded jazz concerts—find value in their ephemeral singularity: they will never be repeated.

We may count a life’s positive impact—the way the world itself is better because that life was lived—as its product value. Aristotle thought that a good life is one spent in contemplation, exercising reason, and acquiring knowledge; Plato that the good life is a harmonious life achieved through order and balance. Neither of these ancient ideas requires that a wonderful life have any impact at all. Most people’s opinions, so far as these are self-conscious and articulate, ignore impact in the same way. Many of them think that a life devoted to the love of a god or gods is the finest life to lead, and a great many including many who do not share that opinion think the same of a life lived in inherited traditions and steeped in the satisfactions of conviviality, friendship, and family. All these lives have, for most people who want them, subjective value: they bring satisfaction. But so far as we think them objectively good—so far as it would make sense to want to find satisfaction in such lives—it is the performance rather than the product value of living that way that counts.

Philosophers used to speculate about what they called the meaning of life. (That is now the job of mystics and comedians.) It is difficult to find enough product value in most people’s lives to suppose that they have meaning through their impact. Yes, but if it were not for some lives, penicillin would not have been discovered so soon and King Lear would never have been written. Still, if we measure a life’s value by its consequence, all but a few lives would have no value, and the great value of some other lives—of a carpenter who pounded nails into a playhouse on the Thames—would be only accidental. On any plausible view of what is truly wonderful in almost any human life, impact hardly comes into the story at all.

If we want to make sense of a life having meaning, we must take up the Romantics’ analogy. We find it natural to say that an artist gives meaning to his raw materials and that a pianist gives fresh meaning to what he plays. We can think of living well as giving meaning—ethical meaning, if we want a name—to a life. That is the only kind of meaning in life that can stand up to the fact and fear of death. Does all that strike you as silly? Just sentimental? When you do something smaller well—play a tune or a part or a hand, throw a curve or a compliment, make a chair or a sonnet or love—your satisfaction is complete in itself. Those are achievements within life. Why can’t a life also be an achievement complete in itself, with its own value in the art in living it displays?

One qualification. I said that living well includes striving for a good life, but that is not necessarily a matter of minimizing the chances of a bad one. In fact many traits of character we value are not best calculated to produce what we independently judge to be the best available life. We value spontaneity, style, authenticity, and daring: setting oneself difficult or even impossible projects. We might be tempted to collapse the two ideas by saying that developing and exercising these traits and virtues are part of what makes a life good.


Georges Seurat: Study for ‘A Sunday on La Grande Jatte,’ 1884

But that seems too reductive. If we know that someone now in poverty courted that poverty by choosing an ambitious but risky career, we may well think that he was right to run that risk. He may have done a better job of living by striving for an unlikely but magnificent success. An artist who could be comfortably admired and prosperous—Seurat, if a name helps—strikes out in an entirely new direction that will isolate and impoverish him, requires immersion in his work to the cost of his marriage and friendships, and may well not succeed even artistically. If it does succeed, moreover, the success is unlikely to be recognized, as in Seurat’s case, until after his death. We may want to say: if he pulls it off, he will have had a better life, even taking account of the terrible costs, than if he had not tried, because even an unrecognized great achievement makes a life a good one.

But suppose it doesn’t come off; what he produces, though novel, is of less merit than the more conventional work he would otherwise have painted. We might think, if we value daring very highly as a virtue, that even in retrospect he made the right choice. It didn’t work out, and his life was worse than if he had never tried. But he was right, all things ethically considered, to try. This is, I agree, an outré example: starving geniuses make good philosophical copy, but they are not thick on the ground. We can replicate the example in a hundred more commonplace ways, however—entrepreneurs pursuing risky but dramatic inventions, for instance, or skiers pressing the envelope of danger. But whether or not we are ourselves drawn to think that living well sometimes means choosing what is likely to be a worse life, we must recognize the possibility that it does. Living well is not the same as maximizing the chance of producing the best possible life. The complexity of ethics matches the complexity of morality.

1 Oscar Wilde, for example: "It is through Art, and through Art only, that we can realize our perfection; through Art, and through Art only, that we can shield ourselves from the sordid perils of actual existence." And: "All that I desire to point out is the general principle that Life imitates Art far more than Art imitates Life." John Keats: "A man's life of any worth is a continual allegory." Friedrich Nietzsche: "Art represents the highest task and the truly metaphysical activity of this life."

Text by Ronald Dworkin
The New York Review of Books, 10 Feb 2011
Source: www.nybooks.com

Metonymy as an Approach to a Real World

Whether what we sense of this world
is the what of this world only, or the what
of which of several possible worlds
--which what?--something of what we sense
may be true, may be the world, what it is, what we sense.
For the rest, a truce is possible, the tolerance
of travelers, eating foreign foods, trying words
that twist the tongue, to feel that time and place,
not thinking that this is the real world.

Conceded, that all the clocks tell local time;
conceded, that "here" is anywhere we bound
and fill a space; conceded, we make a world:
is something caught there, contained there,
something real, something which we can sense?
Once in a city blocked and filled, I saw
the light lie in the deep chasm of a street,
palpable and blue, as though it had drifted in
from say, the sea, a purity of space.

William Bronk