As a writer, László Krasznahorkai has been compared, aptly I think, to Kafka, Gogol, Bulgakov and Beckett. To that list I might add Maxim Gorky, whose play, The Lower Depths, also paints an unflattering portrait of the lower classes. The novel is set in rural Hungary on a failed communal "estate" during the communist period. It begins in late October, as the farm laborers are being paid their shares from the proceeds of their summer work and the inhabitants are settling in for the winter.
Most of the characters are leading abject lives and would rather live elsewhere, but they have nowhere to go. They entertain themselves by spying on their neighbors, and most of the men lust after Mrs. Schmidt, who sometimes accommodates them. During the winter months the residents have nothing to do except drink. In the worst family, the Horgoses, the two elder daughters are prostitutes, the young son is an unsupervised juvenile delinquent, and the young daughter, Esti, is learning disabled. In the saddest chapter of the novel, which I found difficult to read, Esti, who is mistreated by her mother and brother after being rejected by the special school to which she had been sent, kills herself with rat poison. The setting is utterly depressing, with continuous rain, mud everywhere and constant bickering. All of the buildings are falling apart, with windows and doors that don't close; some rooms have weeds growing in them and others are full of spiders.
The novel revolves around the return of Irimiás and his assistant, Petrina. Irimiás had been credited with bringing success to the estate a few years earlier but had inexplicably disappeared along with Petrina, and both were thought dead. When word gets out that the two were seen on the road heading toward the estate, the inhabitants become ecstatic and party all night – hence the tango – hoping that Irimiás will set things straight again. Irimiás agrees to help them by employing them at a new project a few miles away, but shortly after they arrive there they learn that the deal, if there had really been one, has fallen through, and that he has arranged to break them up and send them off to new positions that he's found for them at various other locations. They become part of a mysterious plan that he is presenting to the authorities, perhaps to form some sort of spy network under his direction. One of the most humorous sections in the novel describes the attempt by bureaucrats to rephrase Irimiás's candid, colorful descriptions of the people in order to create a proper document for the project, whatever it is.
There are structurally interesting aspects to the novel. In places, the same scene is recounted several times, each from the point of view of a different character. I didn't notice it initially, but the chapter numbers go from one to six and then backwards from six to one, rather than from one through twelve. Standing above the fray for the first few chapters is a reclusive, obese and dissolute doctor who observes the other inhabitants from a window in his run-down house and takes copious notes. At the end of the novel, when most of his neighbors have moved away, he decides to write a novel instead, which, of course, is Satantango. Thus, the novel turns back on itself, like a serpent eating its tail or the hands drawing themselves in the M.C. Escher lithograph.
The main tension of the novel has to do with whether Irimiás represents good or evil: is he Satan? Certainly he has more intelligence and cunning than the others, yet he is also highly secretive, and one can never know his true motives. Some of the characters are religious or superstitious, and events occur, such as the unexplained sound of a ringing bell, that they interpret as signs or omens. However, in the course of the novel, Irimiás plainly states that he is an atheist, and any hints of the supernatural are explained by natural phenomena. If anything, Irimiás is an offhand force for good who faces obstacles and limitations of his own that impede his ability to meet his goals: he too leads a troubled life, but is somewhat more competent in his execution of it than the others.
What interested me about the novel, aside from its successful artistry in both language and form, is its perceptiveness about human nature. It reminded me of the printing plant where I once worked in Dixon, Illinois. Some years before I arrived there it had been a failing company that was poorly managed by the son of the local newspaper scion. In order to save money on heating they used to turn off the heat on weekends, and then on Monday mornings during the winter it would take several hours to start up the presses because the ink was all solid. The plant had been built on the site of the town's former circus grounds. Eventually the facility was sold to a Chicago-area businessman who hired a new president to run the operation. That person, Larry Dussair, was the most effective executive I ever met; like Irimiás, he was excellent at manipulating people, and you never knew what he really thought. He even had his office set up such that his desk and chair were elevated, so when you spoke to him he looked down on you. One of his interview techniques was to suddenly stop talking without explanation and then watch you squirm in silence while he stared intently at you. In hindsight, Larry just knew how to get things done, and he had developed an arsenal of techniques to that end; sometimes he seemed above the law and sinister, but in the end he was just an ordinary but talented person working very hard to succeed at his job. I mention Larry because he had the same effect on some of the local people in Dixon that Irimiás had on the people on the estate. They attributed an almost supernatural power to him, and I recall one employee telling me that someone she knew who visited the plant had sensed a strong power there. Thus, superstition isn't restricted to the likes of rural Hungary.
Although, as I said, in some respects the book is highly depressing, it is still rather refreshing to me, given that literary fiction in this part of the world tends to be contrived in order to fit current literary fashions – such as those favoring identity politics. For example, rather than depicting Mrs. Horgos as stupid, lazy and irresponsible, as Krasznahorkai does, here she would have to be described as an innocent victim of prejudice and discrimination whose cultural values have been unjustly denigrated. In the U.S., calling anyone stupid, lazy and irresponsible is almost tantamount to a hate crime. Ideally, I prefer a balanced portrait of people that shows all sides of their personalities – which are there whether or not writers choose to acknowledge them.
As to whether or not I recommend the novel, it depends on who you are and what kind of fiction you like to read. If you are unfamiliar with the authors mentioned in the first paragraph and read mainly for light entertainment, you may as well skip it. However, if you have a serious interest in literature as an art form, Satantango is de rigueur; I think that it may well be one of the most important novels of the twentieth century and that Krasznahorkai may deservedly win a Nobel one day. Although my natural preference is for novels about educated and intelligent people, Satantango is a fine work of art, and I'm glad to have read it.
Sunday, February 28, 2016
Sunday, February 21, 2016
Diary
We finally got a small dose of winter, with a temperature of minus eighteen, but that was short-lived, and we're back to warm temperatures and no snow, with the exception of the mountain elevations, where it's slightly colder (these aren't exactly the Himalayas). Winters are bad for stargazing here, and this one has been no exception, but Jupiter is back and the Great Red Spot, the storm more than twice the size of Earth, is said to be more distinct this year; I'll be viewing it whenever the skies clear. I've started to read Satantango and am enjoying it so far, but will withhold comment until I've finished, because, unlike The Mandarins, it is a story with twists and turns that reads like a true novel as it unfolds; the meaning or lack thereof may not emerge until the end, whereas in more realistic fiction you always have some sense of where you are at each point in the narrative. For now I'll just say that the quality of the prose is quite high – higher than that of any of the other living writers I've mentioned.
I seem to be benefiting from my Internet detox program. There is a hysterical addictive quality to the Internet that younger people may be unable to escape, which explains why they fall victim to the pathologies described by Sherry Turkle. One of the advantages of old age is that it provides awareness of models of cognitive self-preservation that younger people may never have been exposed to. With a little self-discipline I am now able to avoid sites that might just draw me in, waste my time and finally irritate me without providing any noticeable benefits. I do feel as if I'm missing out on a few things, but have enough experience and confidence to know that this is the saner course. My switch to reading printed materials isn't working perfectly, though it is an improvement. The advantage of deriving more from reading books and magazines is partially offset by the fact that some of them are always deficient in content. Of the things I currently read, Archaeology is a bit light and the stargazing magazines are too devoted to advertising, though still useful for astronomical news. Nautilus has better-written and more informative science articles than most – and little advertising. For some reason I'm still receiving Nature, and I don't mind that much as long as I'm not paying the $199 per year subscription fee. The literary magazines, Boston Review and Bookforum, are so alien to me that I'm going to let my subscriptions drop. Somehow I have developed a gag reaction to American literary culture and it is best to avoid it entirely in all forms; I am able to find suitable literary reading beyond the Anglo-American world, but only with great effort.
It seems to me that contemporary culture seriously undervalues the benefits of experience, particularly the accumulated experience of thoughtful people, who always seem to be in a minority. I have been thinking about my father's experience, and how I am now fifteen years older than he was when he died. When I see him in photographs taken not long before he died he looks young to me, and I can't help but think that I know more than he did. The transition from dependence on parents to independence and the reaching of a mature understanding of the world is, I think, one of the major processes of human life. I have little sympathy for adult children who ceaselessly blame their parents for deficiencies in their upbringing, because everyone is dealt a hand that is bound to be problematic in one way or another, and few of us get optimal guidance and preparation before setting off.
Tuesday, February 16, 2016
Convenient Fictions
With the death of Antonin Scalia, I return to my long meditation on how some highly intelligent people are not immune to errors in their thinking when held to simple empirical standards. I'm not referring to constitutional law, in which I have no expertise, but rather to the disappointing similarity of Scalia, who was a devout Roman Catholic, to Marilynne Robinson and W.H. Auden, among others, with respect to his religious faith. Though, strictly speaking, agnosticism is the appropriate position on God given that there is no empirical evidence for the existence of such a being and that there remains the possibility that God exists but is undetected or undetectable, my view is that atheism is the more appropriate position to adopt in connection with the Abrahamic religions, since, besides having no empirical basis, there are compelling anthropological explanations for them that do not require the existence of an actual divine being.
I think that religions are precipitated by our social and psychological needs, and that these are the primary reasons why we have them. When I studied Greek mythology in college, I was surprised to find it more interesting than Christianity while serving the same functions. In the absence of knowledge about lightning, why not say that Zeus is angry about something or other and hurled a lightning bolt? The Greek gods were understandable because they resembled humans, had similar emotional baggage and additionally had the ability to help or hinder us behind the scenes: they played favorites. The Greek gods were everywhere, making daily living an exercise in magical realism, while Christianity is comparatively boring. You are supposed to believe some guy who may or may not have lived two thousand years ago and supposedly said that there is only one God, with whom he happened to have a connection. Then, six hundred years later, along comes Muhammad, who also claimed to have an inside track to God, whom he called Allah. The Christians didn't recognize Muhammad or Allah, the Muslims didn't recognize Jesus Christ or God, and war ensued intermittently for a thousand years. That was only the beginning of the confusion, because before long each religion began to splinter into different sects. Christianity divided into Eastern and Western branches, and then along came the Reformation. Islam split into Sunni and Shiite branches. Both religions continue to generate new sects, of which al-Qaeda and ISIL are among the more recent. Ideologically, the links between the original tenets and the latest iterations become increasingly tenuous.
There are many good reasons to adopt a religion, but adopting one makes the most sense when one has grown up with it and it permeates one's culture. At a minimum, being, for example, Christian, may permit you to set aside a host of questions and doubts, enabling you to focus on other aspects of your life that more directly influence your well-being. Instead of wasting years trying to understand the universe, which you would probably never succeed at anyway, you can file away that topic under "God" and, for example, go to law school and become a Supreme Court justice. If you are Antonin Scalia, your family will be very proud of you; you will also make a lot of money and have nine children, fulfilling Darwin's prophecy, if not God's. For someone like Scalia, there may be no penalty for adopting a religion, but negative consequences may be created for others. For example, Scalia seems to have believed in a version of American exceptionalism that contains traces of the idea that the U.S. is favored by God, a notion that I consider ludicrous. It is plausible that Scalia was an originalist because he considered the Constitution a divine document. If Scalia had taken a different path and chosen to become, say, a nihilist, he may well have died young, friendless and childless, and that is why I think adopting a popular religion usually has evolutionary benefits.
For me, people like Scalia are due credit for their contributions to society, but if you're going to be a purist about their ideas you can't ignore the fact that they fall seriously short. They do well in their lives by conforming to a fiction that has no basis in reality. On broad questions about the universe, I think it is more honest to say that you don't know, and that mankind may never know. My view is that we are all finite creatures whose brains were not evolved to answer such questions, and try as we might we are bound to fail. That may apply equally to science; even if science eventually explains all of the workings of the universe to our satisfaction, it will still be a limited enterprise. In the best case scenario science may provide an accurate, mathematically sound model that meets all of our needs as organisms, but the models themselves will be limited by our inherent limitations as biological entities. For example, scientists often remark that they find the beauty and symmetry of mathematics in nature, but that may be only because that is our favored way of understanding; there may be deeper, more elusive ways of understanding that are completely beyond our comprehension. For this reason, rather than worship at the altar of religion or science, I am content to say "I don't know" and accept the true mystery in which we all live.
Friday, February 12, 2016
Automated Art
A new economics book, The Rise and Fall of American Growth, by Robert Gordon, analyzes productivity growth in the U.S. Economists associate productivity growth closely with the long-term standard of living in a country, and Gordon finds that U.S. productivity growth peaked in the 1950's and is unlikely ever to reach that level again or even come close. Gordon believes that there will be almost no growth in household incomes through 2040. This flies in the face of rhetoric in the media and politics proclaiming that with the right political leaders and policies in place the U.S. could return to 1950's-like conditions in which nearly everyone lives well. I don't think I'll read the book, because its thesis seems rather obvious to me, but I thought I'd mention it because it relates to my thoughts about technology's effects on how people earn their livings. There doesn't seem to be much research underway on what people are likely to be doing for work in fifty years, and it seems that the work environment, if it still exists then, will be radically different due to advances in technology.
The technology-related changes during my career in printing were enormous. As of the late 1970's, most print preparation involved pasting set type onto a board in the correct position and then photographing it. Black and white photographs were manually converted into halftones by placing a screen over the negative and photographing it again. Color photographs were once manually separated into four colors with filters on a large camera, and later a color transparency was electronically separated using a scanner, which was quite expensive in the 1980's. Once all of the image components were in negative form, a stripper would place them in the correct positions and manually align all four color negatives. These "flats" were then placed in step-and-repeat machines to make lithographic printing plates. Stripping and platemaking departments used to employ a relatively large number of people, but now, with the electronic processing of images, stripping is obsolete, and a sizeable plant can operate with just one platemaker per shift. Similarly, improved electronic controls on printing presses have made it possible to operate them with smaller crews, and newer presses run much faster than earlier presses, reducing the amount of labor per sheet. On top of this, Internet advertising has been cutting into print sales. The net result of new technology is the disappearance of thousands of jobs in a once thriving industry.
I bring this up because the same phenomenon has occurred in all manufacturing industries and has been spreading to white-collar jobs for some time. With advances in AI it is easy to imagine a state in which one doctor, lawyer or engineer will do the work that twenty once did. Some manual labor and customer service jobs may survive for a few more years, because low wages obviate the need for expensive technology, but if the cost of technology falls low enough, those jobs could go too. I like thought experiments and speculate that some of the last vocations to fall may be in the arts. This will come about because technology will be able to mimic human behavior and skills even if we never arrive at a singularity. Though I think something resembling a singularity will probably occur, that would not be necessary for reconfiguring art as we know it; it could happen before then if machines began to pass the Turing test with the art they produced. All this means is that people would be unable to distinguish a painting, sculpture, novel, poem, film, musical composition, etc., created by a machine from one created by a human.
You can see the beginnings of this transition in the algorithms currently used to determine what films a Netflix subscriber would enjoy. Although Netflix's recommendations for me are almost always wrong, the service seems to have identified some of the characteristics that I like in a film. For instance, it knows that I like suspenseful psychological movies, foreign movies, thrillers, understated movies and cerebral movies. That isn't much, but it's a start. Why couldn't a sophisticated computer analyze a large body of art, identify the characteristics that appeal to people and then formulate new works based on human preferences? Several kinds of fiction-writing software already exist, and there is no reason to assume that a completely automated novel that satisfies most readers couldn't be produced in the future. With the right software, a computer could sift through a large database to identify themes and subplots that appeal to people and generate writing styles agreeable to specific groups. The result would not necessarily be robotic, because the program could be calibrated to insert random events that simulate events in novels written by humans. As a matter of fact, it is soothing, repetitive qualities that draw many readers to fiction, and repetitive qualities are precisely the ones easiest to detect through analysis.
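The analyze-then-generate loop I have in mind can be sketched in a few lines of Python. This is only a toy illustration: the traits and star ratings below are hypothetical, and a real system would operate on an enormous database rather than five entries. Still, it shows the two steps in miniature, inferring which traits a reader rewards from past ratings, then assembling a new "work" out of the best-liked traits.

```python
# A reader's past ratings: each work is a set of traits plus a 1-5 star score.
# (All data here is invented for the sake of the example.)
ratings = [
    ({"suspenseful", "cerebral"}, 5),
    ({"suspenseful", "foreign"}, 4),
    ({"sentimental", "action"}, 1),
    ({"cerebral", "understated"}, 5),
    ({"action", "sentimental"}, 2),
]

def learn_preferences(ratings):
    """Step 1, 'analysis': average the stars given to works carrying each trait."""
    totals, counts = {}, {}
    for traits, stars in ratings:
        for t in traits:
            totals[t] = totals.get(t, 0) + stars
            counts[t] = counts.get(t, 0) + 1
    return {t: totals[t] / counts[t] for t in totals}

def generate(prefs, k=2):
    """Step 2, 'generation': compose a new work from the k best-liked traits."""
    best = sorted(prefs, key=prefs.get, reverse=True)[:k]
    return "A {} tale".format(" and ".join(sorted(best)))

prefs = learn_preferences(ratings)
print(generate(prefs))  # "A cerebral and understated tale"
```

The generation step here is trivially crude, but substituting a genuine text generator for the last line, while keeping the same learned-preference scoring, is exactly the architecture the fiction-writing software described above would need.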
One art that would be particularly amenable to automated creation would be painting. Ever since Warhol, the critical standards for what constitutes a good painting have fallen frighteningly low. There is little reason to suppose that an extraordinary number of mediocre paintings couldn't be deemed brilliant and fetch large sums if promoted in the right way by the right people. Capitalism has infiltrated the art world to such a degree that aesthetic values are often predefined by the financial interests of members of the relevant art community and never subjected to the judgment of independent experts who have no conflicting interests. For example, it might be possible to create what looks like a Warhol painting and sell it for a million dollars if it were marketed properly by insiders of the art world. We usually think of artistic value as a private experience, often in terms of the beauty of an object, but art's commodification in recent decades seems to have redefined it so as to place an emphasis on market value as the dominant factor.
Perhaps the fundamental question here is whether creativity, vision and skill can be automated. Certainly skill can be, as machines already perform with greater physical precision than humans in some areas. With the current state of technology, creativity and vision can be simulated, perhaps convincingly enough to fool some people as to their authenticity, though not so well as to escape the notice of more sophisticated observers. However, if you believe, as I do, that humans are not the ne plus ultra of the universe, an allowance must be made for the possibility that our best art could one day be matched or surpassed by superintelligence.
The technology-related changes during my career in printing were enormous. As of the late 1970's, most print preparation involved pasting set type onto a board in the correct position and then photographing it. Black and white photographs were manually converted into halftones by placing a screen over the negative and photographing it again. Color photographs were once manually separated into four colors with filters on a large camera, and later a color transparency was electronically separated using a scanner, which was quite expensive in the 1980's. Once all of the image components were in negative form, a stripper would place them in the correct positions and manually align all four color negatives. These "flats" were then placed in step-and-repeat machines to make lithographic printing plates. Stripping and platemaking departments used to employ a relatively large number of people, but now, with the electronic processing of images, stripping is obsolete, and a sizeable plant can operate with just one platemaker per shift. Similarly, improved electronic controls on printing presses have made it possible to operate them with smaller crews, and newer presses run much faster than earlier presses, reducing the amount of labor per sheet. On top of this, Internet advertising has been cutting into print sales. The net result of new technology is the disappearance of thousands of jobs in a once thriving industry.
I bring this up because the same phenomenon has occurred in all manufacturing industries and has been spreading to white-collar jobs for some time. With advances in AI it is easy to imagine a state in which one doctor, lawyer or engineer will do the work that twenty once did. Some manual labor and customer service jobs may survive for a few more years, because low wages obviate the need for expensive technology, but if the cost of technology falls low enough, those jobs could go too. I like thought experiments and speculate that some of the last vocations to fall may be in the arts. This will come about because technology will be able to mimic human behavior and skills even if we never arrive at a singularity. Though I think something resembling a singularity will probably occur, that would not be necessary for reconfiguring art as we know it; it could happen before then if machines began to pass the Turing test with the art they produced. All this means is that people would be unable to distinguish a painting, sculpture, novel, poem, film, musical composition, etc., created by a machine from one created by a human.
You can see the early beginnings of this transition in the algorithms currently used to determine what films a Netflix subscriber would enjoy. Despite the fact that Netflix's recommendations for me are almost always wrong, they seem to have identified some of the characteristics that I like in a film. For instance, they know that I like suspenseful psychological movies, foreign movies, thrillers, understated movies and cerebral movies. That isn't much, but it's a start. Why couldn't a sophisticated computer analyze a large body of art, identify the characteristics that appeal to people and then formulate new works based on human preferences? Several kinds of fiction-writing software already exist, and there is no reason to assume that a completely automated novel that satisfies most readers couldn't be produced in the future. With the right software, a computer could sift through a large database to identify themes and subplots that appeal to people and generate writing styles that would be agreeable to specific groups. The result would not necessarily be robotic, because the program could be calibrated to insert random events that simulate events in novels written by humans. As a matter of fact, it is soothing, repetitive qualities that draw many readers to fiction, and repetitive qualities are the ones that would be easiest to detect through analysis.
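The kind of preference-matching described above can be sketched very simply. Here is a toy content-based scorer in Python; the film tags and preference weights are invented for illustration and have nothing to do with Netflix's actual system, which is far more elaborate.

```python
# Toy content-based recommendation: score each film by summing the
# user's preference weights for the tags that film carries, then
# rank the catalog by that score. All data below is made up.

def score(film_tags, preferences):
    """Sum the user's preference weights for each tag a film has."""
    return sum(preferences.get(tag, 0.0) for tag in film_tags)

def recommend(catalog, preferences, top_n=3):
    """Rank films by how well their tags match the user's tastes."""
    ranked = sorted(catalog.items(),
                    key=lambda item: score(item[1], preferences),
                    reverse=True)
    return [title for title, _ in ranked[:top_n]]

# A user who likes psychological, foreign, understated, cerebral films.
preferences = {"psychological": 1.0, "foreign": 0.8,
               "thriller": 0.6, "understated": 0.9, "cerebral": 1.0}

catalog = {
    "Film A": {"psychological", "thriller"},
    "Film B": {"comedy", "action"},
    "Film C": {"foreign", "understated", "cerebral"},
}

print(recommend(catalog, preferences, top_n=2))  # → ['Film C', 'Film A']
```

Replace the hand-entered tags with characteristics mined automatically from a large body of work and the same ranking step could, in principle, steer a generator toward themes and styles that a given audience prefers, which is all the paragraph above requires.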
One art that would be particularly amenable to automated creation would be painting. Ever since Warhol, the critical standards for what constitutes a good painting have fallen frighteningly low. There is little reason to suppose that an extraordinary number of mediocre paintings couldn't be deemed brilliant and fetch large sums if promoted in the right way by the right people. Capitalism has infiltrated the art world to such a degree that aesthetic values are often predefined by the financial interests of members of the relevant art community and never subjected to the judgment of independent experts who have no conflicting interests. For example, it might be possible to create what looks like a Warhol painting and sell it for a million dollars if it were marketed properly by insiders of the art world. We usually think of artistic value as a private experience, often in terms of the beauty of an object, but art's commodification in recent decades seems to have redefined it so as to place an emphasis on market value as the dominant factor.
Perhaps the fundamental question here is whether creativity, vision and skill can be automated. Certainly skill can be, as machines already perform with greater physical precision than humans in some areas. With the current state of technology, creativity and vision can be simulated, perhaps convincingly enough to fool some people as to their authenticity, but without escaping the notice of more sophisticated observers. However, if you believe, as I do, that humans are not the ne plus ultra of the universe, an allowance must be made for the possibility that our best art could one day be matched or surpassed by superintelligence.
Tuesday, February 9, 2016
Diary
This winter in Vermont has been dull from a weather standpoint. We have had very little snow and the temperature hasn't even been down to zero yet. I didn't turn on the heat until December 29. Without snow, the landscape here looks dead and ugly. Fortunately the forecast calls for more snow and lower temperatures. I prefer a certain amount of adversity and have not been getting much of it recently.
We just began to watch the popular Netflix series Making a Murderer. It concerns the legal battles of Steven Avery, whose family owns an auto salvage business in Manitowoc County, Wisconsin. Avery was wrongfully convicted of sexual assault in 1985 and spent 18 years in prison before being exonerated in 2003. Two years after his release, Avery was charged with the murder of a female photographer, and in 2007 he was found guilty and sentenced to life without parole. As far as we've watched, the documentary focuses on Avery and his family and the improper investigations conducted by the Manitowoc police. Avery's nephew, Brendan Dassey, was also implicated in the murder and later found guilty of being party to a homicide.
As presented in the documentary, Avery was clearly innocent of the 1985 charge, but there are many unanswered questions regarding the 2007 conviction, which we haven't come to yet. Avery knew the dead woman and was scheduled to see her on the day that she died. Her vehicle and the remains of her body were found on his family's lot. However, as he is presented in the documentary, Avery does not seem to be a likely suspect. The case may have been irreversibly compromised by the Manitowoc police, who seemingly planted evidence at the scene and coerced a false confession from Brendan Dassey. The themes within the documentary include the vulnerability of stupid, poor people like Steven Avery and Brendan Dassey, the bias, incompetence and lack of accountability of the police and judiciary, and the sheer mystery of what happened in the case of the dead woman. Figuring out that mystery now seems to have become a major national pastime. I've barely looked into it myself, but there is the possibility, for example, that the death was a murder committed by a known serial killer who happened to live nearby at the time but has since died. No doubt this case will be in the news for a long time, and Steven Avery and his nephew may well be acquitted with the help of massive publicity and free legal support.
Most people seem to react to stories like this in the context of inequality and the lack of justice in the U.S. There is a tendency to think in terms of the oppressed and the oppressors without examining the premises of democracy, which is what usually occurs to me. The main thing that stands out to me in this documentary is the palpable level of stupidity. Steven Avery and Brendan Dassey, two incredibly stupid people, are victimized by people who are more intelligent and better educated, but still rather stupid themselves when you consider that the facts are going to catch up with them sooner or later and that at a minimum their reputations will be destroyed. Of course, the police and prosecutors are going to get better treatment than Avery, but that isn't really the point. The point is that self-governance is and has always been a pipe dream: if you look closely enough at any democratic system you will always find errors, incompetence, unfairness, self-interest, bias, etc. In this respect the documentary has nothing new to say. It is simply a fact of human nature that all existing democratic systems are bound to coexist with miscarriages of justice.
At the moment I'm not reading much. Next up on my list is the novel Satantango, by László Krasznahorkai, the Hungarian writer, followed by Half-Earth: Our Planet's Fight for Life, by E.O. Wilson, which will be published next month. I haven't read any literature from Central or Eastern Europe for quite some time now and am hoping that Krasznahorkai will make a nice change if he's as good as people say.
Friday, February 5, 2016
Diary
I think I've read most of what Hughes has to say on topics that I care about, but will continue reading and let you know if anything else strikes me. For the fishermen among you, Hughes was an avid angler and has a good chapter on that, "A Jerk on One End."
Somewhat reluctantly, I have been following the presidential campaigns here. Bernie Sanders is making his mark, and the Democratic debates have probably been of the highest quality in my lifetime. He has successfully backed Hillary Clinton into a corner, forcing her to sound like the liberal that she never was. In contrast, the Republican debates have been a humorous spectacle. If I had to guess, I'd say that Rubio will get the nomination. Trump's campaign is beginning to implode after he made up a story about Ted Cruz rigging the vote in Iowa. Sooner or later his bloated ego, lies and lack of political experience are going to catch up with him. In my opinion, none of the Republican candidates stand a chance. Whoever gets the Republican nomination will be obliterated in debates with the Democratic nominee simply because they are all either wacky, ignorant or both.
As a Vermonter, I am following Bernie's campaign with some interest. He has injected a much-needed populist fervor into the debate, and he has done well so far, but I don't think economic conditions are severe enough to encourage a sufficient number of people to vote for him. This is fundamentally a conservative country, and it took the Great Depression to make FDR a popular president. Hillary is a professional politician par excellence who, besides being able to parrot the most popular prevailing views at any time, has built up a formidable résumé over the years. Bernie will continue to do well among young, well-educated voters, but that won't be enough to get him the nomination. In any case, he has already made his point by forcing Hillary to adopt anti-Wall-Street language.
As for the Clintons, I'll never like them. I'll always think of Bill as white trash with a high IQ and Hillary as an ambitious know-it-all teacher's pet. They make it embarrassing to be a baby boomer. I remember becoming disillusioned with Bill Clinton when he displayed his pro-business agenda shortly after taking office in 1993. While there may have been some secondary benefits to society that came from their work, the Clintons have been the chief beneficiaries of their political careers. Why, for example, would they move to New York? It certainly wasn't for its cultural benefits or because of family ties. Clinton had friends on Wall Street, Robert Rubin, for instance, who helped fund Hillary's subsequent political career and the Clinton Foundation. It is no surprise that Senator Clinton became markedly pro-Israel and supported the Iraq War. She has learned from Bill how to go with the flow and extricate herself from accusations when tides change, though Bernie Sanders has been calling her out with some success.
One of the reasons why it is hard for me to care about politics is that politicians routinely take credit for things that have nothing to do with them. No matter what they say, presidents typically have very little effect on the economy, particularly while they are in office. As someone who has lived through several economic cycles, it grates on me to hear presidents describe their legacies. A large inflation cycle began in late 1972, and presidents Nixon, Ford and Carter were unable to stop it during their terms in office. However, Carter appointed Paul Volcker Chairman of the Federal Reserve Board in 1979, and that made all the difference. Volcker, who remained Chairman until 1987, raised interest rates radically, killing inflation and gradually setting off an economic boom. A change for the worse occurred when Ronald Reagan replaced Volcker with Alan Greenspan, who held the position from 1987 until 2006, when Ben Bernanke took over. That didn't stop Reagan and Clinton from claiming credit for reviving the economy, though they had little to do with it. The fact is that Ronald Reagan and Bill Clinton, with the help of Alan Greenspan, set the stage for the Wall Street excesses of the 1980's, 1990's and early 2000's that culminated in the Great Recession. Within the media you are still more likely to hear about the economic foresight of Reagan and Clinton than the final economic collapse that they helped precipitate. History will judge them as economic lightweights and political opportunists.
In my opinion, the job of president of the United States has long been too difficult to be performed by mortals. I don't think "Hope and Change" Obama is going to fare much better than his predecessors in the history books, and while I agree with Bernie Sanders in his promotion of social democracy similar to that of the wealthy countries of Northern Europe, I don't think he is prepared for or has even thought about how a transition to a post-capitalist society ought to be managed.
Tuesday, February 2, 2016
Art School
I've been jumping around in the Hughes book and have found the reading good. Some sections were written in about 1980 and some were written as late as 2011, making the perspective a little confusing at times. So far I've read about Andy Warhol, Jackson Pollock, Edward Hopper and low ethical standards among the artists, dealers, critics and museum curators of New York City. I'm not very familiar with recent painting, but agree with Hughes that Pollock and Hopper were the most original American painters of the twentieth century. It took a long time for them to grow on me, but I now actually like Hopper and generally consider Pollock better than his abstract successors. These two now seem to be part of the American art "canon." Sometimes Hughes sounds as if he has a score to settle, though I can't blame him considering the environment in which he worked. He genuinely loved art and had a strong background in European art history, and it is easy to see how he could have become frazzled working in Manhattan; "philistine" is one of his favorite words. He came to the U.S. from Australia via Italy and England, giving him the kind of outsider perspective so essential to good criticism yet so often lacking in the arts here.
Art is a murky area for discussion, and complete objectivity regarding the value of an art object is probably impossible to establish. Art, literature and "genius" do not exist beyond the parameters of locally defined cultural references, making their appraisal contentious, especially over short periods of time. For example, the paintings of Vermeer, who is now recognized as one of the greatest painters ever, were almost unknown until the late nineteenth century. The Lacemaker was sold for seven pounds in 1813, and it was sold again to the Louvre in 1870 for a mere fifty-one pounds. To his credit, Hughes focuses on aesthetic merit as best he can, which automatically puts him at odds with the careerists within the field.
The passage that I've found most interesting up to now concerns the undesirable consequences of the teaching methods employed by art schools and universities. Happily, this dovetails with what I've been writing about the ill effects of creative writing programs:
For nearly a quarter of a century, late-modernist art teaching (especially in America) has increasingly succumbed to the fiction that the values of the so-called academy – meaning, in essence, the transmission of disciplined skills based on drawing from the live model and the natural motif – were hostile to "creativity."
This fiction enabled Americans to ignore the inconvenient fact that virtually all artists who created and extended the modernist enterprise between 1890 and 1950, Beckmann no less than Picasso, Miro, and de Kooning as well as Degas or Matisse, were formed by the atelier system and could no more have done without the particular skills it inculcated than an aircraft can fly without an airstrip. The philosophical beauty of Mondrian's squares and grids begins with the empirical beauty of his apple trees. Whereas thanks to America's tedious obsession with the therapeutic, its art schools in the 1960s and 1970s tended to become crèches, whose aim was less to transmit the difficult skills of painting and sculpture than to produce "fulfilled" personalities. At this no one could fail. Besides, it was easier on the teachers if they left their students to do their own thing, and not teach – especially since so many of them could not draw, either. A few schools, such as the Pennsylvania Academy of Fine Arts, held out against this and tried to give their students a solid grounding. But they were few.
Other factors contributed to the decay of the fine-arts tradition in American schools in the sixties and seventies. One was the increased attachment of art teaching to universities, which meant that theory tended to be raised above practice and making. Thinking deep thoughts about histories and strategies was more noble than handwork, and it produced an exaggerated drift toward the conceptual. This interlocked in a peculiarly damaging way with reliance on reproduction of works of art instead of direct contact with the originals....
In the slide or reproduction, no work of art appears in its true size or with its vital qualities of texture, color, and the recorded movement of the shaping hand intact. A Klee, a Pollock, or a lunette of the Sistine Chapel – all undergo the same abstraction, the same loss of presence. Impartially, they lose one of the essential factors of aesthetic experience, the size of the artwork relative to our sense of our own bodies: its scale....
A slide gives you the subject, the nominal image of a work, without conveying a true idea of its pictorial essence. You cannot think and feel your way back into the way something was made by looking at a slide: only by studying the real thing. And no tradition of making can be transmitted without such empathy. Did this foster the dull blatancy of so much recent American painting, all impact and no resonance? Have the falsifications of the reproduced image fed back into the new originals, cutting out those very qualities which, by their nature, cannot survive reproduction – subtleties of drawing, touch, and brushwork, of color and tone, that slow up the eye and encourage, beyond the quick look, a slow absorption?
If, as Hughes argues, American teaching methods in the arts had a negative impact on the quality of contemporary painting and sculpture circa 1990, mutatis mutandis, a similar pattern may be at work in creative writing programs with respect to the quality of contemporary American literature. Recent history points to a reduction in aesthetic richness and depth when training in the arts becomes captive to academic pedagogues.