Saturday, April 29, 2006

If the Mouse Were King . . .

A "brief encounter" with an ex-pat American living comfortably in Sweden has led the Mouse to wonder if the ideals envisioned by the founders of the American nation were not pipe dreams. Perhaps the notion was merely cockeyed that people could wisely govern themselves. We are, after all, a people "endowed by our creator" with certain "inalienable" proclivities, among them the desire to believe whatever strikes our fancy, the desire to deny any truth we find uncomfortable, the desire to love and to hate with equanimity, the desire to -- shall we say it -- to destroy any disagreeable parts of the "alien" world. We are, in the aggregate, a fairly stupid bunch.

But then, so were they, those whose tyrannies the immigrants to this fair land sought to escape. And so are they, the foreigners who threaten us with their own versions of stupidity. We know enough, and knew enough, to resist oppression and to defend ourselves against the threats and actions of madmen. But it is, as the Buddha knew, one thing to resist force and quite another to accommodate it and react to it with the strength of calmness. It is, as Socrates knew, one thing to avoid the wrong and quite another to seek the right. We know evil when we see it but are denied knowledge of the just.

That much The Founders knew. They did not prescribe rules for human behavior. They created a governmental framework which they believed to be the best they could conceive by which people might be governed. They trusted that, left to their own designs, the people would elect representatives fully capable of constructing laws optimally beneficial to themselves and their posterity.

But even in that beneficent vision, The Founders did not imagine that the people themselves were capable of direct self government. They believed that, as a minimum, the people were able to elect the best among themselves to represent them in governing, and that, even if the people occasionally elected poor governors, on balance a government of and by and for the people would be better than any other. The people would have no one to blame but themselves if those they empowered to act on their behalf governed unwisely. The Founders believed that people could at least be trusted to know the difference between the noble and the base, even if they did not know the means by which they knew that difference.

The Founders may have been vaguely aware of the psychological motivations of human behavior, but they could not possibly have understood the extent to which those drives could be manipulated by clever men. They could not have understood that some men would sell their votes for unconscious reasons. They could not have appreciated that some men, cleverer by half than the masses, would find ways to play with the minds of men as puppeteers play with their marionettes. The Founders assumed men could see the difference between their prejudices and their reasoned ideas. To that extent, The Founders themselves were ignorant.

Those cleverer men did and do, however, understand the malleability of human minds. They understand, as Plato did, that men could be more easily led by rhetorical persuasions than by reason, that even lies play well if they feel good and the truth does not. Esoteric knowledge of the electorate was at first used by political parties to sway elections, but in time, as the techniques of subtle persuasion became more sophisticated, the candidates themselves became symbols more valued for the feelings they created by their mere appearance than for their ability to actually govern. The need thus arose, in the minds of the cleverer men, for meta-puppeteers to handle the strings of the electable puppeteers.

The father of this innovative view was a university professor by the name of Leo Strauss. Himself a gentle man, Strauss brought his students around to his views by the same sorts of techniques he was teaching them to employ in their future roles as governors of the governors. He spoke in great detail of Machiavelli's devious techniques, sounding on the one hand like a critic and on the other like a disciple. That every message delivered could have both an exoteric and an esoteric meaning was "caught onto" by those who were to become Straussians. The others were to be left behind, no more or less than mere voters, the grist to be ground in the mills of cleverer minds.

There's much on the internet that can be read about Leo Strauss and his "graduates." I will briefly mention one, and then leave it alone.

Francis Fukuyama, in 1992, wrote a best-selling book called The End of History and the Last Man. Hailed as the final statement of Hegelian history, the book immediately "caught on" with a certain class of political warrior. It could be read as a justification for the export of the American dream, an idea that fit neatly into the needs of a struggling American manufactory for a politically expedient way to peddle its goods to the electorate. A mere eight years later, the idea that the world, and not just America, was ready for a "Bill of Rights" found its niche in the American mind. Unfortunately -- and Fukuyama was one of the first to see this -- the gentle persuasions of the Straussian dream were soon to be replaced by the forceful hardware of the military-industrial juggernaut, the idea being that democracy (or "freedom") is such a good idea any means needed for its export are justifiable.

And here we are.

Fukuyama quickly denounced the uses to which his theme had been put. His apologies have achieved nowhere near the notoriety of his foundational book. He is now a footnote.

Friday, April 28, 2006

A Sort of Mendacious Conversation

I was talking to God the other day, just having a quiet little conversation. I talk with God a lot, mostly about my problems. He doesn't mind, so long as I try to keep it interesting. Sometimes he even calls me up just so he'll have someone to talk with. We get to know one another better that way, talking the way we do.

You take the other day: I was feeling pretty much down. My dog had died and I had a toothache. When I went out in the field to dig a grave for Jeff - the tooth pounding - I looked up and hollered at God. "Hey!" I shouted. "This is no way to treat a friend!" He looked up from his workbench and winked. "Would you rather the dog be digging a grave for you?" At first, I thought that was the cruelest thing I'd ever heard. It seemed God had no care for my beloved Jeff. But then I got to thinking about it. Jeff and I both have got to die sometime, and I could just see old Jeff there, maybe with a toothache himself, scratching out a hole for me in the ground. He would cry his eyes out. I know he would. That little guy's heart would be broken because I know, you see, Jeff loved me more than I loved him. You could tell by the way he acted. I remember the time he accidentally bit my thumb. Oh, what a look of shame came upon that beautiful little face! "What have I done?" he was saying. "What in the name of heaven have I done, biting the hand of the person I love most in the world?" And I knew, you see, just thinking about how Jeff felt, I knew what God meant. He meant Jeff dying instead of me, in just that order, was the right order. Any other way and the amount of pain in the world would have been greater.

I understood all that just from that one little exchange with God. That's what I meant when I said talking to God helps clear things up.

This thing with Jeff dying brings up something I hadn't thought of until now. You hear people talk a lot about providence, particularly divine providence. Mostly, the talk is critical of the whole notion. People these days don't much believe in it. But when you sniff around that talk I had with God about Jeff's death, you can't help but catch a whiff of divine providence. Somebody minimizing the amount of pain in the world sounds like something that wouldn't happen the way it did without some higher power figuring it all out in advance, weighing one order of our dying against the other, and deciding to do it the way it happened. But that kind of thinking is what has led so many people to doubt the hand of providence. It's hard for people brought up on flesh-and-blood logical reasoning to put any real faith in something as far-fetched as a divine power in the sky arranging matters on a little fly speck of a planet whirling around one of the billions of stars in a remote corner of one of the billions of galaxies. Belief in that sort of thing has pretty much gone south.

Maybe we've been looking at it wrong. Maybe we've been thinking too literally about the notion of providence. Take a look at what God actually said to me (after I'd chewed him out about killing my dog). God didn't say anything about balancing out the pain in the world. He only asked me if I'd be more satisfied having Jeff bury me instead of the other way around. It was me who came up with the part about minimizing pain. Maybe the way God has set it up includes some sort of a built-in providence, something no one would notice at a glance but which begins to make sense if you look long enough. The idea that Jeff's dying before me had somehow been for the best didn't occur to some airy-fairy, vaporous ghost. It occurred to me; it happened in my mind - and my mind, however it came to be what it is, is a part of the whole thing most people give God credit for creating.

Of course, the bit about "creating" is probably just as hard for modern people to accept as the notion of divine providence, so I don't want to get caught up in that one. I'm just talking here about what happened when I got angry with God about my dog dying, and about the thought that occurred to me after God put his spin on the situation. And as you can see (from the trouble I'm having putting it into words) I'm not altogether clear about everything that's wrapped up in that thought. But here's my first real attempt to make sense of it. It's a kind of analogy.

Imagine a master sculptor who's trying to instruct a bunch of wannabes who never sculpted anything in their lives. The master - because he is a genuine master - decides to forget about teaching the rules of sculpting. He has figured it out for himself that his notions about what makes a piece of sculpture good are limited by his ideas and his experience as a sculptor. So instead of telling his apprentices what good sculpture ought to look like, he just throws each of them a hunk of clay or stone or wood or whatever and then just walks off and leaves the novices to figure the rest out for themselves. After that, whatever the students do with the materials is going to be exactly what the master intended, because what he intended for them to do was whatever they learned to do. In other words, whatever happened could be said to be "divine providence."

Now I realize this doesn't jibe with what a lot of people think of as divine providence, but as I said, a lot of people are having a terrible time coming to grips with the orthodox notion. In offering an alternative, I'm doing nothing but trying to hold onto some form of belief in providence so I'll have a way to explain an even bigger mystery (which maybe I'll get to later, soon as my cat dies). Those who still think of providence in the conventional way would have the master sculptor taking a look at every piece of claptrap the students make, putting a stamp of approval on some and discarding others. They think for divine providence to have any real meaning, God has to be involved in the details of the world, like making day-to-day judgments about what's good sculpture and what's not, what's right and what's wrong. But that idea of providence has always run into severe difficulty because the same people who hold to that belief also believe that God is permanently and completely benevolent, that is, that he's a good guy. But you don't have to look too hard to see that some of the things God would have "discarded" don't resemble the actions of a good guy. My dog, for example, was only 14 months old and never did any real harm in the world, to himself or anyone else. (My thumb stopped hurting in less than a minute.)

It's not that clever thinkers can't come up with explanations for the apparent contradictions between absolute divine providence and absolute benevolence. It's just that some of the clever twists they've had to put on things in order to resolve the problems have made keeping the faith awfully difficult.

By attributing real divine power to God, by saying right up front that it was his idea all along to let his apprentices work out their own solutions to the problems they face in trying to sculpt a livable world, we credit God with a lot more sense than he would have shown by creating a world that required His full attention all the time.

Of course, doing it this way, letting the pupils learn for themselves, you're bound to get a lot of bad art. But also, when you finally get something worth keeping, there won't be any doubt about the goodness of it. You won't have to wonder whether God approves, or even if you're asking the right God. He'll approve because all he ever wanted was for the sculptor who made the thing to be satisfied. And it won't really matter if you've got the right God tuned in. The only thing he's going to say anyhow is, "Attaboy! Stay with it!" just like any real Master worthy of the name would do.

That works for me. I may not be doing too good at making a world that works, but at least I know not to blame God. Two bits says that if God were judge and jury of every human act, the world would be littered even more than it is with the remains of rejected art. Then it would matter that we identify the right God. Some Gods may have different ideas about what makes a world good.

Of course, human beings might disagree about that, too. But maybe we can eventually get rid of the notion that the rules for sculpture are divinely inspired. When we do, we'll find it easier to change our minds. Maybe the only rule is that we do our best at sculpting.

Just as I completed that thought, I laid little Jeff in his grave. He looked up at me for the last time. I know he didn't actually smile, but I thought he did.

Thursday, April 27, 2006

The "Rational" Mouse

In this month's edition of Mother Jones magazine, John Kenneth Galbraith -- an alleged economist -- describes the underpinnings of liberal capitalism in several dozen well-chosen (but unoriginal) words: "...[I]f people are rational, if markets can be 'contested,' if memory is good and information adequate, then firms will adhere on their own to norms of honorable conduct." He adds that, "Any public [by which he means 'government'] presence in the economy undermines this. Even insurance . . . is perverse, for it encourages irresponsible risk-taking." I kept reading, expecting Galbraith to make something of the not-so-well-hidden paradox embedded in his description. He didn't. Instead he peppered us with the same old hackneyed shibboleths the uninspired left has been using for at least a half century, couched in the shapes of modern villains -- New Orleans, Ken Lay, Halliburton -- the same old them-against-us world.

Ever since Hegel and Marx saddled the minds of humanity with notions of eternal class warfare, the "masters" of money and the "slaves" of penury have been at each other's throats demanding "liberty" on the one hand and "justice" on the other. Neither seems to have appreciated the fact that in the face of nature, masters and slaves are all alike. We're alone here together, struggling to make it as a species.

In lieu of the grey-bearded complaints he made in the article, Galbraith might have sought to put a face on that word he used, information. I'm sure he meant what all alleged economists always mean, market data -- prices, costs, demand trends -- the sorts of info day-traders and CEOs sweat BBs over every morning, noon, and night. Certainly an old and experienced warrior like Mr. Galbraith should have seen that market-smoothing strategies like insurance (whether private or public) are expressions of that very rationality the laissez faire system assumes. Wise entrepreneurs seek to insure themselves against uncontrollable events, just as a wise middle class seeks to do what it can through government action to minimize its risks. And surely he should have admitted -- at least to himself -- that the world is ineluctably rational, that nothing ever happens that cannot be explained by a rational appeal to causes and effects.

The problem is not that men and markets are not rational. They cannot be otherwise. It is that men and markets do not behave in the best interests of the species. They behave as the nature of men and markets demand.

Perhaps this challenges one notion of what the word "rational" means. It is sometimes taken to mean "sane," and "sane" is sometimes taken to mean normal. Thus it is sane and normal for men and markets to seek to perpetuate themselves in being. Men, when referring to themselves, interpret this as surviving and improving. When applied to markets, "sane" means self-correcting, seeking balance, and the implication persists that sane markets and human survival operate in parallel -- what's good for the market is good for humanity.

But of course some markets are good only for sellers and are horrible for buyers, though the buyers may not know it. Tobacco, heroin, daytime television, and SUVs come to mind. The information relating to the usefulness of the products bought and sold in these destructive markets may or may not be available to the buyers, and the sellers may see only that the commodity they are trading for (money) has more usefulness to them than the commodity they're selling. Few would disagree that the sellers are entirely rational (though they may not be legal) -- they seek and satisfy their self interest. But even fewer would agree that the buyers are also acting rationally. There is, for their decision to buy, a clear and comprehensible reason. Their minds have determined them to act, and their minds are, in all cases, rational. To doubt their sanity would be to doubt the rationality of causes and effects.

But obviously, the buyers and sellers of stupid stuff are equally as stupid as the stuff of their trade. They just don't know they're stupid. To see the stupidity of stupid markets a person would have to be something other than purely rational. A person would have to be wise.

So, what does that mean? What's to be wise?

Well, to put it as bluntly as it ought to be put, if being wise means anything other than to transcend the emotions of the moment, and to act on the basis of reasoned motives, then the definition of "wise" must be reserved for those wiser than a Mouse. In Mouse talk, wisdom is nothing more or less than a description of the condition those people find themselves in who see a strong connection between the things they do and the things that happen to them. Wisdom does not consist in always doing the right thing. We cannot know or predict the future with anything more than a mere probability. But to act in a decisive way on significant matters, without having given consideration to the consequences of our actions on ourselves, our family, our nation, and our species, is to act unwisely.

Well, that conclusion may seem no less hackneyed than Galbraith's characterization of the world as a dog-eat-dog place. But there is a difference. To create the notion of a class struggle is to create a substratum of reason itself, a piece of information that may persist as an unexamined cause of action. We may passively react on the belief without being aware of its influence on our action. But the advice to the wise, that they should do their best to think things out before acting, constitutes a cause of a different sort. It is a challenge to the passive mind to stop being led by emotions and unexamined premises, to instead get actively involved in day-to-day behavior -- that is, to think.

It's one thing to not possess sufficient information, and quite another to be ignorant of the need for information. But that statement notwithstanding, no way exists by which the human mind can be assured that it is informationally fully equipped for action. There is no such thing as a foolproof system for human living. Even a wonderful idea like democracy can be turned into a nightmare by unwise elected officials and their advisors. We do the best we can with the goods at our disposal, and if we have made mistakes, we had best be sure that we confess, at least to ourselves, that we know wherein we have erred.

Back when I was a railroader, we had a rubber stamp made up that said, in a beautifully apologetic, and yet declarative sentence: "I beg to advise I was in error and will strive to do better in the future." That's one of the weaknesses of democracy: those politicians who admit their mistakes are likely to be voted out of office. But then, that's a weakness of the electorate: they don't know how the human mind works. They must think politicians are more perfect than themselves.

Which is, of course, pure horse shit.

Tuesday, April 25, 2006

The Mouse on "Meanings"

When we excuse an action we are admitting that the action was wrong. We justify an action when we believe it was the right thing to do. We have what may be called the Torquemada Solution by which we excuse (but do not justify) our killings. Temporarily suspending reason, we can imagine that Torquemada, the most effective of the Spanish Grand Inquisitors, acted out of the belief that his victims and their heresies were preventing the salvation of humankind. Once we understand “salvation” the way Torquemada understood it – as deliverance from eternal torment in a lake of fire – we easily see that he could have justified to himself far greater atrocities than those he actually committed. To Torquemada, not this life, but some other he imagined as more desirable, was the ultimate value. We can thus understand some of our disregard for human life as natural effects of the gravest sort of error.

We justify war on the basis of what we perceive as “injustice beyond the ordinary.” Sometimes we’re right. Given its result, we can justify the American Civil War, though the abolition of slavery was not that war’s original intent. And even though we were not fully aware of Nazi Germany’s atrocities when we decided to enter the Second Great War, if we had known of them we should have been compelled to take up arms. Others of our wars were not so justifiable as those, and if we had known the truth of the reasons behind them, we should not have involved ourselves in them.

Perhaps, though, we ask the impossible if we demand that all the people should be informed of all the facts before deciding significant issues. We are after all a republic in which elected representatives make decisions of the war-or-peace sort for us. Most of us have neither the time nor the intellect to devote to an assessment of the relevant issues. But even if time were afforded us, and even if all of us were reasonably educated concerning foreign affairs, our ability to determine the truth of the matter would still be impaired by the fact that we are often compelled by unconscious forces to believe the lie and doubt the truth. Difficulties far greater than those facing Pavlov’s dog face those whose “bells” are not mere bells but are plausible theories indelibly imprinted upon their minds. We love our freedom, and would be willing to die for it, but let the word “freedom” be attached to broad expanses of our neuronal territory, let it be emotionally interleaved with all our conceptions – not merely the most fundamental – and we will find ourselves dying for causes that have little or nothing to do with actual freedom, ours or anyone else’s.

When, for example, the word is repeated to us like a meditator’s mantra to justify the ambitions of a deluded politician, it is only by an almost superhuman effort that we ask whether an Iraqi would, to obtain his freedom, be willing to be killed by a foreigner who may be driven as much by a need for the approbation of his constituents (or contributors) as by a genuine care for the Iraqi, his wife, their sisters, brothers, and children – those the foreigner must slaughter in order to obtain the Iraqi’s freedom for him. And even if we were to ask ourselves that question, perhaps the word “freedom” will have been so positively charged by our own history, that we would answer for the Iraqi – who had no say in the matter – that he would surely welcome death if only his heirs could be assured a portion of that blessed freedom. It would perhaps never occur to us to wonder if the word “freedom” means to the Iraqi what it means to us.

To people unfamiliar with the Arab culture, that last sentence may seem only a rhetorical conjecture. It may seem that even if the Iraqi has a notion of freedom different from ours, his must certainly be false. Those so deluded will perhaps never have understood that all words – all but a logical few – have gotten their meanings out of human experience. T. E. Lawrence’s words provide a taste of the meaning of “freedom” as it might feel in the Arab mind, a feeling I can understand but do not share.

We had ridden far out over the rolling plains of North Syria to a ruin of the Roman period which the Arabs believed was made by a prince of the border as a desert-palace for his queen. The clay of its building was said to have been kneaded for greater richness, not with water, but with the precious essential oils of flowers. My guides, sniffing the air like dogs, led me from crumbling room to room saying ‘This is jessamine, this violet, this rose.’

But at last Dahoum drew me: ’Come and smell the very sweetest scent of all’ – and we went into the main lodging, to the gaping window sockets of its eastern face, and there drank with open mouths of the effortless, empty, eddyless wind of the desert….

‘This,’ they told me, ‘is the best….’
“Seven Pillars of Wisdom.”

I have seen a lot of desert country, mostly in Arizona, but have experienced nothing like the exhilaration Lawrence was describing. To me the desert is a hot place. In no way does it figure into my idea of freedom. But just as I might include aspects of “cowboy life” in the feeling that goes along with my conception of freedom, so must the desert involve itself in the experiential life of the Arab. The author from whose book I grabbed the Lawrence quote [Robert Lacey, The Kingdom] spoke with great understanding of the Bedouin’s nomadic life, particularly of the ghazzu, which he briefly defines as “the raid,” but which another Englishman defined as “a cross between Arthurian chivalry and County Cricket.” The idea was to steal another tribe’s camels, but in doing so, one had to obey the rules of the game: no molesting women and no raids between certain hours of the night. If your ghazzu failed and you were captured, the rules also required that you be fed well, and then turned loose, but your “team’s” camels and all but one firearm were confiscated. The “trudging back to camp after an unsuccessful raid was, apparently, a part of the game.”

Perhaps in the Bedouin mind freedom feels something like being out on a raid, knowing that even if you fail, the rules of the game will be followed, something like the feeling I get when I think of home.

Monday, April 24, 2006

How the Mouse Met Jesus (A True Story)

The boy worked the night trick in the L&N's Choctaw Yard, a lonely place on the Mobile waterfront where box cars awaited their ship. For this bright young man, keeping accurate records of the comings and goings of the "exports" left him virtually unemployed. He was, for the better part of every night, bored or asleep, alone on the waterfront with box cars for dreams.

But on one of those boring nights, just after the start of his trick, not in a dream, the boy found a small black dog, apparently sick to its death, lying near the switch stand between tracks four and five. The sight of the dog frightened the boy. All his life he had been told that a sick dog was dangerous. It might be rabid or driven by pain to viciousness. Certainly it was to be avoided, so all night when the boy's work brought him into the vicinity of the dying dog, he found reason to veer away. But this boy -- the "genius" -- felt a greater and deeper fear than his fear of the dog. Death was near, less than 30 yards away, slowly creeping into the body of the dying dog. The boy feared invisible strangeness more than he feared the palpable threat of a sick dog -- a feeling unaccountable in one so young.

Near four in the morning death completed its work. The dog ceased all movement. The froth on its mouth hardened into a dry caulk. Its blind eyes, like chalky marbles, reflected neither light nor life. The animal was dead, no longer a threat.

But just as the boy finished his nightly chores, just as the light of day reached the chocked-up caboose that served as an office, a man appeared at the door, propped one foot on the stoop, and called to the boy. "I have a sick dog here. Could you come out? We may be able to do something for him."

The boy stepped outside. The man stood straight and looked straight into the boy's eyes. He seemed okay, not just another hobo. He was dressed in neat khakis and a blue checkered Eisenhower jacket, and was wearing a peaked cap of the sort worn by baseball players. But the boy paid only brief attention to the man's clothes, for in his outstretched hands the man held the limp and lifeless body of the dead dog, its eyes frozen open, arid as sand and seeing nothing.

"I found him over by the rails," the man said. "He seems to have been there for some time. Did you see him earlier?"

The boy almost lied; he didn't want to admit he'd seen the dog and done nothing. Maybe the soft blue of the man's eyes created the guilt.

"Yes, I saw him," the boy answered, pleased with himself for telling the truth. "I thought he was dead. I was afraid to go near him."

"Well, I can understand that. It's better to play it safe around strange dogs with unknown illnesses."

"You don't seem to be afraid," the boy said.

"Well, you see," holding the dog higher, "he's really harmless."

The man seemed not to notice that his reply made no sense. He knelt and gently laid the dog's lifeless body across the cinders, carefully resting its head on a tuft of grass.

"I think the little fellow's going to be alright," he said.

"I don't know," the boy whispered, almost as though afraid to contradict the man's quiet confidence. "He hasn't moved all night. He looks pretty bad to me."

The man looked up into the boy's eyes, his hands still caressing the dog's fur. "Yes, I suppose you're right," he said. "He does look bad. But then you look okay to me and perhaps I look alright to you, but who knows what the next moment holds for either of us."

Again, the man's words confused the boy. They didn't seem a proper response at all. The boy understood the words in their normal sense, the way his mother might have intended them, but the man seemed to intend the words in a personal, private way, as if the future were not the thing of mere happenstance implied by the trite saying, but a concluded arrangement holding for the boy a determined and certain unfolding.

But the boy could not bring himself to grasp that special meaning. How could he? The man was kneeling beside a dead dog and treating it as though nothing were wrong with it. The boy did not want to say anything the man might find disrespectful, but he had to say something. The words came out slowly. "My future seems a little brighter than that dog's."

In a stronger voice, he asked the man where he had come from. "I haven't seen you around here before."

"I spent the night here in your rail yard. And the night before, and the night before that one."

The man's trousers were trimly creased. "You don't look like you spent the night in a box car." The boy knew every car in the yard was loaded and sealed. He had been writing the seal numbers in a report when the man arrived.

As though the question of his appearance were not worth considering, the man turned again to the dog. Looking first to the boy, then to the dog, then back again to the boy, he quietly uttered words the boy has not forgotten as the years have passed.

"You think this animal is dead because he seems dead. You think yourself alive because you move about and do things only a person alive is able to do. But if the dog, though dead, is alive, if he is in fact healthy and robust, could it be that you, though alive, are dead?"

The effect would have been the same if the man had slapped the boy's face. No one, much less a stranger, had ever spoken such serious words to him. It wasn't so much the vague familiarity of what he said, though it was that, too. It was that he spoke so confidently, without a trace of concern. He seemed not anxious at all that the boy might doubt him. The man seemed sure the boy in the rail yard would feel nothing but what he intended him to feel.

But the boy, feeling too much of everything to feel anything, lowered his eyes and leaned back against the caboose. The man tilted his head to one side and dropped his chin slightly as though to peer upward into the boy's face. Sensing an unspoken question, the boy shook his head, denying the essence of the moment.

The man smiled. He stroked the dog's fur again in what seemed a final gesture, then rose from his kneeling place beside the dog and put his hand on the boy's shoulder. "Why don't you go in and call someone for the little guy. Perhaps that's all there is to do for him, cart him off."

For a moment the boy stared into the kindest and, yet, the most accusing eyes he had ever seen. As he gazed, all his knowledge suddenly left him. Before, he had been as truly certain of nearly everything as the brilliant and nineteen can be, but in that waking moment, as the man's eyes convicted him of sins for which names had not been invented, the boy knew that, all along, he had been ignorant of everything including his own ignorance.

Afraid to ask the questions boiling inside of him, unable to make sense of their sounds, the boy lowered his eyes and numbly muttered: "I'll call Charley Watson, the humane officer. It's early, but he'll come."

The boy managed to take his eyes off the man and went inside to make the call. The number, along with fifteen or twenty others, was written on a card tacked to the wall. The boy needed only a moment to learn that Watson's phone was busy.

When I walked again into the sunlight, the man was gone, and the little black dog was at my feet, leaping joyfully, licking my hand, healthy and robust.

Sunday, April 23, 2006

A "Mendacious" Friend

I was reminded yesterday of my late friend, Robert Van Kluyve. Milady was home from an eye operation and needed a supply of absolutely sterile water. I’m great at boiling water. Later, trying to bring a smile to her face, I reminded her of my artistic skills in concocting the pot of “gourmet water” for her, showed her how you can’t get it right unless you bend your fingers around the stove knob just so, to give the process a touch of impressionist magic. She did smile and added that my description of my talent reminded her of Bob’s trick with the pitcher . . . and those few words ignited memories I am glad are still there to be recalled.

Bob was a retired university professor of the classics, a master potter, a master calligrapher, a musician, and a one-time navy pilot who, after leaving the navy, operated his own airline flying rich folks and their mail back and forth from the mainland to Block Island. But most of all, Bob was a story teller, and by “story” I mean what my mother meant when she continuously reminded me to “stop telling stories.” Bob could take an absolutely unbelievable situation and work it into a captivating yarn so perfectly and enthusiastically related you would have sworn it happened just as Bob said it did. Like when as a young man (he being an Eagle Scout) he put on his scuba gear and dove into the ketchup vat at the Heinz factory to retrieve a large monkey wrench that had been accidentally dropped in. You could feel your legs tensing up as he detailed the size and weight of the wrench, so heavy he could hardly lift it, one of those really, really big ones that he found almost impossible to bring up, but finally managed it: he returned topside for a “stout rope,” tied it around the wrench, and organized a team of strong men to hoist the wrench to the surface, him along with it. “I held on to that wrench so my girlfriend could see me as I emerged victorious from the ketchup.” He claimed he could taste ketchup for a month afterwards, the stuff having apparently seeped through his skin.

Maybe that adventure would explain where Bob got the idea to dramatize for his classes Oedipus’s pulling out of his eyeballs. Bob claimed he had palmed two fistfuls of those little plastic containers of ketchup they hand out at fast food joints. Placing his hands over his eyes, he screamed in pain and squeezed, “causing Oedipus’ blood to spew forth in great gushes of agony.” I heard him tell that tale at least five times to different audiences, and he always ended it by declaring the lesson “a failure unless at least one of the fair young sophomores fainted.”

He also flew a jet plane through the arch at St. Louis, under the Golden Gate bridge, and twice crash landed in farmland, “harvesting,” he said, “half the crop.” He had been everywhere, done almost everything, shaken hands with three different presidents, and, “more importantly,” with the CEO of General Electric. He flew actors and politicians to Block Island, sometimes in “typhoons,” and once managed to turn Jimmy Cagney into a sniveling wimp when he “nose-dived” the two-engine prop job to within 200 feet of the earth before “pulling her out.” “I volunteered to pay Cagney’s laundry bill, but he was still in shock. So I got away scot-free.”

But as I say, Bob was also a genuine master of several arts. When I knew him, he was the proprietor and resident genius of “The Swinging Bridge Pottery,” named for the suspension foot bridge swung across the Robinson River in front of the pottery, a quarter-mile upriver from my place. He did a large wholesale business in stoneware garden markers, each decorated with the name and a silk-screened picture of a different herb. But it wasn’t the money that kept him devoted to pottery – and he definitely was devoted. The art pieces he designed and crafted were responsible for that. And, to shorten this story a bit, that’s where the “pitcher trick” came in that Milady had referred to.

You see, Bob and I went into the arts and crafts business, opening the “Robinson River Trading Post,” a small scale emporium dealing in Bob’s pottery and the crafts of several other Madison County artists. Bob’s specialty – at least the one we sold more of than the others – was pottery pitchers, decorated with Bob’s own original art. Now, I don’t want you to believe that those pitchers would not have sold on their own merits, but I’m sure they sold faster because of the trick. “First, you show them how the pitcher balances when you hold it loosely by the handle, letting gravity do its thing. Then you tell them that the perfectly designed pitcher will have poured exactly one cup when that balance point is reached.” Bob then handed me a pitcher and added the coup de grâce sales pitch for the innocent customer. “You hand them the pitcher and let them balance it on their hand as you just did. Then you ask them to tilt the pitcher back and forth from upright to the balance position. After they’ve done this twice – no more, just twice – you point out to them that if the pitcher is a good pitcher it will get lighter as it finds the balance point. They will then tilt the pitcher themselves, seeing if this one is ‘good.’ If they do it more than twice, you’ve got a sale.”

And that was the trick. You notice, Bob didn’t say the pitcher will seem to get lighter. He said it will get lighter. And so it will. Try it yourself, with any pitcher. You’ll see that if you hold the pitcher with all five fingers and tilt it toward the horizontal, it will get lighter. We-l-l-l, not actually, but it will certainly seem to get lighter. The reason for this is simple. In the upright position, all the weight of the pitcher is on the top finger, but in the tilted position, the weight distributes across the whole hand. The pitcher will definitely feel lighter.
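The weight-distribution explanation can be put into rough numbers. Here is a toy model in Python – my own sketch, not anything Bob worked out, and the one-kilogram mass and eight-centimeter offset are invented figures – showing how the torque the top finger must resist shrinks as the pitcher tilts toward its balance point:

```python
import math

def finger_load(mass_kg, com_offset_m, tilt_deg):
    """Torque (N*m) the top finger must supply to hold the pitcher steady.

    Toy model: the pitcher's center of mass sits com_offset_m out from the
    grip; tilting toward the balance point shortens the horizontal lever
    arm, so the top finger carries less and less of the load."""
    g = 9.81  # gravitational acceleration, m/s^2
    arm = com_offset_m * math.cos(math.radians(tilt_deg))
    return mass_kg * g * arm

# Upright, the finger fights the full lever arm; near balance, almost none.
for tilt in (0, 30, 60, 85):
    print(f"tilt {tilt:2d} deg -> finger torque {finger_load(1.0, 0.08, tilt):.3f} N*m")
```

The total weight never changes, of course; only its distribution across the hand does – which is the whole trick.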

I will not recount for you the long hours Bob and I discussed this phenomenon in the context of philosophical empiricism, how the reports we make of the physical world are shaped by our perceptions as well as by reality. Instead, I’ll confess that the trick, in the possession of the master salesman – me – sold so many pitchers that Bob was kept up late making them. But this is by no means an apology. The pitchers are still worth every dollar paid for them, the more so now that the eccentricities of their creator have been lovingly recorded by a faithful Mouse. There is no trick at all in my telling you that graceful memories of Robert Van Kluyve will live at least as long as I do.

Saturday, April 22, 2006

A Mouse in Wolf's Clothing (Part VI)

The method. The prefix “meta-,” as in meta-physics, means “beyond.” Metaphysicians study those subjects that lie beyond physics, that is, those things that cannot be explained by a study of the physical world (hence, the scarcity of materialist metaphysics). A true materialist may wonder about metaphysical questions, but will never put trust in any of the answers. He may reply to “What’s it all about?” with a meaningful shrug, and to “Where did it all come from?” with a less meaningful equation, but his heart will be involved in neither answer. The term, “his heart,” will actually have no meaning for him (or if it does, his heart will not be in it). He will regard any emotional attachments as mere effects of atoms in motion. I have no idea how the materialist deals with his joyful attachment to materialism, but I am convinced that he does. Consequently – and almost by definition – the materialist divorces himself from metaphysical concerns. He remains comfortable with the road he travels, and would feel no urge to get off it.

The suffix “-logy,” as in theo-logy, derives from the Greek and, in nearly all cases, means “study.” Theologians thus study God (theo-). I would like to continue in this paragraph to parallel the previous one. But the days have passed when theologians could study God without considering that the physical universe might be an important aspect of God’s Being. No longer can they begin with the assumption that God is one thing, his creation another. They have to consider that the two may be one. Consequently, the theologian is less comfortable than the materialist on the road he travels, or at least less assured that his orthodox position can be maintained.

The belief is ancient in eastern cultures that God and His creation are one. The Tao-Teh Ching of Lao Tzu, which predates Aristotle by as much as two centuries, speaks of the unmoved and unmovable Tao, the nothing in and beyond all things. In the west, 2100 years later, Giordano Bruno spoke of body and spirit as one substance, and even though the details of his theories went up in smoke with his body, enough remains that we know he envisioned a theology in which God and His creation were viewed as one. Nowhere, however, in east or west, has the notion of Oneness been more convincingly described than in Spinoza’s philosophy. Those able to follow the sometimes tortured logic of his masterpiece, Ethics Geometrically Demonstrated, will see that reality is One, and that consequently, God and His creation must in some sense be the same. After Spinoza, those who doubt the oneness of God and Nature must either doubt existence itself or content themselves, as Aquinas suggested, with a religion or a metaphysics “founded on empty reasonings.”

Of course, nothing prevents the latter. Spinoza’s method and Aristotle’s were essentially the same – start with definitions and axioms and work logically top-down from there. Spinoza may thus be doubted. Aristotle’s passel of errors can be used – and is used – as evidence against all forms of deductive metaphysics. But Spinoza did his “meta-physicking” differently. Almost immediately he linked his metaphysics to the real world, and thereafter, got results remarkably different from Aristotle’s. He started with the most fundamental premise of the physical sciences, the idea that all effects have causes, and deducing from that idea all that it implies, he demonstrated that much of what we thought we knew about the world and about ourselves was flat wrong. It was still true that logical deduction added nothing new to our knowledge, but the fact had not been fully appreciated that logic, by removing errors, could also subtract from our purported “knowledge.”

The Aristotelian synthesis developed by Aquinas was burdened with premises that were nowhere near self-evident. It treated many of his world’s current and ancient beliefs as the truth and thus could deduce no metaphysics other than one that verified the status quo. Spinoza, taking the same general approach, selected beliefs that stood a better chance of being true. Aquinas could not have adopted the law of causes as a premise because the book that served as the source for much of what he treated as the truth contains scores of causeless effects, so-called miracles. If he had used the law of causes as an axiom, he would have been forced to question the stories in the book, and that he could not do.

Spinoza was under no such compulsion. The law of causes stands front and center in his philosophy.

From a given definite cause an effect necessarily follows; and, on the other hand, if no definite cause be granted, it is impossible that an effect can follow. (E1ax3)

This does not mean that all of Spinoza must be taken as the truth. It does mean that those who believe that the third axiom of Part One of the Ethics is true, and who wish to remain logical, must believe that nothing can happen without an intelligible cause.

Please note – and be guided by the fact – that the previous sentence contains two premises: (1) that E1ax3 is true, and (2) that the believer remains logical. Doubt either of those premises and all restrictions are removed from what anyone can believe. Those who fly airplanes into tall buildings and those who seek revenge in the bombing of innocent nations are equally justified. If there is no obligation to be logical, and/or no understanding that all effects have causes, there is no obligation to be morally consistent. But once we believe both premises, and treat them with reverence, we can then be fully justified in holding madmen “of all parties” accountable for their actions. We escape the horror of might makes right only by acts of reason.

Spinoza has done nothing that any commonsensible person cannot do. He has followed Bernard Lonergan’s advice (without reading it). He has seen the data and has made sense of it, asking again and again, “Is this the right of it?”, trying to get it right, until at last he has run out of questions to ask, and has thus arrived at what he calls “an adequate answer,” and what Lonergan calls “virtual certainty.”

The careful reader might murmur, “Why are these two smart guys so hesitant? Why only adequate? Why only virtual?” Complete answers to these great questions involve a much longer story than I intend to tell, but the short version runs something like this....

Answers about answers. [I will refer to Spinoza’s thought process, calling it his, but it’s anyone’s who chooses to think reasonably.] If Lonergan is correct when he says that we humans come to our defensible knowledge by way of data, by way of analysis, by way of judgment, in that order, then Spinoza, being human, and having produced a highly defensible body of knowledge, must also have done it that way. Baruch was not born with axioms in his head. He discovered them as Lonergan suggests we have all discovered what we validly may claim to know – by questioning what we think we know and why we think we know it.

Spinoza’s thought process might be likened to an “either-or” logic machine. Either the law of causes is true or it’s not. Either a thing is self-contained or it’s contained in something else. Either God is infinite or He’s finite. And so on. Spinoza chose one alternative then the other, and observed which could be made logically consistent with the other choices. As he proceeded, a coherent view of reality gradually unfolded. When he had resolved all the “either-ors” he could think of, he wrote the book and considered his work done.
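The “either-or” machine can even be sketched in a few lines of code. This is purely illustrative – the propositions and the consistency rules below are my own crude inventions, not Spinoza’s actual axioms – but it shows the shape of the procedure: enumerate the alternatives, discard the combinations that contradict one another, and keep what survives.

```python
from itertools import product

# Hypothetical either-or choices, loosely in the spirit of the paragraph above.
propositions = ["law_of_causes", "god_is_infinite", "substance_self_contained"]

def consistent(choice):
    """Crude, invented consistency rules linking the choices."""
    if not choice["law_of_causes"]:
        return False  # causation is treated as axiomatic (E1ax3)
    if choice["god_is_infinite"] and not choice["substance_self_contained"]:
        return False  # an infinite being contained in something else is incoherent
    return True

# Enumerate every combination and keep only the coherent ones.
survivors = [dict(zip(propositions, values))
             for values in product([True, False], repeat=len(propositions))
             if consistent(dict(zip(propositions, values)))]
for s in survivors:
    print(s)
```

Out of eight possible combinations, only a handful survive the rules – and that narrowing, repeated over every question one can think to ask, is the unfolding of a coherent view of reality.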

This does not mean that some later philosopher, scientist, or theologian will not think of questions Spinoza failed to ask. It also may be, as I said before, that the law of causes (or some other idea Spinoza treated as the truth) might be false. By claiming that Spinoza got it right, I do not deny that better thinkers might prove him wrong.

But so long as we trust the law of causes, and until we or someone else asks the questions Spinoza may not have asked, we can claim with adequate certainty that Spinoza got it right. If we have examined his premises, and found no fault in them, if we have looked and found no fault in his logic, then we are more or less obliged to trust that he has derived from his premises exactly what can logically be inferred from them.

But it’s really not all that important that we examine Spinoza’s premises, or that we believe his philosophy. It is, however, ultimately important that we employ his method, that we examine our own premises, that we ask all the questions, try to answer them, and finally run out of questions to ask about our beliefs. If we default to the much easier “Fiddle-de-dee,” Scarlett O’Hara syndrome, if we really don’t know (and don’t care) why we believe what we believe, we’ll have to acknowledge – personally – that we’ve opted out of the possibility of a world ruled by Reason. And since Reason, like any other talent, has real effects only when put to use, by defaulting to the unexamined, passive life, we’re in effect denying to ourselves – personally – the benefits of Reason and an understanding of all things reasonable. By taking that easy (and sure) road to serfdom, we’re saying – to put it more bluntly – that it’s really okay for foolish people to fly airplanes into tall buildings, and for other foolish people to bomb the shit out of innocent others.

But no problem. Either way, life goes on...or it doesn't.

[This concludes the six-part series.]

Friday, April 21, 2006

A Mouse in Wolf's Clothing (Part V)

Spinoza did not view human beings as bodies with souls in them, or as souls with bodies wrapped around them. The Spinozistic body and soul are not separate things, but are simply two different ways of looking at the same thing. With body and soul as two aspects of one thing, whatever happens in one also happens in the other. What appears as a motion of the body appears to the mind as the idea of the motion. As Spinoza says, “The order and connection of ideas is the same as the order and connection of things.” The problem of interaction disappears. Whatever happens in either the mind or the body happens simultaneously in the other.

Unfortunately (or fortunately, depending on who you are), while Spinoza’s rescue makes it possible for the body and soul to reunite, it also snatches the rug from beneath the feet of Descartes’ compromise. The reunification of body and soul denies them true separateness, and thus denies significantly separate roles for religion and science. Those things considered important and proper for religious man to think about, now become equally important and proper for scientific man to think about. As Spinoza might have said, “The order and connection of religious matters is the same as the order and connection of scientific matters.” In short, a man cannot truly exist who is religious but not scientific, nor a scientific man who is not religious. The scientist may not know he’s religious. The religious man may not know he’s scientific. And even when reminded of the inescapability of their dual nature, both may doubt that they are the other. But if Spinoza has got the straight of it, the truth of science (if there be such) constitutes a critically inseparable part of the religious man’s truth. That is, what’s true of God is true of the world, what’s true of the world is true of God.

Oh, and one other thing, if Spinoza is right, then the mainstream beliefs of western culture are essentially false. Aristotle permitted an error of gross proportion to endure in his work. Aquinas brought about a unification of the worst of Aristotle and the worst of religion, producing a work so shot through with metaphysical and theological flaws that by no stretch of the imagination can its literal meaning ever be made compatible with science. The Cartesian compromise, an error of first magnitude, left religion and science to walk their separate ways, both headed in the wrong direction. The common people of the world – those who are in fact the world’s critical mass – still think of science and religion as separate areas of concern.

One may doubt that science and religion will ever be routed back into the single path where they belong. Scientists trust nothing that cannot be demonstrated by repeatable experiment. Religionists regard such demonstrations as of trivial importance, with the real values resting upon articles of faith that, by definition, cannot be demonstrated. [See Hebrews 11:1.] Both the scientists and the religionists have so deeply implanted the separateness of their domains in the psychological makeup of the human race that, even if they were to admit their errors and seek to change direction, the chance seems remote that the new course would be adopted by humankind. Some stupid people are flying airplanes into tall buildings because they believe what their religionists tell them. By way of revenge, other stupid people, by implying that they speak to God, are justifying the destruction of innocent nations. So-called rational scientists, while admitting placebo effects, still deny the mind a place in the eternal chain of causes and effects. In some of their most socially meaningful theories (I have evolution theory in mind), they refuse – almost angrily – to admit mind as a causative agent. To the extent they believe that minds exist, they seem content to live with Descartes’ error. What method can possibly change any of these people – scientists and religionists – into truly rational human souls?

[To be continued]

Thursday, April 20, 2006

A Mouse in Wolf's Clothing (Part IV)

It may not be true that modern science started with Copernicus’s “simple” leap, but without going into unnecessary detail, I’m going to say that it did. In the next half-century, Tycho Brahe observed and recorded the seasonal movements of the planets. Johannes Kepler, using Brahe’s numbers, developed a mathematical equation describing the orbits of the planets. Galileo defended the Copernican system and, for doing it, got in trouble with the church. The fact that he recanted under threat from the Inquisition did not, and could never have, impeded the growth of the scientific spirit. After Copernicus, Aquinas’s diligent knitting together of science and religion started to unravel.

Spinoza to the rescue. To understand Spinoza, one must first understand Descartes. Like Aristotle, Descartes, the inquisitive Frenchman, was a beast with more than one head. Today, he is more famous as a mathematician than as a philosopher. His Cartesian coordinates still drive fledgling algebraists up the wall with their clever plotting of positive and negative numbers and plain old “signless” zeroes. Also like Aristotle, Descartes’ reputation as a philosopher rests more upon the method of his inquiry than upon its product. Aristotle had his logic, Descartes his doubts. Both are also remembered for the importance of their mistakes. We have more or less outgrown the errors embodied in Aristotle’s metaphysics, but Descartes’ blunder is still with us.

Descartes grounded his philosophy in the notion that only those things that present themselves to us clearly and distinctly can be treated as facts. No problem, but the Christian religion included several fundamental beliefs that were clearly and distinctly not clear and distinct. No one claimed to understand the mystery of redemption, the arithmetic of the trinity, or the miracle of the Eucharist. Descartes saw where his demand for clarity might lead. He believed, as did most everyone else, that people conducted themselves in a moral way only because they feared hell’s fires and/or desired to live in heaven’s eternal peace. He thus believed that if the common people ever came to doubt their religion or the authority of the church, all hell would break loose. Descartes saw a way to resolve the problem. His “clear and distinct” philosophy included the assumption that mind and matter (Thought and Extension) were two fundamentally different substances. With a mere twist of his philosophical wrist, Descartes negotiated a compromise between religion and science. He delegated governance of the spirit (a mind thing) to religion, while reserving for science the study of the material world – a Cartesian rendering of Caesar from God, and God from Caesar.

Since some elements in the church hierarchy (the power in those days) regarded “the flesh” (the physical world) as evil, Descartes’ compromise caught on almost immediately. The church considered itself privileged to be left in charge of the immortal souls of men, “clearly and distinctly” their better part, with scientists free to make what they could of the corrupt and despicable “flesh.” So the compromise looked like a great idea. Unfortunately, it rested upon the mistaken belief that mind and body could be separated.

The excommunicate Jew, Baruch Spinoza, immediately saw the mistake. Well schooled in Descartes’ philosophy and the Hebrew religion, and self-taught in the philosophy of the schoolmen (Aquinas & Co.), Spinoza was convinced that only one true substance could possibly exist. Common sense told him that material things could cause effects in other material things, and that mind things (ideas) could lead to (or cause) other ideas. But no material thing could cause an effect in a mind thing, nor a mind thing in a material thing. Descartes had also seen this problem and had insisted that the mind and the body were two different substances but that they interacted at the pineal gland, a place in the brain. Spinoza – stuck on common sense – realized that the pineal gland, though small, was still physical and the mind still incorporeal, so it didn’t much matter that Descartes had a name for the interacting place. A spiritual mind and a physical body would still have to interact.

But if Spinoza’s commonsensical idea was correct – that body and soul could not interact – Descartes’ division of body and soul into two separate regimes would imply that the soul could in no way participate in the day-to-day activities of the body. Even if we imagined the reality of something like human Will, it would still be a mind thing with no means available to it to control or even to participate in the life lived by the body. We might have a soul, but it would do us no good whatsoever.

What’s a poor mouse to do? Well, as I said, “Spinoza to the rescue.”

[To be continued]

Wednesday, April 19, 2006

A Mouse in Wolf's Clothing (Part III)

Before the Christian conquest of Moorish Cordova in 1236 AD, the European world had been essentially ignorant of Aristotle and his logic. Men were no doubt logical before 1236, in that they always seemed to know the difference between “is” and “is not.” We might reasonably conclude, though, that Christian theologians before the liberation of Cordova felt no compulsion to take seriously any “pagan” arguments related to the nature of God.

We may also conclude that the theologians were not the only class of people awakened by Aristotle’s insights. We are forced to observe – because it’s true – that some of the awakened souls took up a different aspect of Aristotle than Aquinas did. The free thinkers – if we may call them that – were not so much in thrall to Aristotle’s philosophy – deduction from first principles – as they were to Aristotle’s other calling. Aristotle was not only a great philosopher; he was also a great scientist. He was the first man to group organic things by types. He did that by looking at them, describing them in detail, and taking note of their similarities. Of course, he made mistakes, but when you consider the superstitions and wild-ass schemes he had to overcome in order to think as clearly as he did, he stands out like a star in the east.

But as great as he was as both scientist and philosopher, Aristotle never made a consistently logical connection between his purely metaphysical studies and his work in science. He must occasionally have imagined that the objects he studied as a scientist (by looking at them) and the Prime Mover he studied as a metaphysician (purely by thinking) were connected by a chain of causes and effects, but he apparently never made a serious effort to spell out that connection. Reasoning metaphysically from the top down, he had compartmentalized his studies, giving everything a place in an ordained and unchanging hierarchy, a sort of divine ladder. He placed the human world higher than the world of the lower animals, the animals higher than plants, and plants higher than earth. He similarly located the four elements, earth, air, fire, and water, with fire being “up” and earth being “down.” If Aristotle had looked carefully (or seen better) he might have attributed the “upness” of fire and the “downness” of earth to physical causes. Instead, he placed them in an imaginary structure. Given his metaphysics, it would have been difficult for Aristotle to learn that objects in motion tend to remain in motion.

The awakened scientific minds of the 15th, 16th, and 17th centuries began to suspect that meaningful relationships between air, earth, fire, and water could be learned by Aristotle’s “scientific method,” that is, by taking a look. The scientists of the Renaissance were not seeking merely to place things in a proper taxonomy. The “modern” lookers were seeking to understand the behavior of things. Why does smoke rise? Why do falling objects accelerate? And because mathematics had also been discovered almost whole in Cordova, the lookers began to wonder if the certainties of that science could be used to express and predict the behaviors they were seeing. They also began to think the unthinkable, that in places, the holy books might be wrong.

Some of Aquinas’s own religionists contributed to the scientific movement. The Franciscan friar William of Ockham, no stranger to “scholastic stink,” declared that, among alternative theories, the better is the one that explains the observed data in the simplest way. Ockham was actually trying to defend traditional faith against the Aristotelian “paganism” introduced by Aquinas, but Ockham’s Razor – as his idea came to be called – served doubt just as easily as it served belief. The Ptolemaic theory – that the earth was at the center of the cosmos – had been accepted for 1600 years. Like most false theories, it started out simple, but became more and more complicated as new facts about the movements of heavenly bodies came to light. If Ptolemy’s theory had remained simple, Copernicus might never have been moved to seek and discover the truth.

But sometimes simplicity also delays discoveries. Two thousand years before Copernicus, a Greek by the name of Aristarchus had come up with the right idea: that the sun, not the earth, was the center of the solar system. Another Greek had calculated the distance between the earth and the sun, making it out to be 83 million miles (only 10 million miles short of the truth). That calculation proved to be the downfall of Aristarchus’s theory. If it were true that the sun was at the center, and the earth a mere orbiter, and if it were true that the earth was 83 million miles from the sun, then the diameter of the earth’s orbit around the sun would be 166 million miles. If those premises were all true, then “how come” all during the year the fixed stars always seemed in the same relationship to the earth? If the same star were viewed from two different places 166 million miles apart, it should appear to have moved a great distance across the field of view. The Greeks abandoned Aristarchus’s theory because they could not conceive that the stars were so far away from the earth that 166 million miles would be less than a fly speck. Copernicus simply accepted Aristarchus’s premises and concluded that the stars are vast distances away from us. Perhaps fearing for his immortal soul, Copernicus permitted his theory to be published only after his death (or shortly before, if some stories can be trusted). Considering the vastness of the universe implied by his calculations, Copernicus must surely have wondered with the Psalmist, “What is man that Thou art mindful of him?”
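The Greeks' objection can be made concrete with a little modern arithmetic. The sketch below uses modern values for the earth–sun distance and the distance to the nearest star (Proxima Centauri) – figures the Greeks of course did not have – to show just how small the annual shift of even the nearest star looks from opposite ends of the earth's orbit:

```python
import math

AU = 1.496e11            # meters, earth-sun distance (modern value)
LY = 9.461e15            # meters per light-year
d_star = 4.246 * LY      # distance to the nearest star, Proxima Centauri

baseline = 2 * AU        # the "166 million mile" orbital diameter, roughly

# Small-angle approximation: apparent shift = baseline / distance (radians)
shift_rad = baseline / d_star
shift_arcsec = math.degrees(shift_rad) * 3600
print(f"Apparent shift of the nearest star: {shift_arcsec:.2f} arcseconds")
```

The answer comes to roughly one and a half arcseconds, about a thousandth of the apparent width of the full moon, far below what naked-eye astronomy could ever detect. Copernicus's conclusion that the stars are vastly distant is exactly what this arithmetic demands.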

[To be continued]

Tuesday, April 18, 2006

A Mouse in Wolf's Clothing (part II)

Skipping ahead to the 13th century we find Thomas Aquinas weaving the divergent threads of Aristotle’s logic and the revealed religion of the prophets into a unified fabric. Unfortunately, in the centuries between Aristotle and Aquinas, a remarkable thing had happened. The followers of three of the great western religions had become “people of the book.” God’s word had been written down. It was one thing for God to reveal Himself to prophets, priests, and shamans, quite another to have His revelations recorded in a book for all to see. Prior to the book, God’s prophets, doing what advisers to presidents, kings, and prime ministers do, could render practical prophecies, appropriate to their time and place. They were under no compulsion to be consistent. But the written and authoritative words of God compelled the prophets, not only to give good advice, but to maintain uniformity with past prophecies. If, for example, it were written that God had told an ancient prophet that a certain piece of real estate was to belong to a particular tribe forever, that fact had to be taken into account.

The written word of God had another awkward effect. It documented some of the myths the ancient people had used to explain things to themselves. As human beings gradually learned more about the causes of things, it became apparent that the myths sometimes contained false or conflicting details. But because the books were taken as the word of God (or Allah), the myths were difficult to disbelieve. Until the 16th century we believed the earth stood at the center of the universe. More outrageously, we believed until a much later date (early 19th century) that the universe was created in 4004 BC. Many of us still believe that the universe and everything in it was created from nothing in six days.

So when Aquinas sat down to mend the rift between reason and revelation, he was forced to regard every declarative statement in the book as a self-evident truth. Every one of God’s pronouncements presented him with another unalterable fact. In several ways, that made his job easier. For one, the book provided a reasonably accurate historical structure upon which the acts of God and man could be arrayed. Aquinas could also simply disregard as errors anything Aristotle or anyone else had said that contradicted the book. But the book’s drawbacks far exceeded its benefits. The myths had to be made to fit with physical and historical reality. Will Durant wrote, for example, that Aquinas more or less agreed with Aristotle and Averroës (a Moslem philosopher) that the world is infinite, and that “the arguments offered by the theologians for creation in [historical] time are weak, and should be rejected ‘lest the Catholic faith should seem to be founded on empty reasonings.’” Durant goes on to say that Thomas nevertheless concluded that we must believe on faith the creation story as related in the book. Thus, for Aquinas, the precedent in the book had to be followed regardless of what reason may have told him. Aquinas also described in great detail a hierarchy of angels and archangels, assigning duties to each level of angel, establishing their existence as a matter of logical fact.

Despite his proclivity toward credulity, no one should dismiss Aquinas altogether. Even though cynics like James Joyce might refer to Thomas’s work as “scholastic stink,” Aquinas was a genius who understood his first principles and, using Aristotle’s logic, deduced from them a complete and coherent religion that contradicted nothing in the book of God’s word. He even managed to demonstrate how some of the traditional (non-scriptural) practices of his church fit into the saga of God and man. By Aquinas’s inspired reasoning Aristotle and the prophets were made to speak with one voice. But Aquinas had overlooked an important reality – the human desire to know knows no boundaries.

[To be continued.]

Monday, April 17, 2006

A Mouse in Wolf's Clothing (Part I)

I have burnt out 23 brain cells trying to identify and communicate a way out of the difficulty Plato let us in for with his dialectic. He asked us to question our beliefs, but the demonstrations he gave us (Socrates versus the world) involved a smart guy questioning the beliefs of lesser men. This set us up for rule by elitist deception, a solution most folks would not choose. Philosophers don’t even know what “dialectic” means. Plato said it was the key to genuine knowledge, and yet never in so many words described it. The Greek word for “dialectic” means conversation, something like dialogue, and sure enough, Plato presented all of his ideas as conversations. But then came Plato’s pupil, Aristotle, telling us dialectic is nothing more or less than formal logic, a conclusion which, if correct, would put an end to the Greek meaning of dialectic; logic resembles conversation only when carried on among pointy-eared people.

After Aristotle’s assessment of dialectic as “nothing but” logic, skeptical people began to doubt that any form of human reason could lead to unquestionable truth. We learn nothing from logic we did not already know from our premises. We declare that all men are mortal and that Socrates is a man, so we add nothing new when we conclude that Socrates is mortal. Aristotle realized this weakness. He felt logic could be used only to test hypotheses, not to form new ones. Logic is thus a one-eyed guide similar to Socrates’ Daemon, telling us when we are wrong, but saying nothing about what’s right. Driven by the emptiness he perceived in logic, Aristotle came to believe that the path to true and everlasting knowledge had to start with self-evident first principles that needed no proof. Properly applied logic would then assure that we admitted as fact only those ideas derived from “absolutely true” first principles. Aristotle called his most famous first principle “First Cause,” by which he meant God.

Several centuries before Aristotle went into the philosophy business, people living all over the world had found a different path to knowledge. Driven by the same desire to know that drove the Greeks to their philosophies, an even more ancient inquisitive people had also asked the what-and-why-fore of the world around them. Thinking that everything must have a cause, and unable to see the cause of many things (like their own existence), they derived their own first cause and also called it God. Aristotle got to the supreme being by reason, the prophets by revelation. They both had found ideas that were too good to doubt.

Many people regard revelation and reason as remarkably different concepts. Actually, they’re not too far apart. Both begin with an idea that supposedly needs no proof, and their proponents believe nothing that cannot be shown to be consistent with that self-evident idea. Reason and revelation finally drift apart, but not because either of them is illogical. Neither is. They diverge because the God revealed to the prophets was almost invariably a “person” who was concerned for those to whom He had revealed His truth.

And why not? God would surely not have bothered to reveal Himself to human beings unless He cared for them. A neutral God might as well have remained silent. But God was not neutral. He spoke to His prophets, and they spoke to the people’s leaders. The nations were thus ruled (more or less) by the indirect word of God. I say “more or less” because, politics being what it is, the kings often disregarded the prophets’ advice, and this always led to trouble. God cared for His people, so He was often forced to chastise them for the acts of their disobedient rulers. He thus became not only the First Cause, but also, on occasion, became the immediate (or proximate) cause of things that happened in the world. He parted the Red Sea. He caused the sun to stand still. He sent His son into the world. Involvements of that sort do not follow from Aristotle’s version of First Cause/God.

Aristotle’s God has another name, Prime Mover. When God goes by that name He can be recognized as a God who, after setting things in motion, steps back from His work, no longer concerned. This is the God of many of our nation’s Founding Fathers, the deistic God, the Masonic God, and perhaps the God worshipped by scientists who claim to worship God. Whoever they are, those who believe in the deistic God believe that the causes of things can be found, not in the willful acts of God, but among things themselves. The scientists, in particular, would frown upon a God who changes the laws of Nature. How could they ever figure things out if the rules were continuously changing?

[This will -- I think -- culminate with an explanation of why people fly airplanes into tall buildings.]

Saturday, April 15, 2006

A Mouse-child's Mardi Gras Memory

There are no cats on Dauphin Street. They’re in the tamales, father said, sold by the hairy-handed man wearing the monkey suit who turns the crank on his hurdy-gurdy Maxie called it when the parade came late and cotton candy smelled of dinner time. The child was lost, long before, afraid and wandering in and out of the great cathedral, silent afraid to wake the saints sleeping ‘neath the 3-ply carpet there, where stood the marble bowl with the blood of Jesus and who would think to drive nails into the feet of the nice-a bearded man who suffered the little children. Wooly-eyed priests lounge forlorn on the concrete porch, Lent-fearing paraders floating go by, catching with their empty prayers virgin serpentine still wrapped in cellophane, un-pierced smelling of Mobile summer still here in February not quite gone. If I stand here by these backward collared killers will I die as mother claims? Saint Bartholomew murdered her mother four centuries before, 55,000 on his day, killed them with fire, stood them beside a post, kindled round with kindling and kindly set them all afire, their smoke smellable 10,000 years. I will go hence from here to wait by the scale they weigh on at the bakery where pictures of cream puffs line the walls and gracious odors lading the air with the smells of wealth. She will be there where the scales are free and the great moon face of the dial where the weights are told shines like the moon.

She must have come . . . at last . . . or the mouse-child would still be lost . . . in Wragg Swamp watching Comic Cowboys rope Hitler and Mussolini, selling their carcasses still warm to the Krew of Columbus revelers, Death chasing Life around a pole, beating the flying fire out of him with what Momma said were inflated pig bladders but looked to the child like hard balloons with exploding firecrackers set off by Life's sizzling ass when Death's aim was good . . . and confetti fell like stars in distant skies along the streets stunk up by burning flares the night the floats came by, pulled by careless horses men were paid (they said) to clean up after . . . and now that the days have died, seems a helluva sorry way to make a living, no way to see the floats and the drunken mountebanks, masked to hide their moles, she said, beauty best beheld beneath dark veils, or so said sweet Isabella bargaining her virtue for a guiltless sibling's life in a Mardi Gras made real on a London stage.

Dreams no longer dreamt, golden crosses bent by falling planes and dying men . . . in a heaven of steeplejacks working without a net. Brave days.

Friday, April 14, 2006

The Rights of a Mouse

Human beings, by their nature, have identified – and accepted – the criteria by which we learn the truth of nature. We are, therefore, free to make mistakes. Few would argue with the conclusion that we sometimes get things wrong, but ubiquitous others would reject the idea that we ourselves have made up the rules for telling the difference between right and wrong. But even in the simplest cases, we cannot pass the credit to external nature or to God. We learn, for instance, that we cannot cause our bodies to pass through solid objects, like closed doors. Nature dictates that doors must be opened before passing through them, but what we learn from the nature of doors and other solid objects remains exactly nothing until we make in our minds (or our minds make for us) a rule to be obeyed. It is conceivable that we might forever continue bumping into doors. That’s what rocks do; they just keep on bumping into each other, never quite realizing there’s anything wrong about it (which, of course, there isn’t...for rocks).

If all lessons were as easily learned as the lesson taught us by collisions with closed doors, questions of right and wrong would perhaps never come up. But difficult cases do come up, and one of the reasons they do traces to carelessness in the way we use the word “right.” We know that it is right to open doors before walking through them, but when we elevate the rightness of that practice into a “right,” as in “I have the right to open doors before walking through them,” we’re talking nonsense. If I am healthy and human, and the door can be opened, I have the power to open the door. I can turn the handle and pull (or push, as needs be). But when I equate my power to act with a natural right I set myself up for a gross set of errors. Rights come into being when two or more creatures agree to restrict their power. In a state of nature, as imagined by Hobbes and others of his ilk, nothing like rights would exist, only powers. Acts of agreement must take place before the word “rights” has any meaning.

In the case of humans, we may, in a state of nature, have possessed the power to acquire and keep property, but we clearly, in that state, would have no right to property, since someone stronger than ourselves may have the power to take it from us. Without some sort of agreement, and the means to enforce it, the stronger human could do as he pleased with impunity. Rights would not exist, only powers.

It may be argued that in a state devoid of formal agreements regarding rights, those possessing property have the right to defend it. But what in this case we have referred to as a right is in fact only a desire. The very act of possessing property implies a desire to keep the property. A Neanderthal may possess a favorite club, and desire to keep it, but our Neanderthal has a right to his club only after he and those similarly inclined have expressed their desires, and agreed among themselves to respect each other’s property. Rights are what others must respect. In the state of nature, the only respect you and your property will get is that which your power commands. Until we reach agreements establishing rights, what we are prone to treat as “our rights” is nothing more (or less) than the feeling we get when we encounter obstacles to the fulfillment of our desires.

Imagine a man, Freddie, who possesses a gun and feels that he has the right to possess it. Another man, Adolph, has a bigger gun and feels that he has the right to possess Freddie’s gun in addition to his own. The conflict between Freddie’s and Adolph’s “rights” comes about because they’ve used the word “right” wrongly. If we say that Freddie and Adolph both desire to possess Freddie’s gun, the conflict disappears. Freddie’s right to his gun becomes evident only when Freddie and Adolph have agreed between themselves (or both live in a community that has agreed) to recognize the rights of property. Otherwise, “might makes right.”

But even in the great western democracies, the notion that “might makes right” is not dead. It appears in one of its sneakiest forms when the winner of a gubernatorial or presidential election claims that, on the basis of his victory, he has a “mandate” to put his pet theories into effect. With the entire might of the state now at his disposal, he feels the laws of the land that stand in his way can justifiably be changed. This might be the case if his victory were of landslide proportions and if he had made his intentions clear during the election campaign, especially if his intentions were singular. But if he ran on a platform of many issues – which is normally the case – it is quite likely that nowhere near a majority of the electorate approved of every one of his proposals, perhaps none of them. But no matter. He now feels he has a mandate...and might makes right. Such claims are to be especially feared when the normal checks and balances of democratic systems have been undermined by one party’s domination of all branches of the government. The diffused might of a republic then becomes the concentrated might of an autocracy.

This is not to say that an elected official must never seek to implement his ideas. It is to say that the use of terms like “mandate” should be treated as propaganda, nothing more than an official’s attempt to ascribe to his ideas a power they may not possess. If such a claim were not intended as propaganda, it should be treated as a prosecutable usurpation of power.

To bring the matter of “rights” closer to home, consider the Bill of Rights as detailed in the United States Constitution. We’ll take the first of them as the one containing the most fundamental of the rights established by that document.

Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.

The lead words, “Congress shall make no law,” are often taken to imply that the various rights enumerated in the First Amendment were always and ever natural rights, and that the amendment aims only to protect those natural rights from Congressional interference. To demonstrate the error in thinking that way, let’s look at the first right mentioned, religion.

We can grant that in a state of nature certain individuals may experience religious feelings which they desire to express in a ritualistic manner. And we may grant that other individuals in that same state of nature may desire to express their religious feelings differently. So long as the parties find no cause to interfere with each other’s rituals or beliefs, no question of rights would surface. All would have the power to fulfill their religious desires. But when one party decides that another party’s rituals are wrong and decides to do something about it, the question of rights may come up. If their society has not evolved to the point where agreements are possible, the parties may simply fight it out. Might would prevail, and the development of legal rights would remain a non-starter. But if we assume a more developed community, the parties – after a few brawls – may come together and reach some sort of “religious rights” agreement. They may decide either to permit only one practice or to permit (or deny) them all. Great Britain, from which most of the original settlers of the American colonies emigrated, had formalized an agreement in which only one form of religion was to be practiced. That agreement did not sit well with many of the citizens of that nation, and some of them left precisely because they felt their religious desires were being unfairly restricted. They did not like the rights agreement their government had imposed.

But religious rights for Brits did not exist before the Reformation. No one had seriously questioned the right to be a Catholic for several centuries, but the right to question the authority of the Roman Church had been warmly suppressed on many occasions. The right to practice a religion other than Catholicism came into being (permanently) only when the ideas of Martin Luther, John Calvin and other protestors won the day. Following Henry VIII’s separation of Britain from the Catholic church, Brits could have established a liberal religious rights clause in their body of law. Instead, they cast aside that possibility in favor of a one-religion policy, in effect, establishing rights exactly as they had been established in the past, merely substituting the Anglican faith for the Catholic.

America’s founding fathers, well aware of the turmoil religion had brought about in the mother country, had to decide what to do about religious rights. They opted to say in their constitution that the government of the United States was not going to get involved in religious matters, that they were going to leave it up to individuals to decide which, if any, religion they wished to practice. The right to absolute religious freedom for American citizens thus came into being in 1791 when the First Amendment to the Constitution – along with nine others – was ratified into law. The American courts have decided that, by implication, the First Amendment obliges the United States government to assure that no one tampers with any citizen’s religious desire, or their desire to be non-religious.

Similar analysis of the other rights protected by the First Amendment would reveal similar histories, each of them coming into being out of the evolution of agreements, some of which succeeded, most of which failed. That the American agreement – as reflected in its constitution – has succeeded as well as it has, traces to the fact that the orderly structure created by the American Constitution multiplies the people’s power to satisfy their desires. Within the broad confines of that structure, everyone has the freedom to desire what they will. In the state of nature we could fulfill our desires only to the extent of our individual power to do so (or to the extent of our power to create and maintain cooperative order). In a constitutional government, where the full might and power of the state protects and promotes legal cooperation among people, desires can be more easily satisfied.

More meaningful questions concerning the evolution of rights come up when we take seriously the possibility that the so-called state of nature might never have actually existed. Had the Hobbesian state prevailed on earth, would it not be true that none but the powerful would have survived? How would rights have come into being if the strong were always in charge? To answer these questions, I would need more space than I wish to take (for free) on this particular morning.

Hmmm. Perhaps my decision to abstain from taking advantage of "free stuff" moves in the direction of an answer. I'll give this some thought.

Tuesday, April 11, 2006

A Mendacious Triangular Trade (Part XI)

So, here in the States we find ourselves in a tricky situation. In the midst of a push by the most populous nation on the globe to replace us as the economic superpower, we have set out to democratize the Middle East by military -- i.e., costly -- means, an undertaking that could help defend the nation against the real threat only if it succeeded in choking off oil to China.

Quite coincidentally, the local political scene has been co-opted by a two-pronged attack from the far right. One jaw of the pincers hinges its forces on so-called "growth," the other on an equally misnamed appeal to our "moral sensibilities." The American people seem to have completely overlooked the fact that these two forces are perennial opposites. The driving force of economic growth has always been greed, every man for himself in a struggle for wealth. Religious man has (admittedly less noticeably) centered his inmost desire on caritas, or agape, lovingly translated in First Corinthians 13 as charity. Hard to see how those opposites could have gotten together, but they have, and that's a fact.

I will leave analysis of how this happened for another time. I wish today to focus on one of the unintended, and possibly beneficial, consequences of this strange alliance. I say "possibly beneficial" only in the long-term sense. In the short term -- the next three to ten years -- the policies currently being pursued by America's leadership are almost certain to result in painful sacrifices, especially for the American middle and lower classes. The "growth" mongers seem irretrievably committed to the libertarian dream, a government that functions only as an army, a court, and a police force. But because the other side of the alliance remains entrenched in the entitlements status quo, the drive of the "greeds" toward "no more taxes" will collide with a rock-hard base of nondiscretionary spending commitments. Unless the libertarians give up the chase -- which I doubt -- virtual bankruptcy will result.

To urge the slide toward America's capitulation, the Chinese will (in a couple of years) begin to restructure their currency reserves, shifting from dollars to euros (or the yen, which has firmed as the Japanese economy recovered). To do this, the Chinese will not merely refrain from buying U. S. debt, but will begin slowly to sell off the bonds they currently hold. This will pressure the dollar to the point that the interest rates the U. S. must offer on its debt will be forced to climb. When that happens, the American economy will stagnate, shiver, and collapse. It will not help a bit at that point if sanity returns to the government. The damage will be done, and in the real "new American century," the U. S. will have been reduced to a third-rate power. [The Brits will fare better. They have more experience with economic warfare than we do.]

But as I say, the alliance between greed and charity may have a beneficial effect in the long run. If the religious right finally catches on to what is expected of it in this game, it will use its holy pulpit to assuage the fears and pains of the faithful, enabling the great masses of the marginally employed to bear the depreciation of their wages, which is, at bottom, the one great hope for the American economy. The Chinese skirmish line cannot hold if the difference between American and Chinese wages narrows so significantly that shipping costs can make up the difference in the cost of Chinese- and American-made goods.

American policy should, therefore, in the meanwhile focus on steeling the American workforce for the hard times that are bound to come, while at the same time, doing all it can peacefully to increase the level of unrest among the wage earners in China. The U. S. government might also seek to balance its spending with its income. If that means more taxes and fewer entitlements, so be it. The stakes are too high for quibbling on the details of the defense.

True, in the on-going meanwhile (where we all live), if matters proceed as they are for too much longer, the American manufacturing infrastructure will have shriveled almost to non-existence. But in the long run the American economy will be improved by the leveling of wages to something near the world median. That will be painful, especially if the government and its sacerdotal allies do not see their duty clearly and do it. If the ruling powers could see fit to come clean with the American people about the real threats they are facing, giving up idealistic ventures into realpolitik, the nation might even survive as a world power.

We shall see.

[This completes my sometimes diverted analysis of "a mendacious triangular trade."]

Sunday, April 09, 2006

A Mendacious Triangular Trade (Part X)

Before addressing the economic impact of America's "immigration policy" I need to add a couple of things to my remarks of yesterday, particularly those relating to China's policy of lending money to its customers. I'll do that and then get back as quickly as I can to the "Mexican problem."

Unlike a capitalist government, the Chinese government cannot lend large sums to its local entrepreneurs, primarily because there are so few of them, and secondarily because China seeks to limit the growth of privately owned industry. What better way than to dribble capital to them by measured degrees? China is thus forced to distribute its considerable profits elsewhere. Three distinct "consumers" come to mind: (1) increased wages for its industrial workforce; (2) improvements to its industrial infrastructure; and (3) loans to foreign governments.

As I briefly mentioned yesterday, China's overall economic policy would be disrupted by increasing wages for those working in "the trade," so that avenue, though receiving dribs and drabs, consumes a very small proportion of China's surplus. Infrastructure investments were front and center for the first three decades of Red China's industrial growth, but after the break with the USSR was patched, and after rapprochement with the U.S. (in Nixon's and Carter's years), the Chinese gradually shifted toward investing in foreign debt. China recently replaced Japan as the world's largest holder of reserves denominated in foreign currencies.

So, why would China become a major league lender -- apart from, as I said yesterday, keeping its customer base solvent? One of the reasons is implied above. As the outright owner of the industrial base, China could earn no profit -- of the interest-on-money sort -- by lending to itself. A second reason lies in the very practical fact that a creditor is not likely to do great harm to its debtors. By holding the paper of its near neighbors, China lessens any perception of itself as a likely invader. Even the Taiwanese, as much in debt to China as anyone, do not seem to fear an outright takeover, though there are probably other reasons, hidden from view, that lead to that confidence. I suspect that the rate of interest China charges Taiwan businesses is high enough for China to refrain from disturbing what may be a major source of income. Similar reasons underlie China's very liberal relationship with Hong Kong, which it now owns and could expropriate with impunity if it chose.

This same line of reasoning does not, however, explain China's proclivity to lend money to the U. S. The restraint that creditors show toward their debtors does not work the other way around. Some historians have even suggested that one of the primary reasons for the American revolution was the size of the debts owed by American land owners to British banks. China must certainly fear the military might of the U.S. and the warlike nature of its leadership. The Americans have certainly never demonstrated a reluctance to take dramatic action in the face of "the yellow peril," though Mr. Eisenhower showed such restraint when he denied France a nuclear solution to its impasse at Dien Bien Phu. I suspect that China's neutral position on our Iraq and Vietnam wars is explained by China's desire to see America expend its belligerent energies. Certainly, the current conflict must please the Chinese. America's resources are wearing altogether too thin to imagine that any American administration within the next decade or two could seriously expect success in a war against a major power. China's similar distancing of itself from Israel and its Arab opponents speaks to the same "let's you and him fight" motive. The Chinese have no doubt read the Tao Teh Ching more carefully than we have, with its advice to rulers that they should watch as others fight.

Well, that's about all I wanted to add to yesterday's "China thing." Now to get to the promised topic.

From an economic point of view, the large influx of Mexican labor is a win-win affair for both nations. The Mexican people obviously benefit; back home they would live off a subsistence economy, trying to beg and barter a living from the wealthy families that rule that oligarchic land. In America, though the wages paid to immigrant labor are not all that great, they exceed the zero the Mexicans were earning at home. So accustomed are the Mexican people to living on virtually nothing that some of the illegals are able to send a part of their wages to their relatives in Mexico. That pittance, small as it is, represents the only conceivable economic drawback to the situation as it might be seen from the American point of view.

The benefits far outweigh that deficit. The approximately two-and-a-half million undocumented immigrants in this country earn about $2.30 less per hour than would Americans doing the same work. Given a 40-hour work week, that amount represents almost 12 billion dollars in saved labor costs per year. [(40 x 52) x $2.30 x 2,500,000] And given that nearly all the earnings of immigrant labor are spent in the U.S., that amount is practically all scored to the profit side of the national ledger.
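For readers who want to check the arithmetic, here is the calculation spelled out. The inputs are this post's assumptions -- a $2.30-per-hour wage gap, a 40-hour week, 2.5 million workers -- not official statistics:

```python
# Annual labor-cost savings implied by the post's figures.
# All inputs are the post's assumptions, not official statistics.
wage_gap_per_hour = 2.30       # dollars per hour less than native labor
hours_per_week = 40
weeks_per_year = 52
workers = 2_500_000

savings_per_worker = wage_gap_per_hour * hours_per_week * weeks_per_year
total_savings = savings_per_worker * workers

print(f"per worker: ${savings_per_worker:,.0f}")        # about $4,784
print(f"total: ${total_savings / 1e9:.2f} billion")     # about $11.96 billion
```

The per-worker figure, roughly $4,800 a year, reappears below in the discussion of border apprehensions.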

From this huge savings must be subtracted the costs associated with schooling the children of immigrant families. I do not know the exact number of children involved, but I do know that roughly three-fourths of the laborers are here alone, their families, if they have them, remaining in Mexico. If those roughly 600,000 workers with families here -- in some of which both husband and wife work -- have two school-age children each, the total cost of schooling the 1.2 million children (at $5,000 each) would be about six billion dollars. As can be seen, the savings exceed the costs by nearly six billion dollars.

The U.S. does spend a large amount of money patrolling its borders, but most of that money would be spent even if the number of illegal immigrants were much smaller. If the numbers cited above are anywhere near correct, every would-be migrant worker apprehended and sent back to Mexico costs the U.S. economy about $4,800 in labor cost savings. It nevertheless does not follow that the U.S. should open its borders to anyone who wishes to immigrate. Even if, in a perfect world, the job market might be the best way to control immigration, the turmoil created by fruitless migration would certainly place a burden on America's social services and on the unemployable immigrants. Still, a guest worker program open to a great number of people would probably result in a more rational form of control than the current system, which is essentially no system at all.

But of course, economic values are not the only values involved. Unenforced laws, however foolish some laws may be, detract from the credibility of all laws. The status quo, in which immigration laws are applied on a catch-as-catch-can basis, claws at the foundation of lawful society. A documented guest worker program would at least restore a modicum of respect for the law and, more importantly perhaps, remove "illegal immigration" from the political arena, where both the demagogues and the "bleeding hearts" are wasting much of their time and energy.

In the meanwhile, the American people might want to see it this way: Mexico is "exporting" a commodity to us, free of charge, a commodity worth approximately six billion dollars per annum. It would be nice if they would continue to make that gift to us, but alas, I fear that if the migration northward continues at the current rate, the supply of exportable Mexican labor will be exhausted before the end of the next decade. But then, as they say, all good things must come to an end.