A Revolution In Information Technology

On or about this date – some sources cite February 24, others February 23 – in 1455, a watershed event in human history occurred. There was no upheaval, no violent conflict, nor even much note taken of the event at the time, yet it arguably changed the world more profoundly than any other single event of the last millennium. In the town of Mainz, Germany, the workshop of goldsmith Johannes Gutenberg began the production of the renowned Gutenberg Bible (also known as the 42-line Bible to distinguish it from a later Gutenberg production, the 36-line Bible). This edition was a rather small run by later standards – perhaps some 180 copies, of which about 48 survive – produced over the course of about a year. But it represented an unimaginable advance in information technology. From the very dawn of the written word until Gutenberg’s invention of the movable-type printing press, all written words were produced laboriously by hand. The cliche of scribes laboring for years to produce a single volume is quite authentic: a well-staffed scriptorium might produce one copy of The Bible in a year’s time. Gutenberg started a revolution!

Gutenberg’s Bible was not the first printed item his workshop produced, but it was the first full book. Very quickly the news of Gutenberg’s invention spread across Europe, and within twenty years every major urban center had printing presses operating. The impact of this revolution is literally incalculable. Books suddenly became affordable to private citizens, where previously only large corporate entities such as monasteries and universities could afford them. The situation is rather analogous to the more rapid evolution of computers: originally only big businesses, governments, and universities could afford the immense investment in early computers. When microcomputers hit the market, they were still very much a luxury item. (Today, of course, it seems that everyone has a computer!)

Within a generation a profound cultural change swept across Europe. The printed word meant that consistent, accurate copies of information could be widely disseminated, and the effect was to vastly increase the rate of learning and development within Europe. By the year 1500, virtually every piece of literature from classical antiquity had been produced in a printed book. More profoundly, new works were being printed. In the days of hand-written texts, there was not a notable demand for newly written books; teachers simply passed along their knowledge to their students through the spoken word, and scientists communicated personally through letters. But with printing, the demand for recognizable expertise rose, and with it the notion of the value of authorship. The printing press is responsible in a real and direct way for the Renaissance and the Scientific Revolution which followed. It also contributed enormously to the Protestant Reformation of the 16th century. (Ironically, among the very first printed texts produced by Gutenberg’s press were Papal Indulgences, which Martin Luther specifically condemned in his famous 95 theses. Luther’s 95 theses were in turn widely distributed in printed broadsheets, forerunners of modern newspapers.)

Gutenberg certainly did not invent the press itself, for screw presses to produce wine and olive oil had been used since the days of antiquity. That basic design required little modification to serve as a printing press. It is thought that some sort of printing press had been employed to produce copperplate or woodblock prints of single-sheet documents well before Gutenberg’s time. Nor was Gutenberg the first individual in history to conceive of movable type, an honor which, as seems so often to be the case, apparently belongs to the Chinese: the inventor Bi Sheng created clay type stamps some four centuries before Gutenberg. But Gutenberg’s uniquely valuable contribution was to combine the press with practical movable type. His knowledge of metallurgy led him to create cast metal type, using lead, tin, and antimony, an alloy which remained the essential “typemetal” until hand-set type gave way to machine composition and, later, offset printing. It is not known whether Gutenberg invented the punch-cut method of producing type, as was long supposed, but it is certain that he used metal type. He also developed more suitable oil-based inks, which likewise became a long-standing standard in the industry.

Gutenberg did not profit from his invention, however: his financial backers sued and won control of his workshop. (I am always fascinated by how much of the documentation from medieval days comes from court cases!) In later life he was awarded a small pension by the Archbishop of Mainz. It was not until many years after his death that the impact of his invention was truly recognized, and his gravesite was forgotten and lost. Today there are many monuments and statues commemorating Gutenberg and his contribution to the advance of human society.

The most enduring monument to Gutenberg must, of course, be the printed book. It is fitting that Gutenberg’s first great project was to print The Bible. A very widely quoted fact is that The Bible is the best-seller of all time, which it is, hands down. Less well known is that it remains the best-seller year after year even today. Last year about 25 million Bibles were sold in the United States, in a huge variety of different editions and translations. (By comparison, the latest Harry Potter book has sold some 10 million copies.) Annual expenditure on Bibles in the United States is estimated at more than half a billion dollars. Spreading The Good News is good business! Forty-seven percent of Americans claim to read The Bible daily, and ninety-one percent of U.S. households have at least one copy on hand, with the astonishing average of four copies per household! Gutenberg would be pleased with the trend he started, I would guess. But as it was for Gutenberg, Bible publishing remains an often unprofitable enterprise, for production is expensive and retail margins are low. Gutenberg might take some small satisfaction in that fact as well.

Jamie Rawson
Flower Mound, Texas

You don’t have to burn books to destroy a culture. Just get people to stop reading them. — Ray Bradbury

Happy Birthday, George!

It was on this day in 1732 that the first President and Chief Executive of The United States of America, George Washington, was born. Often referred to by the ancient Roman honorific title “the Father of his Country,” George Washington actually deserves the honor. As “Light Horse” Harry Lee wrote at Washington’s death, he was “First in war, first in peace, first in the hearts of his countrymen.”

In the late 1960s and early 1970s, it became the fashion to call into question any received wisdom about historical figures in American History, and the more revered a figure, the more thoroughly it seemed that figure had to be torn down. Now, I am all for a healthy skepticism when it comes to history – it is, after all, as Napoleon observed, usually written by the winners – yet I do balk when the known facts are ignored. My 11th grade U.S. History teacher asserted that Washington was a bad general, a poor politician, and a wanna-be king. These “facts” were, of course, at best incorrect, and perhaps even outright lies. (Tom Maeder was possibly trying to spur people to read on their own and form their own opinions; equally likely he just delighted in shattering icons, and didn’t let a few facts get in his way.)

As a general, Washington was no Napoleon nor Alexander; he did not have to be. He won a few pivotal battles, and it is true he lost about as many. Yet war is not a tennis tournament: it really does matter which battles you win. You can lose many minor frays if you win the important ones. Then, too, Washington’s greatest insight was a very modern understanding of the realities of warfare: war is damnably expensive, and Washington shrewdly calculated that his greatest chance for success was to keep several British armies in the field for as long as possible while denying them a chance at a conclusive victory. Knowing that the fledgling United States could neither outgun nor outspend the British Empire, he nevertheless knew he could make the war more expensive than the Empire could tolerate. Washington had studied his Roman history quite thoroughly, and modeled his strategy upon the successful war of attrition waged against Hannibal by Quintus Fabius Maximus, called “Cunctator” (the Delayer). And, as we know now, Washington’s “Fabian Strategy” worked.

As a politician, George Washington ably presided over the Constitutional Convention, managing to guide a contentious and divided gathering of representatives of the several states to a successful conclusion of their mission. No other citizen of the United States commanded the necessary respect and personal affection to undertake so politically risky a post; without his leadership and the legitimacy it conferred upon the proceedings, the entire enterprise might have died aborning.

As the first President under the Constitution, Washington established many precedents which – far from being Royal in character – showed that Washington was sincerely committed to the republic and the democratic, republican ideals upon which the nation had been founded. It was Washington, for example, who decided that The President of the United States of America would be addressed simply as “Mr. President.” Many have noted that Washington was frequently addressed as “His Excellency” during his term of office, but the address was not preferred by Washington; it was a holdover from officials and functionaries who had grown up in the British Empire, and who found old habits of titular deference hard to abandon.

Washington also established the profoundly important precedent that a president should serve no more than two terms. It is hardly a kingly ambition to voluntarily step down from power. And it is quite notable that in his will, Washington identified himself plainly as “Citizen of the United States.” Not “Former President,” not “General of the Army,” but “Citizen.” It is hard to find regal pretension in that.

In his own time, Washington was well respected in Europe, and he enjoyed an excellent reputation in Great Britain after the Revolution. In France, Napoleon himself proclaimed a period of national mourning when the news of Washington’s death reached Paris.

Much, much more could be said of Washington – and of course hundreds of volumes have been written – but it is enough to say that George Washington truly was a towering figure in American history, and remains truly deserving of our genuine respect and gratitude (he’d not have wanted our awe).

Oh, and one other thing: I started by noting that many of the old facts about George Washington had been called into question in the iconoclastic 1960s and 1970s, and I noted that the facts genuinely were correct. BUT, I am quite wrong when I say that George Washington was born on this day in 1732, for he most definitely was NOT. Despite what we used to celebrate every year before it was subsumed into the rather bland and uninspiring “President’s Day,” February 22 is not George Washington’s Birthday. Old George was born on the 11th of February 1731! True.

George Washington was actually born on February 11, 1731 as reckoned by the Old Style, Julian Calendar, which was still in use in Great Britain and its colonies at the time. The old Julian Calendar, however, was flawed in its imposition of one leap-day every four years, making the average Julian year exactly 365.25 days. The physical solar year is not quite so neatly precise, being about 365.2422 days long; the difference seems small, but trivial values add up over time. The Julian scheme made the calendar gain somewhat on the physical solar year. By the 1500s, the calendar was off by a full ten days, and Pope Gregory XIII decreed in early 1582 that a two-fold correction should be made. The short-term fix was to delete ten days from the year 1582: the calendar that year jumped from October 4 directly to October 15. The other correction was to eliminate three leap-years from every 400 years, so that only century years evenly divisible by 400 would be leap years (thus 1600 and 2000 were leap years, but 1700, 1800, and 1900 were not). In this scheme, the average year works out to be 365.2425 days, an error that adds up to only about one day in some 3,300 years. (The “leap seconds” that timekeepers occasionally add are a separate matter: they compensate for the very gradual slowing of the Earth’s rotation, not for any flaw in the Gregorian calendar itself.)
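The two leap-year rules are simple enough to express in a few lines of code. Here is a minimal sketch in Python; the function name and structure are my own illustration, not taken from any particular library:

```python
def is_leap_year(year: int, calendar: str = "gregorian") -> bool:
    """Leap-year test under the Julian and Gregorian rules."""
    if calendar == "julian":
        # Julian rule: every fourth year, without exception.
        return year % 4 == 0
    # Gregorian rule: century years are leap years only when
    # they are evenly divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# The century years mentioned above:
for y in (1600, 1700, 1800, 1900, 2000):
    print(y, is_leap_year(y))
```

Averaged over the full 400-year cycle, the Gregorian rule gives (400 × 365 + 97) / 400 = 365.2425 days per year, which is where the figure above comes from.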

The adoption of this new calendrical scheme was uneven, but by 1587 the Catholic countries of Europe had put it in place. On continental Europe, other countries soon fell in line, the utility of the new calendar and the desirability of a uniform dating system being obvious. But insular England stubbornly refused to implement the new, “Popish” calendar, and held on to the old, inaccurate Julian Calendar for another 170 years. The English also reckoned the beginning of the legal year as March 25 rather than January 1, so the legal year 1731 ran from late March 1731 to late March 1732, thereby including George Washington’s February birth.

At length, bowing to the demands of commercial trading interests, Parliament decided to adopt the Gregorian Calendar. Thus the month of September 1752 lost eleven days – the number required for the correction at that point.

George Washington was a 20-year-old surveyor when this change took place. Being a punctilious gentleman, George was uncomfortable with celebrating his birthday 11 days early in 1753, so he himself decided that from 1753 forward he would celebrate his birthday on February 22nd. And so it is that George Washington was born on February 11, 1731, but he and we have come to celebrate February 22 as his birthday, and we now reckon his natal year as 1732.
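The arithmetic of the conversion can be sketched with Python’s standard datetime module, which uses the proleptic Gregorian calendar. The 11-day constant below is the Julian-to-Gregorian offset that applied throughout the 18th century; the variable names are my own, for illustration:

```python
from datetime import date, timedelta

# Washington's birth as recorded Old Style (Julian), with the year
# renumbered New Style (legal year beginning 1 January): 11 Feb 1732.
old_style = date(1732, 2, 11)

# From 1 March 1700 to 28 February 1800, the Julian calendar ran
# exactly 11 days behind the Gregorian.
offset = timedelta(days=11)

new_style = old_style + offset
print(new_style.strftime("%d %B %Y"))  # 22 February 1732
```

Adding the offset lands squarely on February 22, 1732 – exactly the date Washington himself settled on.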

It’s all so simple, eh? 😉

Jamie Rawson
Flower Mound, Texas

Labor to keep alive in your breast that little spark
of celestial fire, called conscience.

— Washington

Executive Order 9066

In the aftermath of Imperial Japan’s surprise attack upon the U.S. military at Pearl Harbor, fear and uncertainty gripped the United States. Americans of Japanese heritage were suspected of divided loyalties or worse. On February 19, 1942, President Franklin Roosevelt issued Executive Order 9066 enabling the Secretary of War to declare “Military Zones” and ultimately authorizing the internment of more than 100,000 Japanese-Americans in “War Relocation Centers.”

The internment of Japanese Americans potentially applied to anyone who had a Japanese great-grandparent or closer heritage, so that citizens of mixed Chinese-Japanese heritage were affected; Koreans were affected as well, since Japan had governed Korea for more than thirty years by the time of Pearl Harbor. (It is notable that almost none of the many Chinese-Americans in the US during the war years were affected by this “relocation” effort; China was a U.S. ally in the war with Japan.) Was this internment order inspired by racism and bigotry? In large part, no doubt. Simple greed was a major factor as well, profiteering knowing no limits of race. Many of the Japanese-Americans who were subject to internment had to sell their homes and possessions on very short notice and at greatly undervalued prices. As noted, the action was not applied to people simply because they were from East Asia: the Japanese were specifically targeted.

Of the ten internment camps that were established in 1942 and 1943 in support of the relocation effort, Manzanar in California’s Owens Valley is the best known. This is due to three factors: it was the first of the “War Relocation Centers” (WRCs) to be opened, in March of 1942; it was the largest; and it has been preserved as a National Historic Site.

One of the aspects of these camps that is worth calling to mind is that while they were truly “Concentration Camps” in the original sense of the word, they were not in the same category as the concentration camps in Nazi-occupied Europe which gave that term such an especially evil and murderous reputation. In fact, the American Jewish Committee at one time strenuously objected to the use of the term “Concentration Camp” to refer to the WRCs, as the WRCs were demonstrably far different from Nazi camps. The WRCs featured stores, schools, hospitals, recreation facilities of sorts, and libraries. These were not comfortable facilities, and no one would wish to be imprisoned even in a palace, but they were not intentionally physically brutalizing.

These camps were Spartan, to say the least. The accommodations were meager and only just barely adequate, being built of boards and tarpaper, with minimal heating and no cooling. At a place such as Manzanar, in the high desert, the cold could be fierce and the heat intense. The facilities, though, were basically the same as the barracks used by members of the armed forces at the time, and so were not intentionally harsh. There were no “permanent” buildings at these camps. Some structures were placed upon concrete slabs, and a few had masonry foundations, but none were of brick. The guard towers were wood-frame construction in all the camps. Costs were kept low.

I had the moving experience of meeting a former inmate of Manzanar a few years ago in California when I attended a memorial service for Ruth Colburn, the mother of a college classmate. Ruth had served as head librarian at Manzanar from 1942 through 1945. In that capacity, she made many deep and lasting friendships, and she earned the respect and affection of many of the camp’s inmates, as reflected in the fact that one of the “alumni” of Manzanar, a gentleman in his early 80s, remembered that she had been a gracious and kind presence in a difficult and stressful circumstance. He had been a teenager at Manzanar, and had evidently taken full advantage of the library that was provided.

Growing up in California, I had the chance to know a few people of Japanese heritage who had been sent to these “War Relocation Centers.” Frank Kamada, a nurseryman, had been interned at Camp Jerome in Arkansas. Frank recalled such things as being “deloused” by being showered in Malathion insecticide, and having to play baseball with a ball of string wrapped in medical adhesive tape copped from first-aid kits. Frank also remembered being allowed outside the camp to work for and with local farmers, teaching the Arkansan farmers techniques of soil conservation and improvement which originated in Japan. Frank spoke of this time without apparent bitterness or regret. “We were at war,” he once observed. Note that use of “we.” Frank was Nisei, a second-generation Japanese American. He identified as American, as did his parents. But they were all interned.

When my folks owned a flower shop in the early 1970s, we did a great deal of business with vendors in the “Japanese Market.” Los Angeles’ wholesale flower market – in those days second only to Amsterdam’s – occupied two city blocks on either side of Wall Street in downtown LA. On the south side was the American Exchange, on the north was the Japanese Market. Many of the folks who owned concessions or worked in the market were Nisei or Sansei (third generation) who had been in the camps or had family who had been interned. Sada Miyahara (who annually on 17 March donned vibrant green, wore a green plaid tam on his head, and proclaimed, “I’m Irish: Me O’Hara!”) had been at Tule Lake WRC. He recalled his time there frankly, but, again, without any grudge.

I was then and remain to this day struck by how little resentment or bitterness was expressed by these people to whom so great an injustice had been done. No doubt some small number of those imprisoned were loyal to the Emperor, but even these could hardly have constituted a meaningful threat to the U.S. But one must bear in mind that the shock of war was profound and dramatic, and that the suddenness of the surprise attack that brought us into it alarmed people in a way that even the 9/11 terrorist attacks did not. This does not excuse the internments, but it does help to explain them.

I would not aim to lessen the grim and oppressive aspect of these camps. Their very existence is a proof enough of oppression and discrimination. But as my brother observed, “I was appalled then (and am now) to think that US citizens could be treated that way but I was also struck by the complete lack of rancor evident when these folks told about their wartime experiences. I doubt I would be as calm in retelling such a tale. All the same, it seems, in a small way, more humane and civilized to know that those camps were provided with libraries and trained librarians. Somebody was thinking. How nice to know that Ruth Colburn was one of the people who made our internment camps something far less odious than what the Germans and Japanese had to offer the world at the time.”

Executive Order 9066 was officially rescinded by President Gerald Ford on 19 February 1976. Under President Jimmy Carter, a commission was established to fully investigate and evaluate the motive and the impact of Executive Order 9066. Eventually, reparation payments were made to living internees and the Federal Government formally apologized for the internment. While money and conciliatory words cannot truly redress an old wrong, remembering and reflecting may help us to avoid repeating the episode in some future time of fright and trepidation. Executive Order 9066 remains a part of our American past that must not be forgotten.

Jamie Rawson
Flower Mound, Texas

To do injustice is more disgraceful than to suffer it. — Plato

Sook Ching: Defeat And Horror

On 18 February 1942, in the wake of the surrender of the British military and the subsequent collapse of British authority, the victorious Imperial Japanese forces began the imposition of Sook Ching, a “cleansing purge,” upon the Chinese population of Singapore.

Because the Singaporean Chinese community had given strong financial support to Chiang Kai-shek and the Kuomintang in the war against Japan (there is a memorial to China’s first president, Sun Yat-sen, in Singapore), the Japanese occupation government determined that this support had to be eliminated forcefully. Accordingly, they began a screening process, rounding up any and all suspected of anti-Japanese sympathies. The number of people who were caught up in this purge is much disputed. The Japanese acknowledge that perhaps 6,000 people were killed; the Chinese community in Singapore believes it was 100,000. Most sources state a figure between 25,000 and 50,000. Whatever the number, the purge devastated the Singaporean Chinese community, and it resulted in many deaths among other ethnic groups as well, for anyone who had served in the colonial administration was suspect.

In addition, during the occupation, food was rationed in starvation portions. An adult could only purchase about 8 pounds of rice per month, and that at exorbitant prices. An unknown number of people died of malnutrition or starvation during the occupation. It is regarded as the darkest period in Singapore’s history, and many in Singapore consider that the movement for independence was born of the lessons learned during that time.

When the Japanese surrendered in 1945, Singapore was a shell of its former self, and its economy seemed permanently ruined. It would be years before it returned to pre-war levels. But in the 48 years since Singapore’s independence, it has been one of “The Asian Tigers,” with a vigorous, fast-growing, highly modernized economy and one of the highest standards of living in Asia. But the war years are not forgotten: there is a striking monument to the victims of the occupation in downtown Singapore, and annual memorials are held on February 18 each year.

Monument Commemorating The Japanese Occupation of Singapore


Jamie Rawson
Flower Mound, Texas

To The Shores Of Tripoli!

Another February day, another American warship, another foreign harbor, and another explosion and sinking; what is it about February? It was on this day, 16 February 1804, that the 36-gun frigate USS Philadelphia was destroyed as she lay at anchor in the harbor of Tripoli, on the Barbary Coast in what is now Libya.

Unlike the later destruction of the battleship Maine, this was an undoubted case of sabotage: Philadelphia was boarded by a specially trained force whose mission was to completely destroy the vessel. One unusual aspect of these saboteurs is that they were volunteer American sailors led by American naval icon Stephen Decatur, America’s first great military hero of the post-revolutionary era.

Philadelphia was destroyed to prevent the Bashaw (or Pasha) of Tripoli from using her as a part of his pirate fleet. The warship had been captured after running aground in Tripoli harbor the previous October as she patrolled the North African waters, thereby creating America’s first foreign hostage crisis in the Islamic world. The crew of Philadelphia had been enslaved after their surrender, and the Bashaw demanded an enormous ransom for their release.

The potentates of the Barbary Coast had for centuries derived an enormous income from what was basically an immense extortion racket. Pirate fleets attacked merchant ships of any nation which would not pay “tribute” to the Barbary gangsters. The powers of Europe had generally acceded to this scheme for years, because it was calculated that the cost of tribute was cheaper than the expense of a military solution to the problem. The United States, perhaps with the vigor or brashness of youth, rallied around the sentiment “Millions for defense but not one cent for tribute!” and sent armed vessels to the Barbary Coast to protect American merchantmen.

On patrol in the Fall of 1803 under the command of Captain William Bainbridge – who has the unenviable distinction of being the first U.S. Naval officer to surrender a ship to the enemy, during the undeclared Quasi-War with revolutionary France in 1798 – Philadelphia ran aground on an uncharted sandbar outside of Tripoli harbor. The ship’s crew labored for hours to free the stranded vessel, finally jettisoning almost every object which could be moved in hopes of lightening the ship sufficiently to float it off the bar. Unwisely, the ship’s cannon were finally tossed as well. During this activity, boats from Tripoli harbor began to sail around Philadelphia, and the men aboard them began firing muskets and small guns at the helpless frigate.

By great good fortune or because of spectacular inaccuracy on the part of the assailants, no one aboard Philadelphia was wounded. Bainbridge, however, finally decided that the ship was indefensible, and signaled his willingness to parley for surrender. Though the Bashaw promised to treat his captives according to the military conventions of the day, the men were nevertheless immediately enslaved. The story of the crew’s plight, and of the incredible scheme which was launched to overthrow the Bashaw, is wonderfully told in Richard Zacks’ 2005 book, The Pirate Coast. The truly painful irony of the ship’s surrender is that had the captain and crew borne up another two hours, the rising tide would have floated the ship free!

As negotiations attempting to secure the release of the enslaved sailors dragged on, America’s military leaders feared that the Bashaw might sell the frigate to one of the European naval powers. The design and features of Philadelphia represented the best in American naval technology, and there would be serious consequences if she fell into hostile hands. Accordingly, President Thomas Jefferson authorized an all-volunteer force to attempt the destruction of the ship. In the aftermath of Decatur’s successful exploit, British admiral Horatio Nelson, a man who knew a thing or two about valor and duty, declared the destruction of Philadelphia to be “the most bold and daring act of the age.”

America had begun to show Europe’s powers that it was possible to stand up to the extortion of the Barbary states’ corrupt rulers. At the same time, America began a long history of strained relations with Islamic states.

Jamie Rawson
Flower Mound, Texas

It is not because things are difficult that we do not dare;
it is because we do not dare that things are difficult.

— Seneca


The Pirate Coast: Thomas Jefferson, The First Marines, and the Secret Mission of 1805, Richard Zacks; Hyperion, 2005; ISBN 1401300030

Zacks tells the story of the capture of the United States Navy’s Philadelphia in Tripoli and the subsequent enslavement of its crew and officers. The ensuing diplomatic crisis between the U.S. and an Islamic state could have come from today’s headlines. In the first chapter of the book, Zacks recounts the raids of Islamic slavers on Europe’s Mediterranean coasts, and he later describes the lot of the American sailors who were enslaved in Tripoli. This is an excellent account of a little-known episode in American history.

Remember The Maine!

It was on Tuesday, 15 February 1898, that the United States battleship Maine violently exploded in Havana Harbor with a loss of some 260 American sailors. In the aftermath of the tragedy, The United States went to war with Spain and gained for itself a far-flung overseas empire. The war also firmly established The United States as a global power.

In early 1898, Spain still ruled Cuba and Puerto Rico, her last two colonial holdings in the New World, and Spain’s rule was brutally oppressive. Cuba had experienced several rebellions in the last half of the 19th Century, and Spain struggled to suppress these revolts with harsher and more violent measures. In the United States there was a great deal of popular sentiment in support of the Cuban rebels, and there was also a strong business interest heavily invested in Cuba. As Spain tried to end the latest rebellion, the United States determined to send a major warship to monitor developments in the rebellion and to protect American citizens and American interests.

The battleship Maine was anchored in Havana harbor with the express permission of the Spanish government. The Spanish government was privately outraged at the U.S. request, but realized that it would be necessary to comply in order to keep the U.S. from directly intervening in the revolt.

The precise causes of the explosion and sinking of the battleship are still hotly debated. The initial investigation launched by the U.S. Navy immediately after the event concluded that a mine or shell had struck the ship and ignited the far larger and more devastating secondary explosion in the powder magazine. The Navy’s report never proposed to identify the culprits responsible for the mine or shell, but it was widely believed in the U.S. that the Spanish government was behind the attack. Many ardent nationalists in Spain had proclaimed Maine’s visit an unbearable insult to Spanish dignity, and a nationalist newspaper had even suggested that Spain should blow up the offending battleship.

The Navy’s initial report was based on the findings of two deep-sea divers whose primary mission was the recovery of bodies. The water of Havana harbor was murky and difficult to work in, and much of the information reported by the divers was based upon feeling their way around the wreckage. In 1911 the Navy conducted a thorough and detailed investigation: a cofferdam was constructed around the wreck and the water pumped out so that the ruined ship could be properly examined. This second investigation concluded that a mine had been detonated directly underneath the powder magazine. At the conclusion of this investigation, the wreckage, which had presented a navigational hazard, was floated out of Havana harbor and sunk again about four miles offshore.

In 1976, Admiral Hyman Rickover – famed as the “Father of the Nuclear Navy” – headed yet another formal investigation into the cause of the sinking. This time the actual physical evidence was re-examined and subjected to modern technological assessments. In addition, computer models were employed to study each of several proposed possible scenarios. Rickover’s report concluded that an internal fire in a forward coal bunker had ignited the magazine, and that no external explosive device had been involved.

Most recently, in 2001, divers visited the actual wreckage once more. Based upon current photographs of the wreckage and some metallurgical analysis, the most recent conclusion is that there *was* indeed an external explosion, most likely caused by a small mine.

In the end, none of these investigations would have mattered at all. The popular press in America of 1898 – most notably William Randolph Hearst’s immense newspaper empire – whipped up public resentment against Spain. Many powerful business interests looked forward to freeing Cuba from Spain and bringing it into the United States one way or another. U.S. designs upon Cuba were first expressed by Alexander Hamilton during the Revolutionary War, and were commonplace topics throughout the whole 19th Century. The administration of Franklin Pierce was greatly embarrassed by the 1854 “Ostend Manifesto,” written by key American diplomats in Europe, which asserted that the U.S. had a right to buy Cuba from Spain, or to go to war if Spain would not sell. The fallout from that episode effectively prevented the U.S. from directly involving itself in Cuban political affairs for almost a generation. But by the 1890s, the internal upheavals within Cuba and Spain, and a growing sense of the need for American expansion, revived the notion of making Cuba a part of the United States. Thus a pretext for the use of armed force was widely welcomed across America. By April of 1898, Congress made impossible demands of Spain and authorized President McKinley to send troops to Cuba. Finally, on 25 April Congress passed a declaration of war with Spain retroactive to 21 April, though only after the “Teller Amendment” had been passed, which guaranteed that the war would free Cuba and not make it a part of the U.S. (Other Spanish possessions in the Caribbean and the Pacific – Puerto Rico, The Philippines, and Guam – were not covered by such a clause; Guam and Puerto Rico remain U.S. territories.)

In hindsight, the war was obviously an easy win for the United States. Admiral Dewey completely destroyed Spain’s Pacific Fleet in Manila Bay without the loss of a single American life (one elderly engineer may have died of a heart attack, but there were no battle casualties). At Santiago, Cuba, Spain’s Atlantic Fleet was similarly devastated. American troops formed up in Tampa, Florida, and made landing in Cuba near Guantanamo Bay. Future president Theodore Roosevelt led a wild charge of the “Rough Riders” up San Juan Hill, and Spanish resistance was quickly overcome.

But in April of 1898, an easy American victory was by no means certain. The night before Dewey’s fleet sailed from Hong Kong Harbor on 27 April, the Royal Navy hosted a farewell party for the American officers. One British captain noted in his log that he expected it was the last time he would see any of the Americans alive, as they had the impossible task of sailing into Manila Bay under the guns of “The Gibraltar of The East,” Corregidor. Spain’s navy was not quite so modern as America’s, but it could be expected to fight fiercely. In the end, the Spanish Navy underestimated the capabilities of the U.S. Navy. Dewey evaded Corregidor by sailing into the bay by night, relying on expert navigation and 200-year-old charts; he was therefore able to catch Spain’s Pacific Fleet entirely unawares.

When the fighting stopped in August, Spain had no navy to speak of, and no way to control her widespread empire. Peace terms forced Spain to cede The Philippines, Guam, and Puerto Rico to The United States. The war also forced Europe’s Great Powers to re-evaluate the United States and to seriously consider its role in global power politics. A new Great Power had joined the club.

The lasting legacies of this war are many – possibly even including the lengthy dictatorship of Fidel Castro – but one of my favorites is just a wee footnote: the first American troops landed on Cuba at a small town between Guantanamo Bay and Santiago; in the wake of the successful landing, a cocktail of rum and lime juice became quite popular among military officers and later with the American public. The cocktail was named in honor of the landing. That small town’s name? Daiquiri!

Jamie Rawson
Flower Mound, Texas

“Don’t cheer, boys! Those poor devils are dying.”

– Order of Captain John Philip to U.S. Sailors
at the Battle of Santiago de Cuba

Happy Valentine’s Day!

For this was on Saint Valentine’s Day
When every bird cometh there to choose his mate.

— Geoffrey Chaucer,
The Parliament of Fowls, 1382

Two saints are connected with February 14. One was Bishop of Terni (d. 269); the other was a priest or physician in Rome who was martyred during the reign of the Roman Emperor Marcus Aurelius (ca. 167).

Neither saint seems responsible for the traditional rites and rituals of Valentine’s Day. Various “legends,” such as that of an imprisoned Valentine sending his true love a note (signed: “Your Valentine”) before his martyrdom, are evidently rather recent fabrications, no older than the late 18th century.

Medieval calendrists calculated that 14 February corresponded to the date which the Roman naturalist Pliny claimed that birds went forth in search of a mate. Seemingly, birds pairing up occasioned similar activities among people. These Medieval romanticists celebrating a day of love were the same people, interestingly enough, who imagined that it was nobler to suffer years of chaste, unrequited love, than any amount of actual carnal connectings. But on the other hand, stand the ribald, bawdy songs of the Medieval Carmina Burana. One never knows what to make of the medieval folk …

The traditional Roman fertility feast of Lupercalia was celebrated in mid-February. Lupercalia featured games and races, and included throngs of athletic youths running naked through the streets of Rome, lashing at the objects of their attention with thongs. For some reason, this was thought to promote fertility. Lupercalia was also a time for lovers to announce their engagement. The possible connection with Lupercalia seems to me to be the best reason that this day is so long associated with Romantic Love, though many authorities question this. There is really no documentary evidence available, and Chaucer’s assertion quoted above appears to be the first written association of the Day with couplings of any sort.

By the high Medieval period, written expressions of love were an integral part of Saint Valentine’s Day, and the exchange of gifts was well established. The earliest surviving example of a “Valentine” is a letter dated 1416 written by the Duc d’Orléans to his wife when he was imprisoned in the Tower of London following the Battle of Agincourt.

By the early 1800s, note cards especially printed with love poems and pretty images became popular, helping to give rise to the Greeting Card Industry of today. By the mid-Nineteenth Century, sarcastic and even insulting cards had become a fad.

Flowers and sweets were certainly associated with Lupercalia, and there is plausible reason to conclude that these traditions of our modern Valentine’s Day celebration are direct descendants of the Roman customs.

The iconic image of the cockled (“heart shaped”) heart seems to stem from antiquity; some trace it to Aristotle’s anatomical descriptions, but the design certainly is far older than Hellenic Greece. The “heart” figure barely resembles the actual human organ; though the two upper lobes have a plausible correspondence to the right and left atria, even that seems a rather long stretch. It is really rather tough to explain the imagery of the “heart.” Anthropologist Desmond Morris even speculated that it represents a simplified rendering of the female form posed in sexual receptivity. In any case, it is an ancient figure.

In modern times, Saint Valentine’s Day is a commercial bonanza: it is typically the highest single-day sales volume for florists, and it is the number two greeting card holiday, second only to Christmas. Candy sales have fallen off somewhat in recent years, perhaps due to a trend toward healthier lifestyles, yet they still remain huge.

No matter its origins, Valentine’s Day remains an enduring favorite. Several weeks after the Christmas and New Year’s holidays, yet still in the grip of cold, grey, and often depressing winter, people are undoubtedly ready for some good excuse to have a celebration, and why not celebrate Love amid the dreary days?

Happy Valentine’s Day!

Jamie Rawson
Flower Mound, Texas

Love is not a crime;
if it were a crime to love,
God would not have bound even the divine with love.

— Carmina Burana

Witch Hunts Of Yesteryear …

Two closely related events on two close-by days:

It was on 8 February 1692, in the small settlement of Salem Village (now Danvers), Massachusetts, that Abigail Williams and Betty Parris were declared by a competent physician to be under the spell of witchcraft, thereby launching the most famous – really infamous – “Witch Hunt” in history. In 1949, Marion Starkey published her powerful and influential work The Devil In Massachusetts: A Modern Enquiry Into The Salem Witch Trials. Though the work has been criticized as offering no new research findings about the historical events, Starkey’s book is notable in applying 20th Century psychological interpretation to the events of 1692.

The Devil In Massachusetts was a great influence on Arthur Miller’s 1953 masterpiece, The Crucible, which is usually interpreted as a thinly-veiled exposé of Senator Joseph McCarthy’s then-current “Witch Hunt” for Communists in the United States government. I find it most interesting that Starkey, in the introduction to “The Devil,” makes a strong case for studying the Witch Trials as a means of helping to forfend against a modern-day repetition. Starkey notes that the upheavals and wars of the 1930s and 1940s had produced many a Witch Hunt, and she expresses her hope that the world of 1949 has grown more mature, and somewhat wiser.

How terribly ironic it is, then, that less than a year after the publication of The Devil In Massachusetts, on 9 February 1950, Senator Joseph “Tailgunner Joe” McCarthy made his first public assertion that the U.S. State Department was completely infiltrated by known Communists. During the next four years, McCarthy would repeat this charge over and over, changing the numbers, and eventually changing his focus from the State Department to the Army, yet managing to retain a high profile in the public eye despite his inability to identify a single “Fellow Traveler” in his investigations.

Speaking before the Wheeling Women’s Republican Club in West Virginia on 9 February 1950, McCarthy made the shocking claim that the State Department was then employing “205 known Communists.” At first there was little media attention paid to this speech; the only first-hand account of it appeared in the local Wheeling paper. But McCarthy was asked about his claim a few days later back in Washington, D.C., and he asserted that there were “at least 81 known Communists” in the State Department. In the next few weeks, the stated number fluctuated between a low of 10 and a high of 205.

It seems inexplicable in light of what we know today, but the mainstream media did not call McCarthy to account for his wildly varied numbers; the media did not then even request a jot of proof to support the claims. The unsupported and unsubstantiated assertions were repeated as if factual, and many came to believe them. Indeed, the question became not, “Are there Communists in the State Department?” but “How many Communists are there in the State Department?” As my older sister told me many years ago, this was precisely what McCarthy had intended. The “Big Lie,” as some political savant once observed, is more easily sold than the small one. It would seem the absurd and ever-morphing lie is even more readily saleable.

Hundreds of careers were destroyed in the course of nearly four years of fruitless investigations. Hundreds of lives were upended and shattered by the baseless and invidious claims made by McCarthy and his staff. His key lieutenant Roy Cohn made an especial point of exposing homosexuals – generally believed at the time to be particular security risks whatever their politics – as a part of the investigations. Ironically, after Cohn’s death from AIDS in 1986, it became public knowledge that he himself was homosexual.

So fervid was the McCarthy team in its unsubstantiated attempts to locate subversives within the U.S. government, and so pointless was their probing, that the term “McCarthyism” has become synonymous with “Witch Hunt” to mean a frenzied search for a non-existent threat. Nevertheless, their tactics had a chilling effect on political debate in America for years.

McCarthy’s political downfall came in 1954 after ABC television, alone among U.S. networks, decided to televise McCarthy’s hearings into the affairs of the United States Army. Though the audience share of the broadcast of the hearings was rather small, the impact upon those who watched McCarthy and his hatchetmen at work was great. A watershed moment came when the Army’s attorney, Joseph Welch, turned to McCarthy and demanded: “Have you no sense of decency, sir, at long last? Have you left no sense of decency?” McCarthy was revealed to be a bullying grandstander without substance to his claims. The 2005 film Good Night, and Good Luck focuses on these events.

Later in 1954, the Senate voted to condemn McCarthy for his vicious tactics. President and Mrs. Eisenhower forbade him to attend White House functions. It is hard today to appreciate the impact of this seemingly merely social ban, but it signaled the end of McCarthy’s influence. McCarthy’s political heyday had passed decisively, though the ruined lives and careers he left behind took decades to rebuild, and many never could recover.

Yet, whatever the current assessment of McCarthy’s tactics and impact may be, it must be acknowledged that he acted from a very real and pure motive: McCarthy really, really, REALLY wanted to be politically powerful, and to have his views dominate the political process. From such a motive, it can be seen, as is always the case in politics, that any excess is excusable. Or at least explicable. And it is ever thus. It is ever thus …

Jamie Rawson
Flower Mound, Texas

I cannot and will not cut my conscience to fit this year’s fashions.

— Lillian Hellman,
to the House Un-American Activities Committee, 1952

Groundhog Day 2012

The much-celebrated and ballyhooed Punxsutawney Phil saw his shadow this morning, thereby foretelling six more weeks of hard winter weather. Well, here in North Texas the winter has been astonishingly mild, with days averaging in the mid-seventies. But for those who set store by the groundhog-as-weatherman, Phil’s shadow is most unwelcome.

So why is today identified as Groundhog Day? What’s that all about anyway? And what about the fact that February 2nd was known as Candlemas in olden times? As with so many apparently simple things, the explanation is a bit involved and convoluted.

An old English rhyme states:

If Candlemas be fair and bright,
Winter has another flight.
If Candlemas bring clouds and rain,
Winter will not come again.

In the Liturgical Calendar of the Church, the second of February was designated “the Feast of the Presentation of the Lord.” The Mosaic Laws of the Old Testament decreed that a woman was “unclean” for seven days following the birth of a male child, and that she must stay away from the Temple for a further 33 days, making the period for ritual cleansing a total of forty days. (The period was twice that long if the baby was a female! Gender equality was decidedly NOT an Old Testament concept.) At the end of this term, the mother would return to the Temple to make a sacrifice concluding her purification, and to present her child to the Temple community.

From the very earliest days of the Christian Church, the feast of The Presentation (or Purification) was an important event on the Liturgical Calendar. By the end of the Fourth Century A.D., the date of Christ’s birth had been set as December 25; calculating forty days from that date produced February second as the date for The Presentation. So far, so good?

Because this Feast celebrated the entry of The Christ – The Light of the World – into the Temple and the greater world, it became traditional by the Eleventh Century to bless the candles that were to be used in the upcoming year. Originally, just the official church candles were blessed, but eventually household candles were included as well. The Priest would bless all the candles presented, intoning the words: “Lumen ad revelationem gentium et gloriam plebis tuae Israel” (“A light to reveal You to the Gentiles, and the glory of your people Israel.”)

Possibly because the theme of this Feast was the entry of The Light into the world, or possibly because of older pagan traditions about mid-Winter – opinions are highly diverse about this – there was a tradition in lower Germany that if a certain animal should see its shadow on Candlemas day, it presaged six weeks of severe Winter weather. Again, details are hard to come by, but some sources specify that the animal in question should be a badger, others state that it should be a hedgehog. (Both of these animals “hibernate” in the Winter and very often are not awake to be looking around at shadows in early February.)

When William Penn invited German immigrants from the Pfalz to settle in his new colony, Pennsylvania, the new arrivals carried their homeland traditions with them, including the notion that February second was an important day for weather forecasting. Hedgehogs not being found in the New World, and badgers typically still sleeping away their Winter, a local substitute was found: the humble woodchuck or groundhog (Marmota monax).

The first extant mention of Groundhog Day can be read at the Pennsylvania Dutch Folklore Center. It says: “Last Tuesday, the 2nd, (February 1841) was Candlemas day, the day on which, according to the Germans, the Groundhog peeps out of his winter quarters and if he sees his shadow he pops back for another six weeks nap, but if the day be cloudy he remains out, as the weather is to be moderate.”

In any case, the tradition was a highly localized phenomenon, and not widely observed in Colonial America, nor during the years leading up to the Civil War. But, as so often happens, a good promotional campaign took a local tradition and turned it into a national event. A couple of Pennsylvania newspaper publishers decided to make a genuine event of “Groundhog Day,” and Pennsylvania’s first formal celebration of Groundhog Day began on February second, 1886, with a proclamation in The Punxsutawney Spirit by the newspaper’s editor, Clymer Freas.

Because of the novelty of the celebration and its homegrown character, early telegraph news services spread the story as human interest “filler” for their subscribers. By the late 1890s, Groundhog Day was known across the United States. In the days when radio was the major mass medium, Groundhog Day was duly reported, but it took the advent of television to make a real national spectacle of the occasion. For Groundhog Day 2001, an estimated 35,000 people gathered in Punxsutawney, Pennsylvania, just to see what the Groundhog would see.

And, just FYI: no meaningful correlation has ever been made between the Groundhog’s prognosis and the actual weather subsequently recorded! But, I must ask: what else would one expect from a giant rodent?

Jamie Rawson
Flower Mound, Texas

“If winter comes, can spring be far behind?” — Percy Bysshe Shelley