Various History

All Was Not Feigned
By Mathew Lyons | Published in History Today Volume: 64 Issue: 7 2014

The struggle between certainty and doubt is at the heart of history, says Mathew Lyons. It should be relished for what it reveals about a past where facts are sometimes in short supply.


In May Brighton College, an independent fee-paying school, announced its intention to make the study of history compulsory for all pupils through to 18. Whatever one’s view of the decision, the fact that it was considered unusual and innovative enough to make the national newspapers should give us – and anyone interested in the practice and pleasures of history – pause for thought.

Should it not be obvious why the past is worth studying all the way through school? And, if it is not obvious, do we make the case for our subject’s virtues with sufficient force? What, indeed, are its virtues?

For me history isn’t really about the past. It is about how we engage with the past, which isn’t quite the same thing. That is what makes it such an excellent educational tool: to read history is to be constantly aware of the struggle between certainty and doubt. Indeed ‘bad’ history – poor research, weak methodology, clumsy arguments and so on – can be just as instructive and illuminating as its counterpart, precisely because it draws attention to the processes and techniques that all historians use.

All history is selective. But where, then, is its truth? One way to answer that question is to consider the areas in which history is most unlike itself, the margins of the discipline where it clearly shades into other traditions of thought, where facts are at best unstable and often largely absent.

I am thinking in particular about the way in which English history in the early modern period was in the process of awkwardly coming to terms with how earlier writers, most notoriously Geoffrey of Monmouth (1100-55), had filled their narratives with fables. It has always struck me as fascinating how the potent nation-forging narrative of Holinshed’s Chronicles could be acute enough to encourage vigorous censorship from the Privy Council and yet capacious enough in its understanding of history to include Monmouth’s pagan English kings and their descent from the Trojan prince Aeneas, through Brutus, the legendary founder of Britain.

Lear is the most famous of these, but the story of his father, which appears to have been Monmouth’s invention, is altogether more fabulous.

His name was Bladud. He began his reign in the 385th year of the world. Practised in the arts of astronomy and necromancy, he used his skills in the latter to establish both the hot springs and the settlement of what became the city of Bath. Such was his intellectual ambition and self-belief that he fashioned himself a pair of wings and leapt to his death from the tower of the temple of Apollo in the city of Troynovant, as London was styled in the Brutish mythos.

Other chroniclers embellished the tale further: that Bladud spent many years studying in Athens, bringing back a number of learned men to create a university at Stamford, for example.

It is easy to mock these stories, but are they so different to today’s fashionable counterfactuals? And what do such mythical tales, and the fact that they were once found nestling comfortably in history’s arms, tell us about what history is for?

In fact it is not difficult to discern a range of still active contemporary approaches, from the almost homiletic lesson-learning, to profound questions about identity and ancestry that our subject still inspires. And then there is the mesmerising clarity of narrative itself, the desire to order and make sense of human life which history, a fact-based discipline that requires the insights of art to flourish, is perfectly placed to do. History can seduce us, even, perhaps particularly, those narratives that, in Milton’s evocative phrase, can be ‘exploded for fictions’. Yet history also equips us with the tools to undeceive ourselves.

It is worth quoting Milton’s own justification for including mythical and quasi-mythical narratives in his history of Britain:

Ofttimes relations heretofore accounted fabulous have been after found to contain in them many footsteps and reliques of something true … All was not feigned.

Footsteps and relics: the relics of the past are countless in number. But, to paraphrase Dryden on Ben Jonson’s use of the Classics, when we try to make sense of them, we find our own footprints everywhere in their snow.

Mathew Lyons is author of The Favourite: Ralegh and His Queen (Constable & Robinson, 2011).
Spanning Centuries: London Bridge
By Leo Hollis | Published in History Today Volume: 59 Issue: 7 2009

Until 1729, London Bridge was the capital’s only crossing over the Thames and a microcosm of the city it served, lined with houses and shops on either side.


Every city has its foundation stones, often wrapped up in myth, conjecture and odd truths: Rome was created out of the walls built by Romulus on the Palatine Hill; Paris was born of an artificial island midstream of the Seine; London emerged out of a river crossing.

In AD43 soldiers of the army of Emperor Claudius chased their routed enemy up the river following the gruesome Battle of the Medway in Kent. According to the Roman historian Cassius Dio, as the natives crossed the river, darting between the shingle islets and mud banks, the legionnaires continued their pursuit:

German units swam across, and others crossed a little higher upstream by a bridge. They attacked the British on all sides and cut off many of them; but rash pursuit led them into trackless marshes, where many were lost.

The Roman soldiers held back and set up a temporary camp, waiting for the rest of the troops, elephants and war engines. They would soon cross the river on a temporary pontoon of boats and continue the hunt on the north side. London was born of this crossing, as merchants who followed the soldiers settled on the Thames's northern bank. Even before the settlement was given a name, it had a bridge.

This year sees the 800th anniversary of the building of the stone bridge of London, completed in 1209. It is this bridge - of nursery rhyme and popular image - that spanned the Thames for over 500 years. Of all the major anniversaries that fall this year, it is the one most likely to be forgotten, but is among the most important. For without London Bridge there would be no London.

The bridge was a place of movement and transfer, of exchange and barter, where the sacred and the profane lived as neighbours, the power of the crown, the City elders and the church all jostling upon the narrow crossing. If ever one needed an image with which to comprehend the anxious diversity of London, one need look no further than its most iconic bridge.

By the 12th century, London was the undisputed capital of England and its major trading centre. In the words of the cleric William Fitzstephen writing in 1173, 'to this city from every nation under heaven merchants delight to bring their trade by sea'. Masts thronged the Pool of London to the east of the wooden bridge. By this time the importance of the structure had been firmly established. There were regular collections and tolls for its upkeep; land had been donated to provide revenue for repairs; a guild had been set up for its civic maintenance. There was even an attempt to use the income from religious indulgences to pay for repairs but clearly the fear of purgatory was not enough to provide adequate funds so a further tax on wool was levied. In 1163 a programme of improvements was overseen by Peter de Colechurch, the first recorded holder of the title of Warden of the Brethren of the Bridge. In 1176 he oversaw the construction of the first stone pier for what would, over the next 33 years, become London Bridge. De Colechurch would not live to see his commission completed.

The new stone bridge was 900 ft (270 metres) long, set upon 20 piers, or 'starlings', that spanned the river from the gravelly banks of Southwark to the northern shore. It is probable that each pier was built in turn and that they were not evenly spaced, possibly because of the uncertain lie of the river bed. The process of erecting the structures must have seemed miraculous as each rose out of the tidal waters. Archaeologists have confirmed that large elm timbers were first driven into the mud to create three layered barriers that diverted the water, leaving a trench that the masons filled with Kentish rag and rubble, upon which the stone arches would be set. The bridge was the longest in Europe at the time. It was King John who decided that the bridge needed to pay for itself and that houses, shops and chapels should line both sides of the structure. Although there would be other famous inhabited bridges - the Ponte Vecchio in Florence, the Rialto in Venice, the Pont au Change in Paris - London Bridge was the most impressive.

Approaching London from Southwark must have been a daunting, if thrilling, experience. The main roads from the numerous southern ports converged, clogged with carts, cattle and noise; the bridge was the only route into the city. Some of the goods being carried were produce to be sold at London's larder at Borough Market, still operating today, though now as a lure for foodies. The market may once have been on the bridge itself, until 1276 when all foreign suppliers were forced to trade outside the City boundaries where the rules of the guilds and the Lord Mayor did not apply. Cattle continued to be driven over the bridge to the abattoirs inside the walls - at Leadenhall, Poultry and Smithfield.

The first thing one would have seen was the Gatehouse that sat above the second starling, already over the water. By the 15th century this had become a formidable stone building which was crowned by the unsavoury spectacle of the heads of the executed, dipped in tar, and displayed on spikes. A previous incarnation was adorned with the heads of the Scottish outlaw William Wallace and Jack Cade, leader of the rebellion of 1450, welcoming all new arrivals with a warning. They would be joined in subsequent centuries by Henry VIII's unbending minister Sir Thomas More and Thomas Cromwell, who ill-advisedly recommended the less-than-beautiful Anne of Cleves to his king and paid for it with his head. In the 17th century, the aldermen gave permission for the heads to be removed to help in the cure for arsenic poisoning among workers at the Mint; it was assumed that drinking from a man's skull could be an antidote. It wasn't. After the Restoration in 1660, the gatehouse became the last resting place for the heads of the unfortunate regicides.

There was a gap after the gatehouse from where the traveller could see, for the first time, across either side to the Thames. Yet the congestion would have made any such tourist excursion almost impossible. The bridge itself was 20 ft wide, and the houses stood out a further 7 ft and even more when above a pier. There was a little less than 12 ft for the whole traffic of the city to pass both ways. Not until 1722 did the Lord Mayor try to impose some order upon the traffic, commanding that all should drive on the left - the precedent for all British road laws.

Further along the bridge there was a drawbridge, first mentioned in 1257, that could rise when large ships needed to sail through. This was made of wood and was in constant need of repair. Samuel Pepys who, in his role as Clerk of the Navy Office, rode on a daily basis to the docks on the south bank, once put his foot through the rotten planks and nearly broke his leg.

After the drawbridge there were rows of high houses on either side, turning the cramped bridge into a dark tunnel. According to a lease of 1613, each house consisted of three stories and an attic. The tenement that ran along the western edge at the centre of the crossing contained 12 rooms, two garrets and even a cellar. Ground floors were customarily the shop or workshop, with a large room on the first floor, a kitchen, hall and chamber. The bedrooms were on the second floor and the garret was used for servants or storage.

One house was said to be 'a house of many windows', when hand-blown glass was a luxury. But the most resplendent house on the bridge was Nonsuch House, built in 1580 to replace the old Drawbridge Gate. It was said to have been constructed in Holland of wood and then transported to the bridge and assembled on site without using a single nail. Despite being an early example of flat-pack living, the house lived up to its name - it was unlike anything else in the world. An elaborate Russian turret stood at each corner of the building, which could be seen from every corner of the city. There were two sundials on the south side.

Making one's way across the whole bridge, one came at last to the northern shore at Thames Street, where the warehouses stood alongside the more refined St Magnus's Martyr Church and the Fishmongers' Hall, belonging to one of the 12 grand liveries in charge of the fisheries of the City and the regulation of Billingsgate Market.

In September 1580, Londoners, the Lord Mayor and aldermen watched in awe as a spurt of water rose up from the base of the bridge, creating an arch said to be even higher than the spire of St Magnus Martyr. The Dutch engineer Peter Morice was hoping to gain approval to build a water mill under the most northern arch of the bridge that could pump the Thames into the heart of the city. By 1582, a cistern delivered water all the way up Gracechurch Street as far as Bishopsgate.

On the southern side of Nonsuch House a plaque read: 'Time and Tide wait for no Man'. Whoever sailed up to London from the Thames estuary would have been amazed by their first encounter with the bridge, which one French visitor in the 17th century called 'one of the wonders of the world'. Yet what was soon to be the world capital of trade had little time for wonders. And as London began to grow so the bridge's monopoly of the Thames came under threat.

The rapid expansion of London beyond the City walls to the west soon encouraged the promotion of new schemes and bridges to span the river. The Corporation of London fought increasingly doomed battles to maintain its control over the river crossing. It was feared that new bridges allowed traders and travellers to skirt around the city, avoiding the small matter of the bridge toll.

In 1757, in an attempt to keep the bridge competitive, its houses were finally dismantled on both sides to increase the flow of traffic. Yet even this could not satisfy the demands of a metropolis that had expanded hugely since the Great Fire. As London grew it demanded new crossings to service the burgeoning suburbs.

Even before the Fire, in 1664, a bridge at Westminster was proposed, but the Aldermen, along with the watermen and ferry boat owners, condemned the project and the king's support was bought with a loan of £100,000. Yet the needs of the expanding city would not be so easily ignored. Putney Bridge was completed in 1729, far to the west of the City, followed by Richmond Bridge, even further west, in 1777. Waterloo Bridge was completed in 1817.

By the mid-19th century, a full-scale reorganisation of the city was in hand and the 1860s saw new bridges built at Blackfriars, Hungerford Market, Westminster and Victoria. The role of London Bridge was dealt a serious blow. De Colechurch's stone structure was eventually replaced by a feat of Victorian engineering, built from 1824 to the designs of John Rennie. It is Rennie's bridge that now stands in the Arizona heat at Lake Havasu City beside a Tudor-style shopping mall.

In May 2009, the Mayor of London, Boris Johnson, presented his new dream for a 'living bridge' to span the Thames once more. Johnson has proposed bizarre and unexpected ideas before: in 2008, he discussed the building of a new airport in the middle of the Thames estuary, a plan that will more than likely remain confined to paper. The mayor's proposed new bridge, connecting the South Bank complex with Victoria Embankment, would cost millions, paid for by the shops, flats and chain eateries that would straddle it. As the mayor says, 'the bridge will once again provide a commercial zone ... a bridge that actually has residential and commercial property on it, as the old London Bridge did'.

Could a shopping mall spanning the Thames be Mayor Johnson's long-term bequest to the city? Or is it perhaps foolish to look backwards so literally? In 1600 the engraver John Norden noted that London Bridge was 'comparable in itself to a little citie' and it is worth reminding ourselves that a well-served city is also comparable to a bridge. Perhaps Mayor Johnson should have an image in his mind of the metropolis being just that, a place that brings people together, a place of exchange and movement. I hope that this 800th anniversary will remind us that London Bridge is far more powerful today as a symbol than it ever would be as an actual physical structure.

Leo Hollis is the author of The Phoenix: The Men Who Made Modern London (Phoenix, 2009)
View From A Bridge
By Denise Silvester-Carr | Published in History Today Volume: 44 Issue: 5 1994

Denise Silvester-Carr pays tribute to Tower Bridge as it celebrates its 100th birthday.


Tower Bridge is 100 years old in June. The most impressive of all bridges across the Thames, its neo-Gothic appearance has come to symbolise London all over the world. Had various alternative suggestions for the bridge succeeded, the distinctive outline could easily have been a floating roadway suspended on chains or a bridge resplendent with a paddle wheel.

What few people realise is that Tower Bridge is the unfulfilled brainchild of a little-known captain in the Royal Navy. Even as work began on Rennie's new London Bridge in March 1824, it was clear to the captain that a second bridge further down river would be needed if traffic congestion caused by the new docks in Wapping increased. The problem was that a bridge would prevent tall-masted ships entering and unloading in the Upper Pool by the Tower of London.

On December 18th, 1824, what seemed to be a solution appeared in a magazine called The Portfolio. Captain Samuel Brown proposed an elevated roadway from the Minories to Bermondsey; 1,000 yards long, it would run between the east side of the Tower moat and the dock about to be excavated on the site of the medieval hospital of St Katharine. He planned the revolutionary bridge with James Walker, a civil engineer, but basically, it was Brown's idea.

Ten years earlier Brown had invented iron chain cables and more recently built the chain pier at Brighton. The engraving in The Portfolio showed the graceful silhouette of an 80ft high bridge suspended on iron chains between four stone piers. Brown produced figures to indicate that tolls would yield £100 a day, but nothing came of this much admired undertaking and St Katharine's 'Bridge of Suspension' – the future Tower Bridge – was forgotten for almost fifty years.

Fresh impetus came in the 1870s. A million people – a third of the city's population – lived to the east of the Tower, and 'a vast thicket of shipping' lay at anchor in the Pool of London. Pressure for the long-delayed bridge mounted, and in 1876 the Corporation of London asked the Bridge House Estates Committee to look at the possibilities.

The Corporation's oddly-named committee administered the Bridge House Estates Trust which for centuries had collected tolls and rents from the battlemented houses and gabled shops on the elaborately ornamented twelfth-century London Bridge. Endowments left to 'God and the Bridge' by medieval merchants grateful for the business it brought them had accumulated enormously over the years. Since 1769 the committee had drawn on the Trust to build Blackfriars and Southwark Bridges and to replace London Bridge. Now it invited ideas for a 'Tower Bridge'.

A flood of extraordinary designs poured in. A 'duplex' low-level bridge which divided into two carriageways with swing-bridges would, claimed the designer Frederic Barnett, allow 'uninterrupted continuity of vehicular and general traffic'. Whenever a ship passed through, the up or downstream bridge would swing back and road traffic could be diverted to the alternative bridge. Among other fanciful suggestions was a 'sub-riverian arcade' (a tunnel) with a raised deck above water level and a paddle wheel ferry bridge.

Four designs were put forward by Sir Joseph Bazalgette, the engineer who embanked the Thames, but the headway for tall ships was completely inadequate. Observing that Bazalgette had ignored the 'evidence, views, wishes and interests of the wharfingers and shipowners', Horace Jones, the City's chief architect, responded by producing his own scheme.

Whether Jones had seen Captain Brown's proposal is not known but in 1878 he suggested that chains should be used to raise the road on a crossing designed to resemble a medieval drawbridge. Twin turrets were deliberately intended to look like the corners of the White Tower at the Tower of London. But the curved steel span would not give sufficient clearance for the bascules (from the French word for a see-saw) to open fully, and Jones temporarily shelved his 'hasty' plan.

Six years later, when a select committee of the House of Commons was discussing the Thames bridges, Jones resurrected the bascule bridge. With the assistance of Sir John Wolfe Barry, the engineer son of the architect of the Houses of Parliament, he submitted a modified scheme. A straight span which would act as a high level walkway was substituted for the arch; hydraulic machinery instead of chains would raise two bascules; lifts would carry passengers up to the walkways and prevent undue delay when ships were passing through. Parliament approved, and construction began in 1886.

Tower Bridge was declared open by the Prince of Wales on June 30th, 1894, and hailed as a triumphant answer to the problem that had exercised earlier engineers. Each bascule weighed 1,000 tons and lay flat in the middle of the road between the towers until ships needed to sail to or from the Upper Pool. Then the hydraulic machines powered by steam-driven engines swung the double-drawbridge into a vertical position giving vessels 200ft clearway. The operation took about six minutes.

In the first year the bridge was raised 6,160 times and about 8,000 horse-drawn vehicles and 60,000 pedestrians crossed it daily. Today, the bascules rise only about 500 times a year. Cars – around 40,000 a day – outstrip pedestrians. Following the decline of the docks in the 1960s and the closure of the up-river warehouses, the Corporation began to look at new uses for the bridge, and saw its future as a tourist attraction. Electric motors replaced the steam engines in 1975 and approval was sought to re-open the high-level walkways which, in spite of the persistent myth that they attracted suicides, had closed in 1909 because they were little used and the haunt of vagrants. The walkways were encased in glass in 1982, and since then the public has been again able to enjoy splendid views of London from 140ft above the Thames.

Last year, in preparation for the centenary, an exhibition of the history of the bridge was introduced using modern technology. In the towers, a dramatic reconstruction of the early years of indecision is shown on videos, and models of the alternative designs are on show. The robot figure of a workman recalls the men who spent eight years constructing the bridge, and machinery and display panels illustrate structural details and statistics.

In a quaint Victorian theatre in a former engine room, an attempt is made to recreate the royal opening in 1894 when the Prince and Princess of Wales drove across the bridge, and then watched a flotilla of ships sail between the towers. Cardboard cut-outs bobbing between waves in this toy theatre hardly hint at the grandeur of the spectacular ceremony which was attended by fifteen princes and princesses.

As for Sir Samuel Brown, the originator of Tower Bridge, he has been lost almost without trace. But the present structure, splendidly repainted and floodlit at night, is his memorial.
The Birth and Death of a Dock
By R.B. Oram | Published in History Today Volume: 18 Issue: 8 1968

R.B. Oram recounts an episode in the history of British shipping.


In 1805 the London Dock was opened at Wapping to ships from the Mediterranean, North Africa and the near Continental and coastal ports. It was followed in 1828 by the St. Katharine Dock built on the western side of the London Dock and hard up against the Tower of London. In 1864 the two proprietary companies amalgamated to form the London and St. Katharine Docks.

This complex of entrances, cuttings, quays and warehouses with a total area of 125 acres (water area 45 acres) and four miles of quays (26 berths for ships up to 360 ft. long) is to be sold by the owners, the Port of London Authority. The target set is September 30th, 1968, with all warehoused goods cleared by the end of the year.

These docks, known for a century and a half throughout the Seven Seas, were built for permanence. Their builders knew nothing of ‘limited obsolescence’—putting up premises whose continued usefulness could be reviewed each decade. They built for the ships of the period whose average size was not, by 1844, more than 241 tons. Inexorably this has risen; by 1903 the average size of ocean-going ships was 1,300 tons, by 1950, 2,700 and by 1963, 3,700 tons.

By 1939 the London and St. Katharine Docks had served well the purpose for which they had been built. Everything that has happened in the world of shipping since 1945, bulk carriers of 200,000 tons, container ships whose cargo consists of units of 25-30 tons (instead of the homely box of oranges or the basket of Spanish onions), has hastened the inevitable decision the Port of London Authority have now taken, to close these docks. There is sadness that so permanent a part of London, where six generations of labourers and staff have earned a living and where the wonders of an expanding world, of incalculable value, were set before Victorian eyes, is to close down.

Figuratively speaking, at five o’clock on the evening of December 31st, 1968, the Dock Manager will for the last time turn the key of the massive Main Gate and the London and St. Katharine Docks and all they stood for will be no more. It will be the end of an epoch, the end of the small-ship phase in English commerce that began in the wool and wine trade of Henry II, brought about by the intensive struggle, since 1945, to find a larger cargo unit. For three thousand years cargo has been made up in units that could be manhandled. Ships and docks have been built to conform to this limitation. The unit load and the container have swept away in the short space of twenty years the practice of three millennia.

It is interesting that so radical a change has been so concisely marked. Sailing ships lingered commercially for more than a hundred years after the first steamship crossed the Atlantic, just as bows and arrows were used by a Siberian contingent of the Russian Army at the Battle of Leipzig, nearly five hundred years after the first cannon had figured at Crecy. Never again will docks after the pattern of the London and St. Katharine be built in a maritime country.

Success had followed the opening of the West India Docks, as early as 1802, for ships engaged in the lucrative sugar trade from the West Indies.1 A safe harbourage had been provided for the many hundred ships previously laid in tiers on the river buoys, where discharge was slow and they were at the mercy of storms, floating ice and fire.2 Pilferage from the ‘Mudlarks’, the disreputable river-workers of the eighteenth century, rose to a level of thirty per cent of the incoming cargoes—to the great loss of the Revenue, as much of the imports consisted of sugar, rum, brandy and tobacco.

In the new dock ships were assured of a constant level of water, and were not subject to the erosion of their hulls through being stranded twice every twenty-four hours. The idea of an enclosed dock where work could go on, continuously, under strict supervision (river-working was restricted to daylight hours and then only at Legal quays, later extended to ‘sufferance’ wharves), appealed to the commercial community. There was, therefore, little opposition to a Bill for making the London Dock ‘as near as may be to the City of London and the seat of Commerce’; this received the Royal Assent on June 20th, 1800.

London Dock has always been accessible on foot to the City, where commerce has tended to congregate around Tower Hill and Fenchurch and Leadenhall Streets.3 For the first century of its existence horse-drawn wagons conveyed the cargoes stored in its capacious warehouses for the short distances to the town premises of importers. The original design made by D. A. Alexander, the architect of Dartmoor Prison (his works have several features in common), provided only for the Western Dock and an entrance via a large Basin at Wapping, less than a mile by water from London Bridge.

The original construction provided for warehouses to be placed some 60 ft. back from the dock water, with a cart road and narrow sheds, for sorting landed cargo, in between. The importance of tobacco was recognized by the Tobacco warehouse on the East Quay which held 24,000 hogsheads of this highly dutiable commodity. Security hitherto unknown both for ships and cargo, with facilities for storing, sampling and working valuable merchandise, was offered by the London Dock Company when they opened their dock on January 31st, 1805.

The building by Rennie had taken longer than was expected. The excavation of some thirty-five acres and the building of three miles of quays with their contiguous warehouses literally had to be done by hand. Armies of ‘navvies’ (a term invented for the much-sought-after Irish labourers who subsequently ‘navigated’ the railroads across the English countryside) lined up with their shovels, working a spit at a time over the area. The spoil was removed by horse-drawn wagons—the only form of power available.4

The London Dock Company met with little difficulty in acquiring the land at Wapping. Although the Main Entrance to their dock was within five hundred yards of the Tower of London there is record only of a brewing industry in the neighbourhood. Licensed by Henry VII in 1492 it had considerable production in Tudor times, exporting five hundred tuns at a time to Antwerp, presumably for the English Army in Flanders.

It was not until 1827 that the Eastern Dock was built. The connexion with the Thames at Shadwell made a considerable saving to shipping entering the dock. By 1815 steam-driven vessels were seen on the Thames; they were quickly adapted to the towage of sailing ships. The use of steam for cargo vessels does not seem to have been visualized until the 1840’s. The expanding imports of tea that followed the introduction of the tea-plant to the receptive soil of India at this time were housed in the Tea Warehouse which had been built in 1805.

The wool warehouses were opened in 1858 for what was to become a vast traffic in the storage, showing and public sales of imported wool. By the middle of the century the dock was flourishing. Steam had by then arrived as a propulsive power for ships.5

The nineteenth century was an era of dock building; and in 1825 the newly formed St. Katharine Dock Company began demolition and excavation of the area between the western limit of the new London Dock and the eastern side of the Tower Moat. To obtain the use of the site, St. Katharine Hospital (built in 1148 by Matilda of Boulogne, wife of King Stephen), together with 1,250 habitations, were pulled down.

Eleven thousand three hundred people had to find homes elsewhere; and much opposition, ineffectual as it turned out, was organized, especially as no effort was made to rehouse them. There was much sentimental support for the Hospital, for it had escaped the Dissolution of 1534 (Henry VIII had confirmed the liberties and the franchises in 1526). It had escaped a direct assault by the Gordon rioters of 1780, who had attacked it as ‘having been built in Popish times’.

Despite the flooding of eight acres of the site by an extra high tide on October 31st, 1827, the eleven acres of water space, the one mile of quay and the vast Bastille of many-storeyed warehouses with their underground vaults, were opened for traffic in 1828.

The technology of cargo-handling was then in its infancy and the proprietors made a daring innovation in the housing of ships’ cargoes. London Dock had constructed narrow quay transit sheds, in which goods could be sorted to marks before transfer to the warehouse. The St. Katharine Company, greatly daring, took the ship’s cargo directly into the upper floors of the adjacent warehouse, thereby considerably speeding the discharge of the ship alongside. Unfortunately, when sorting had to be done, as it too often did, there was delay and double-handling; the experiment was not repeated.

By the mid-century, the role of this dock was as clearly defined as that of London Dock. Although small ships kept the dock berths occupied for another seventy years, warehousing of goods sent up from the lower docks became the mainstay. In 1900 the dock was handling 1 million tons of shipping, with a revenue of over £1m. and with 15,000 pipes of wine and 10,000 puncheons of rum in its vaults.

Steamships were soon to outstrip sailing vessels in size, the largest sailorman of that period being about 1,500 tons. The increase in ships’ length and draught made demands on docks that London Dock could not meet. The Entrance Lock was a very real bottleneck. The position had been recognized by the building of the Royal Victoria Dock, by a separate company, opened in 1855. Seven miles below London Bridge, it had 28 ft. of water and could take ships up to 450 ft. long, 100 ft. longer than at London Dock.

The latter dock had by this time achieved a considerable trade—which riverside wharves along the Upper and Lower Pool were doing their best to capture—with the Mediterranean, North African and Scandinavian ports. The value of its secure warehouse space was recognized by merchants who entrusted the growing wealth of the five continents to its care. If the dock had to be bypassed by the larger ships now building, its proprietors could still concentrate on the housing of valuable imports.6

While continuing to receive small ships, the London Company built, and opened in 1880, the Royal Albert Dock, a continuation of the Royal Victoria Dock, to the Galleons Reach. This new dock was essentially for transit goods, those for warehousing being lightered to London Dock. To further their policy the London Dock Company, in 1864, had amalgamated with the St. Katharine Dock Company and the Victoria Dock Company.

With the opening of the Royal Albert Dock, whose entrance was eleven miles below London Bridge, the largest ships of the times could be berthed in the eighty-seven acres of deep water. The London and St. Katharine Docks Company had a pre-emptive claim on goods for warehousing. As Victorian standards of living went up, the tonnage and variety of these goods increased.

Meanwhile, the East and West India Dock Company, goaded by the building of the Royal Albert Dock, and also by the growing competition from the Millwall and the Surrey Commercial Docks, built Tilbury Dock, twenty-six miles from London Bridge. It was opened in 1886, but for some time remained empty. By 1889 it had bankrupted the parent company who were constrained to approach the London and St. Katharine Dock Company for a working agreement, which, however humiliating, would keep their new venture afloat until it could be made to pay.7

The new management became known as the London and India Docks Joint Committee. So awkward a compromise could hardly be a success. The London members saw no point in developing the property jointly held, while the India members chafed under the inert regime imposed by the majority. Shipping had advanced little during the last decade of the nineteenth century and demands made on dock accommodation were met without difficulty.

The unhappy partnership was brought to an end with the formation of the London and India Docks Company on January 1st, 1901; this owned the London, St. Katharine, East, West and South West India and the Royal Victoria and Albert Docks, leaving only the Millwall and the Surrey Commercial Docks as competitors. They were finally absorbed into the Port of London Authority, an energetic body that began its control over the five dock systems in the port of London on April 1st, 1909.

A large and well-equipped Jetty was built out from the West Quay. Shadwell Basin was developed and many ships’ berths equipped with cranes, the work being substantially completed before 1914. East Smithfield Rail Depot, a few yards outside the Main Gate, provided rail facilities that London Dock had never enjoyed and served as a feeder for exports to the Royal Docks. Only people having business there have ever been allowed into the docks of London. This virtual closing to the public led to much speculation upon the fabulous value of the cargoes handled there; little of this rumour was exaggerated.

Social reform writers were always fond of comparing the wealth locked up in the docks, behind the 20 ft. walls, and particularly at the London and St. Katharine Docks, with the meagre rewards given to the labour and staff who handled these riches. A visitor allowed into these uptown docks could have concluded that amid the gloom of Victorian London the gorgeous East was indeed held in fee.

Carpets, teas and silks from China and Japan, perfumes and essential oils, coffee, spices and drugs, were all to be seen in the transit sheds or in process of being sampled, weighed and piled in the warehouses, ‘places of special security’ as Mayhew correctly describes them. Ivory tusks from Africa had their own warehouse and Show Floor, from whence brokers bought at the periodical Public Sales. Staple imports such as wine and wool provided a rich revenue to the Company and work for hundreds of men and staff.

Twenty acres of underground storage space comprised twenty-eight separate vault systems; it was possible to walk, if you knew the way, from the Main Gate, quite near the Tower, to distant Shadwell, without once surfacing. Prior to the partial destruction of the vaults during the Second War, when both docks suffered badly from the Blitz, there were thirty miles of ‘skids’, the name given to the metalled gangways over which the casks of wine were rolled.

The vast stocks of wool, in a special range of warehouses, were housed, sampled and lotted for Public Sale by the Dock Company and its successors. Buyers from the Bradford area and the North flocked to London for examination of the many lots. The annual importation was of the order of 130,000 bales and there was storage, partly under a north light for viewing the texture of the wool, for 20,000 bales. In adjacent warehouses could be seen stocks of tin ingots and heavy metal bottles of quicksilver, both from Spain, canned goods comprising sardines and Mediterranean fruits, dried fruit from Greece, sulphur and sugar, the latter in large hogsheads that gave way in time to the jute bag.

The scent of fresh fruit in season, oranges from Spain, lemons, grapes and onions, pervaded the Western Dock, while the less attractive smell of hides and skins came from the East Quay. The pungent smell of fresh hops from Hamburg always lingered about the St. Katharine Dock, mixed with that of rubber and wine. Indigo, the handling of which coloured the faces and hands of the workers, was a thriving import until superseded by synthetic dyes.

Tobacco was largely housed in London Dock, giving rise to the ‘Queen’s Pipe’, an installation that always fascinated visitors. A huge furnace, kept permanently burning, it consumed tobacco unfit for home consumption or on which duty had not been paid. It came in useful, also, for combustible cargo such as hams that were judged to be inedible or Italian gloves whose owner had declined to pay the duty. There developed a useful trade in the by-products of the Queen’s Pipe—ashes for fertilizers and manure and the nails from the burnt wooden cases which, having been through the fire, were highly valued by gunsmiths.

What of the labour that worked in all weathers, with few or no amenities and no certain prospect of employment for the majority, even at the 4d. per hour that dock wages sank to? Mayhew has given a generally accurate picture in his London Labour and the London Poor8 of the 3,000 or more men that were employed at London Dock. This number fell to around 500 when unfavourable winds kept sailing ships from coming up the estuary.

In the 1870’s, the pay was 16s. 6d. per week. This had dropped from 24s. a week paid at the opening of the dock in 1805. In 1809 one hundred Preference Labourers were appointed and they owed their place on the list to the favour of a Director. Hours of work were from 6 a.m.-6 p.m. in summer and 7 a.m.-5 p.m. in the winter, with overtime as needed. Men could be engaged for as little as half an hour at a time.

With the slump in trade after Waterloo, the conditions in St. Katharine Dock were inferior to London’s. The establishment of 1830 allowed 225 permanent men and 200 Preference Men. The former received 16s. for a week’s work. The St. Katharine Dock Company made up for the paucity of their payments by the loftiness of the moral standard on which they insisted. ‘Honesty and sobriety were indispensable qualifications, the slightest deviation from them will be attended with immediate and irrevocable dismissal’.

No beer was allowed to be brought into the dock, nor empty cans in which wines or other liquids could be taken out. Henry Mayhew remarked on the deterioration that had overtaken labour standards by the 1860’s, as he watched the bestial and sub-human struggles for work of the cosmopolitan crowd, only a few of whom could earn a few coppers by the end of the day. He would have been distressed to have known that these conditions continued substantially until the Second War.

The staff worked in Dickensian conditions although not ill-paid by the standards of the time. They had permanency and a pension. A large draughty office held upwards of 100 clerks who wrote in copperplate in heavy ledgers; these were kept in racks below the desks during the day and in strong rooms at night. Two large open fires, one at either end, provided the only heat available, and sent most of it up the chimneys.

By ten o’clock on a winter’s morning the staff were blowing on their hands, or slipping down to the underground dining room where a slice of dripping toast and a mug of coffee could be had for a penny. A good lunch was served here for five-pence, albeit the dining room looked out on to a battery of earth closets that provided the only sanitation for the office block. By the standards of 1914 it found ready acceptance.

The area outside the dock was full of interest. Ratcliff Highway, the Regent Street of the Victorian sailor, provided all that he could need.

There were wild animal shops, of which the most famous, Jamrachs, was known throughout the Seven Seas; so were the brothels that lined the side streets. Sailors pitched out into the gutters of St. John’s Hill or Artichoke Hill, off the Highway, and tied round the middle with brown paper and string, were alleged by the older clerks to have been a common sight in the ’90’s. The robust night life of this part of London proved an attraction to Victorian high society.

At Wapping was the old Execution Dock where river pirates were hanged when caught, and nearby the ale-house from which Judge Jeffreys had been torn by the mob in 1688. A more pleasant memory was of James I, who is reputed to have hunted a stag from Wanstead and run him down in Nightingale Lane, now a grim thoroughfare enclosed by the 20 ft. walls of the London and St. Katharine Docks.

On December 31st, 1968, all this will come to an end. The site will be transformed into a housing estate for the Greater London Council.

1 Duties on sugar imported are said to have largely financed the war against Napoleon.

2 In the great storm of 1703 hundreds of ships in the River Thames were damaged, most being driven ashore. In the Howland Great Wet Dock, opened in 1694 at Rotherhithe, no ships were damaged.

3 Tilbury Dock opened in 1886, 26 miles below London Bridge, was dependent entirely on the London, Tilbury and Southend Railway. No roads were included in its original design; the dock was not effectively road served until 1950.

4 As late as 1900, when the Greenland Dock at the Surrey Commercial Dock in London was extended, the method of loading the spoil into metal skips on a primitive rail was still used, except that small shunting engines were employed in place of horses.

5 The large quantities of coal required for the early and inefficient ships’ engines left little room for cargo, before bunkering stations were installed on the main trade routes. Hence the need for carrying valuable cargoes in what space remained from coal. Hence the competition for Her Majesty’s Mails and the emergence of the Royal Mail & Union Castle Mail Steamship Lines.

6 The value attached to the warehousing of ships’ cargoes is illustrated by the attempts of docks companies to entice ships that would hand over their cargoes (of tea, tobacco, wine, sugar etc.) for housing, by the promise of free entry of the ship into the dock.

7 From being the white elephant of the port for over half a century, Tilbury Dock is now the centre for the new business in containers and packaged timber. It is ironic that the lame duck of the 1880’s has now become the business centre of a modernized port that has no use for the London and St. Katharine Docks.

8 Conditions of labour at the time of the Great Strike of 1889, led by John Burns, were described by the author in an article, ‘The Fight for the Dockers’ Tanner’, in the August 1964 issue of History Today.
Diamonds: A History Sparkling For Centuries

May 8, 2002
Correspondent David Kohn

"Diamond" comes from the Greek "adamao": "I tame" or "I subdue." The adjective "adamas" was used to describe the hardest substance known, and eventually became synonymous with diamond.

Knowledge of diamond starts in India, where it was first mined. The word most generally used for diamond in Sanskrit is transliterated as "vajra," "thunderbolt," and "indrayudha," "Indra's weapon." Because Indra is the warrior god from Vedic scriptures, the foundation of Hinduism, the thunderbolt symbol indicates much about the Indian conception of diamond.

Early descriptions of diamond date to the 4th century BC. By then diamond was a valued material. The earliest known reference to diamond is a Sanskrit manuscript by a minister in a northern Indian dynasty, dated to 320-296 BC.

Small numbers of diamonds began appearing in European regalia and jewelry in the 13th century, set as accent points among pearls in wrought gold. By the 16th century diamonds became larger and more prominent, in response to the development of diamond faceting, which enhances their brilliance and fire. Diamonds came to dominate small jewels during the 17th century and large ones by the 18th century.

In the 13th Century, Louis IX of France established a law reserving diamonds for the king. This bespeaks the rarity of diamonds and the value conferred on them at that time. Within 100 years diamonds appeared in royal jewelry of both men and women, then among the greater European aristocracy, with the wealthy merchant class showing the occasional diamond by the 17th century.

As more diamonds reached Europe, demand for them increased. The earliest diamond-cutting industry is believed to have been in Venice, a trade capital, starting sometime after 1330, and diamond cutting may have arrived in Paris by the late 14th century. By then the diamond trade route ran to Bruges and Paris, and later to Antwerp.

In 1498 the Portuguese navigator Vasco da Gama discovered the sea route to the Orient around the Cape of Good Hope, providing Europeans an end-run around the Arabic impediment to the trade of diamonds coming from India.

In the 18th century diamonds became even more abundant. They were worn principally by women. Substantial quantities of diamonds arrived from South America, making conspicuous display of the gem possible. Diamonds were reserved for evening wear, since it was considered vulgar to parade them by day. Rather than a miscellany of jewels of different types, a matched set of jewelry was now worn at all important social events.

Two events near the end of the 19th century helped change the role of diamonds for the next century. First, the discovery in the 1870s of diamond deposits of unprecedented richness in South Africa changed diamond from a rare gem to one potentially available to anyone who could afford it. Second, the French crown jewels, sold in 1887, were bought up by newly wealthy capitalists, particularly in the United States, where a taste and capacity for opulent consumption was burgeoning.

Seen under the blaze of gas and electric lighting, diamond's brilliance showed to greater advantage than colored stones, and so designs incorporated them in far greater numbers than at any time in history.

Before the 1870s diamonds were still rare, and associated with the aristocracy. In 1871, however, world annual production, derived primarily from South Africa, exceeded 1 million carats for the first time. From then on, diamonds would be produced at a prodigious rate.

Meanwhile, the fall of Napoleon III in 1870 had left the Third Republic of France with a problematic symbol of monarchy: the crown jewels, largely reset by Empress Eugenie in the style of the great Louis kings. It was decided to auction the bulk, retaining a few key objects for the State.

With French buyers such as Boucheron and Bapst in attendance, Tiffany & Co. of New York bought the major share; 22 lots for $480,000, a sum greater than the combined purchases of the 9 next-largest buyers.

Today diamonds are mined in about 25 countries, on every continent but Europe and Antarctica. However, only a few diamond deposits were known until the 20th century, when scientific understanding and technology extended diamond exploration and mining around the globe. For 1,000 years, starting in roughly the 4th century BC, India was the only source of diamonds.

In 1725, important sources were discovered in Brazil, and in the 1870s major finds in South Africa marked a dramatic increase in the diamond supply. Additional major producers now include several African countries, Siberian Russia, and Australia.

It is a modern misconception that the world's diamonds come primarily from South Africa: diamonds are a worldwide resource. The common characteristic of primary diamond deposits is the ancient terrain that hosts the kimberlite and lamproite pipes that bring diamonds to Earth's surface.

Diamond production has increased enormously in the 20th century. India's maximum production, perhaps 50,000 to 100,000 carats annually in the 16th century, is very small compared to the current production of around 100 million carats.

For the most part, except for major wars and economic recessions, diamond production has been steadily increasing since then, with non-African sources growing in relative proportion. Major production is now dominated by Australia, Botswana, Russia, and Congo Republic (Zaire), but South Africa is still a major producer, in both volume and value.

Have You Ever Tried to Sell a Diamond?

Edward Jay Epstein

Feb 1 1982, 12:00 PM ET

The diamond invention—the creation of the idea that diamonds are rare and valuable, and are essential signs of esteem—is a relatively recent development in the history of the diamond trade. Until the late nineteenth century, diamonds were found only in a few riverbeds in India and in the jungles of Brazil, and the entire world production of gem diamonds amounted to a few pounds a year. In 1870, however, huge diamond mines were discovered near the Orange River, in South Africa, where diamonds were soon being scooped out by the ton. Suddenly, the market was deluged with diamonds. The British financiers who had organized the South African mines quickly realized that their investment was endangered; diamonds had little intrinsic value—and their price depended almost entirely on their scarcity. The financiers feared that when new mines were developed in South Africa, diamonds would become at best only semiprecious gems.

The major investors in the diamond mines realized that they had no alternative but to merge their interests into a single entity that would be powerful enough to control production and perpetuate the illusion of scarcity of diamonds. The instrument they created, in 1888, was called De Beers Consolidated Mines, Ltd., incorporated in South Africa. As De Beers took control of all aspects of the world diamond trade, it assumed many forms. In London, it operated under the innocuous name of the Diamond Trading Company. In Israel, it was known as "The Syndicate." In Europe, it was called the "C.S.O." -- initials referring to the Central Selling Organization, which was an arm of the Diamond Trading Company. And in black Africa, it disguised its South African origins under subsidiaries with names like Diamond Development Corporation and Mining Services, Inc. At its height -- for most of this century -- it not only either directly owned or controlled all the diamond mines in southern Africa but also owned diamond trading companies in England, Portugal, Israel, Belgium, Holland, and Switzerland.

De Beers proved to be the most successful cartel arrangement in the annals of modern commerce. While other commodities, such as gold, silver, copper, rubber, and grains, fluctuated wildly in response to economic conditions, diamonds have continued, with few exceptions, to advance in price every year since the Depression. Indeed, the cartel seemed so superbly in control of prices -- and unassailable -- that, in the late 1970s, even speculators began buying diamonds as a guard against the vagaries of inflation and recession.

The diamond invention is far more than a monopoly for fixing diamond prices; it is a mechanism for converting tiny crystals of carbon into universally recognized tokens of wealth, power, and romance. To achieve this goal, De Beers had to control demand as well as supply. Both women and men had to be made to perceive diamonds not as marketable precious stones but as an inseparable part of courtship and married life. To stabilize the market, De Beers had to endow these stones with a sentiment that would inhibit the public from ever reselling them. The illusion had to be created that diamonds were forever -- "forever" in the sense that they should never be resold.

In September of 1938, Harry Oppenheimer, son of the founder of De Beers and then twenty-nine, traveled from Johannesburg to New York City, to meet with Gerold M. Lauck, the president of N. W. Ayer, a leading advertising agency in the United States. Lauck and N. W. Ayer had been recommended to Oppenheimer by the Morgan Bank, which had helped his father consolidate the De Beers financial empire. His bankers were concerned about the price of diamonds, which had declined worldwide.

In Europe, where diamond prices had collapsed during the Depression, there seemed little possibility of restoring public confidence in diamonds. In Germany, Austria, Italy, and Spain, the notion of giving a diamond ring to commemorate an engagement had never taken hold. In England and France, diamonds were still presumed to be jewels for aristocrats rather than the masses. Furthermore, Europe was on the verge of war, and there seemed little possibility of expanding diamond sales. This left the United States as the only real market for De Beers's diamonds. In fact, in 1938 some three quarters of all the cartel's diamonds were sold for engagement rings in the United States. Most of these stones, however, were smaller and of poorer quality than those bought in Europe, and had an average price of $80 apiece. Oppenheimer and the bankers believed that an advertising campaign could persuade Americans to buy more expensive diamonds.

Oppenheimer suggested to Lauck that his agency prepare a plan for creating a new image for diamonds among Americans. He assured Lauck that De Beers had not called on any other American advertising agency with this proposal, and that if the plan met with his father's approval, N. W. Ayer would be the exclusive agents for the placement of newspaper and radio advertisements in the United States. Oppenheimer agreed to underwrite the costs of the research necessary for developing the campaign. Lauck instantly accepted the offer.

In their subsequent investigation of the American diamond market, the staff of N. W. Ayer found that since the end of World War I, in 1919, the total amount of diamonds sold in America, measured in carats, had declined by 50 percent; at the same time, the quality of the diamonds, measured in dollar value, had declined by nearly 100 percent. An Ayer memo concluded that the depressed state of the market for diamonds was "the result of the economy, changes in social attitudes and the promotion of competitive luxuries."

Although it could do little about the state of the economy, N. W. Ayer suggested that through a well-orchestrated advertising and public-relations campaign it could have a significant impact on the "social attitudes of the public at large" and thereby channel American spending toward larger and more expensive diamonds instead of "competitive luxuries." Specifically, the Ayer study stressed the need to strengthen the association in the public's mind of diamonds with romance. Since "young men buy over 90% of all engagement rings" it would be crucial to inculcate in them the idea that diamonds were a gift of love: the larger and finer the diamond, the greater the expression of love. Similarly, young women had to be encouraged to view diamonds as an integral part of any romantic courtship.

Since the Ayer plan to romanticize diamonds required subtly altering the public's picture of the way a man courts -- and wins -- a woman, the advertising agency strongly suggested exploiting the relatively new medium of motion pictures. Movie idols, the paragons of romance for the mass audience, would be given diamonds to use as their symbols of indestructible love. In addition, the agency suggested offering stories and society photographs to selected magazines and newspapers which would reinforce the link between diamonds and romance. Stories would stress the size of diamonds that celebrities presented to their loved ones, and photographs would conspicuously show the glittering stone on the hand of a well-known woman. Fashion designers would talk on radio programs about the "trend towards diamonds" that Ayer planned to start. The Ayer plan also envisioned using the British royal family to help foster the romantic allure of diamonds. An Ayer memo said, "Since Great Britain has such an important interest in the diamond industry, the royal couple could be of tremendous assistance to this British industry by wearing diamonds rather than other jewels." Queen Elizabeth later went on a well-publicized trip to several South African diamond mines, and she accepted a diamond from Oppenheimer.

In addition to putting these plans into action, N. W. Ayer placed a series of lush four-color advertisements in magazines that were presumed to mold elite opinion, featuring reproductions of famous paintings by such artists as Picasso, Derain, Dali, and Dufy. The advertisements were intended to convey the idea that diamonds, like paintings, were unique works of art.

By 1941, the advertising agency reported to its client that it had already achieved impressive results in its campaign. The sale of diamonds had increased by 55 percent in the United States since 1938, reversing the previous downward trend in retail sales. N. W. Ayer noted also that its campaign had required "the conception of a new form of advertising which has been widely imitated ever since. There was no direct sale to be made. There was no brand name to be impressed on the public mind. There was simply an idea -- the eternal emotional value surrounding the diamond." It further claimed that "a new type of art was devised ... and a new color, diamond blue, was created and used in these campaigns.... "

In its 1947 strategy plan, the advertising agency strongly emphasized a psychological approach. "We are dealing with a problem in mass psychology. We seek to ... strengthen the tradition of the diamond engagement ring -- to make it a psychological necessity capable of competing successfully at the retail level with utility goods and services...." It defined as its target audience "some 70 million people 15 years and over whose opinion we hope to influence in support of our objectives."

N. W. Ayer outlined a subtle program that included arranging for lecturers to visit high schools across the country. "All of these lectures revolve around the diamond engagement ring, and are reaching thousands of girls in their assemblies, classes and informal meetings in our leading educational institutions," the agency explained in a memorandum to De Beers.

The agency had organized, in 1946, a weekly service called "Hollywood Personalities," which provided 125 leading newspapers with descriptions of the diamonds worn by movie stars. And it continued its efforts to encourage news coverage of celebrities displaying diamond rings as symbols of romantic involvement. In 1947, the agency commissioned a series of portraits of "engaged socialites." The idea was to create prestigious "role models" for the poorer middle-class wage-earners. The advertising agency explained, in its 1948 strategy paper, "We spread the word of diamonds worn by stars of screen and stage, by wives and daughters of political leaders, by any woman who can make the grocer's wife and the mechanic's sweetheart say 'I wish I had what she has.'"

De Beers needed a slogan for diamonds that expressed both the theme of romance and legitimacy. An N. W. Ayer copywriter came up with the caption "A Diamond Is Forever," which was scrawled on the bottom of a picture of two young lovers on a honeymoon. Even though diamonds can in fact be shattered, chipped, discolored, or incinerated to ash, the concept of eternity perfectly captured the magical qualities that the advertising agency wanted to attribute to diamonds. Within a year, "A Diamond Is Forever" became the official motto of De Beers.

In 1951, N. W. Ayer found some resistance to its million-dollar publicity blitz. It noted in its annual strategy review:

The millions of brides and brides-to-be are subjected to at least two important pressures that work against the diamond engagement ring. Among the more prosperous, there is the sophisticated urge to be different as a means of being smart.... the lower-income groups would like to show more for the money than they can find in the diamond they can afford...

To remedy these problems, the advertising agency argued, "It is essential that these pressures be met by the constant publicity to show that only the diamond is everywhere accepted and recognized as the symbol of betrothal."

N. W. Ayer was always searching for new ways to influence American public opinion. Not only did it organize a service to "release to the women's pages the engagement ring" but it set about exploiting the relatively new medium of television by arranging for actresses and other celebrities to wear diamonds when they appeared before the camera. It also established a "Diamond Information Center" that placed a stamp of quasi-authority on the flood of "historical" data and "news" it released. "We work hard to keep ourselves known throughout the publishing world as the source of information on diamonds," N. W. Ayer commented in a memorandum to De Beers, and added: "Because we have done it successfully, we have opportunities to help with articles originated by others."

N. W. Ayer proposed to apply to the diamond market Thorstein Veblen's idea, stated in The Theory of the Leisure Class, that Americans were motivated in their purchases not by utility but by "conspicuous consumption." "The substantial diamond gift can be made a more widely sought symbol of personal and family success -- an expression of socio-economic achievement," N. W. Ayer said in a report. To exploit this desire for conspicuous display, the agency specifically recommended, "Promote the diamond as one material object which can reflect, in a very personal way, a man's ... success in life." Since this campaign would be addressed to upwardly mobile men, the advertisements ideally "should have the aroma of tweed, old leather and polished wood which is characteristic of a good club."

Toward the end of the 1950s, N. W. Ayer reported to De Beers that twenty years of advertisements and publicity had had a pronounced effect on the American psyche. "Since 1939 an entirely new generation of young people has grown to marriageable age," it said. "To this new generation a diamond ring is considered a necessity to engagements by virtually everyone." The message had been so successfully impressed on the minds of this generation that those who could not afford to buy a diamond at the time of their marriage would "defer the purchase" rather than forgo it.

The campaign to internationalize the diamond invention began in earnest in the mid-1960s. The prime targets were Japan, Germany, and Brazil. Since N. W. Ayer was primarily an American advertising agency, De Beers brought in the J. Walter Thompson agency, which had especially strong advertising subsidiaries in the target countries, to place most of its international advertising. Within ten years, De Beers succeeded beyond even its most optimistic expectations, creating a billion-dollar-a-year diamond market in Japan, where matrimonial custom had survived feudal revolutions, world wars, industrialization, and even the American occupation.

Until the mid-1960s, Japanese parents arranged marriages for their children through trusted intermediaries. The ceremony was consummated, according to Shinto law, by the bride and groom drinking rice wine from the same wooden bowl. There was no tradition of romance, courtship, seduction, or prenuptial love in Japan; and none that required the gift of a diamond engagement ring. Even the fact that millions of American soldiers had been assigned to military duty in Japan for a decade had not created any substantial Japanese interest in giving diamonds as a token of love.

J. Walter Thompson began its campaign by suggesting that diamonds were a visible sign of modern Western values. It created a series of color advertisements in Japanese magazines showing beautiful women displaying their diamond rings. All the women had Western facial features and wore European clothes. Moreover, the women in most of the advertisements were involved in some activity -- such as bicycling, camping, yachting, ocean swimming, or mountain climbing -- that defied Japanese traditions. In the background, there usually stood a Japanese man, also attired in fashionable European clothes. In addition, almost all of the automobiles, sporting equipment, and other artifacts in the picture were conspicuous foreign imports. The message was clear: diamonds represent a sharp break with the Oriental past and a sign of entry into modern life.

The campaign was remarkably successful. Until 1959, the importation of diamonds had not even been permitted by the postwar Japanese government. When the campaign began, in 1967, not quite 5 percent of engaged Japanese women received a diamond engagement ring. By 1972, the proportion had risen to 27 percent. By 1978, half of all Japanese women who were married wore a diamond; by 1981, some 60 percent of Japanese brides wore diamonds. In a mere fourteen years, the 1,500-year Japanese tradition had been radically revised. Diamonds became a staple of the Japanese marriage. Japan became the second largest market, after the United States, for the sale of diamond engagement rings.

In America, which remained the most important market for most of De Beers's diamonds, N. W. Ayer recognized the need to create a new demand for diamonds among long-married couples. "Candies come, flowers come, furs come," but such ephemeral gifts fail to satisfy a woman's psychological craving for "a renewal of the romance," N. W. Ayer said in a report. An advertising campaign could instill the idea that the gift of a second diamond, in the later years of marriage, would be accepted as a sign of "ever-growing love." In 1962, N. W. Ayer asked for authorization to "begin the long-term process of setting the diamond aside as the only appropriate gift for those later-in-life occasions where sentiment is to be expressed." De Beers immediately approved the campaign.

The diamond market had to be further restructured in the mid-1960s to accommodate a surfeit of minute diamonds, which De Beers undertook to market for the Soviets. The Soviets had discovered diamond mines in Siberia, after intensive exploration, in the late 1950s: De Beers and its allies no longer controlled the diamond supply, and realized that open competition with the Soviets would inevitably lead, as Harry Oppenheimer gingerly put it, to "price fluctuations," which would weaken the carefully cultivated confidence of the public in the value of diamonds. Oppenheimer, assuming that neither party could afford to risk the destruction of the diamond invention, offered the Soviets a straightforward deal: "a single channel" for controlling the world supply of diamonds. In accepting this arrangement, the Soviets became partners in the cartel, and co-protectors of the diamond invention.

Almost all of the Soviet diamonds were under half a carat in their uncut form, and there was no ready retail outlet for millions of such tiny diamonds. When it made its secret deal with the Soviet Union, De Beers had expected production from the Siberian mines to decrease gradually. Instead, production accelerated at an incredible pace, and De Beers was forced to reconsider its sales strategy. De Beers ordered N. W. Ayer to reverse one of its themes: women were no longer to be led to equate the status and emotional commitment to an engagement with the sheer size of the diamond. A "strategy for small diamond sales" was outlined, stressing the "importance of quality, color and cut" over size. Pictures of "one quarter carat" rings would replace pictures of "up to 2 carat" rings. Moreover, the advertising agency began in its international campaign to "illustrate gems as small as one-tenth of a carat and give them the same emotional importance as larger stones." The news releases also made clear that women should think of diamonds, regardless of size, as objects of perfection: a small diamond could be as perfect as a large diamond.

De Beers devised the "eternity ring," made up of as many as twenty-five tiny Soviet diamonds, which could be sold to an entirely new market of older married women. The advertising campaign was based on the theme of recaptured love. Again, sentiments were born out of necessity: older American women received a ring of miniature diamonds because of the needs of a South African corporation to accommodate the Soviet Union.

The new campaign met with considerable success. The average size of diamonds sold fell from one carat in 1939 to .28 of a carat in 1976, which coincided almost exactly with the average size of the Siberian diamonds De Beers was distributing. However, as American consumers became accustomed to the idea of buying smaller diamonds, they began to perceive larger diamonds as ostentatious. By the mid-1970s, the advertising campaign for smaller diamonds was beginning to seem too successful. In its 1978 strategy report, N. W. Ayer said, "a supply problem has developed ... that has had a significant effect on diamond pricing"—a problem caused by the long-term campaign to stimulate the sale of small diamonds. "Owing to successful pricing, distribution and advertising policies over the last 16 years, demand for small diamonds now appears to have significantly exceeded supply even though supply, in absolute terms, has been increasing steadily." Whereas there was not a sufficient supply of small diamonds to meet the demands of consumers, N. W. Ayer reported that "large stone sales (1 carat and up) ... have maintained the sluggish pace of the last three years." Because of this, the memorandum continued, "large stones are being ... discounted by as much as 20%."

The shortage of small diamonds proved temporary. As Soviet diamonds continued to flow into London at an ever-increasing rate, De Beers's strategists came to the conclusion that this production could not be entirely absorbed by "eternity rings" or other new concepts in jewelry, and began looking for markets for miniature diamonds outside the United States. Even though De Beers had met with enormous success in creating an instant diamond "tradition" in Japan, it was unable to create a similar tradition in Brazil, Germany, Austria, or Italy. By paying the high cost involved in absorbing this flood of Soviet diamonds each year, De Beers prevented — at least temporarily — the Soviet Union from taking any precipitous actions that might cause diamonds to start glutting the market. N. W. Ayer argued that "small stone jewelry advertising" could not be totally abandoned: "Serious trade relationship problems would ensue if, after fifteen years of stressing 'affordable' small stone jewelry, we were to drop all of these programs."

Instead, the agency suggested a change in emphasis in presenting diamonds to the American public. In the advertisements to appear in 1978, it planned to substitute photographs of one-carat-and-over stones for photographs of smaller diamonds, and to resume both an "informative advertising campaign" and an "emotive program" that would serve to "reorient consumer tastes and price perspectives towards acceptance of solitaire [single-stone] jewelry rather than multi-stone pieces." Other "strategic refinements" it recommended were designed to restore the status of the large diamond. "In fact, this [campaign] will be the exact opposite of the small stone informative program that ran from 1965 to 1970 that popularized the 'beauty in miniature' concept...." With an advertising budget of some $9.69 million, N. W. Ayer appeared confident that it could bring about this "reorientation."

N. W. Ayer learned from an opinion poll it commissioned from the firm of Daniel Yankelovich, Inc. that the gift of a diamond contained an important element of surprise. "Approximately half of all diamond jewelry that the men have given and the women have received were given with zero participation or knowledge on the part of the woman recipient," the study pointed out. N. W. Ayer analyzed this "surprise factor":

Women are in unanimous agreement that they want to be surprised with gifts.... They want, of course, to be surprised for the thrill of it. However, a deeper, more important reason lies behind this desire.... "freedom from guilt." Some of the women pointed out that if their husbands enlisted their help in purchasing a gift (like diamond jewelry), their practical nature would come to the fore and they would be compelled to object to the purchase.

Women were not totally surprised by diamond gifts: some 84 percent of the men in the study "knew somehow" that the women wanted diamond jewelry. The study suggested a two-step "gift-process continuum": first, "the man 'learns' diamonds are o.k." from the woman; then, "at some later point in time, he makes the diamond purchase decision" to surprise the woman.

Through a series of "projective" psychological questions, meant "to draw out a respondent's innermost feelings about diamond jewelry," the study attempted to examine further the semi-passive role played by women in receiving diamonds. The male-female roles seemed to resemble closely the sex relations in a Victorian novel. "Man plays the dominant, active role in the gift process. Woman's role is more subtle, more oblique, more enigmatic...." The woman seemed to believe there was something improper about receiving a diamond gift. Women spoke in interviews about large diamonds as "flashy, gaudy, overdone" and otherwise inappropriate. Yet the study found that "Buried in the negative attitudes ... lies what is probably the primary driving force for acquiring them. Diamonds are a traditional and conspicuous signal of achievement, status and success." It noted, for example, "A woman can easily feel that diamonds are 'vulgar' and still be highly enthusiastic about receiving diamond jewelry." The element of surprise, even if it is feigned, plays the same role of accommodating dissonance in accepting a diamond gift as it does in prime sexual seductions: it permits the woman to pretend that she has not actively participated in the decision. She thus retains both her innocence—and the diamond.

For advertising diamonds in the late 1970s, the implications of this research were clear. To induce men to buy diamonds for women, advertising should focus on the emotional impact of the "surprise" gift transaction. In the final analysis, a man was moved to part with earnings not by the value, aesthetics, or tradition of diamonds but by the expectation that a "gift of love" would enhance his standing in the eyes of a woman. On the other hand, a woman accepted the gift as a tangible symbol of her status and achievements.

By 1979, N. W. Ayer had helped De Beers expand its sales of diamonds in the United States to more than $2.1 billion, at the wholesale level, compared with a mere $23 million in 1939. In forty years, the value of its sales had increased nearly a hundredfold. The expenditure on advertisements, which began at a level of only $200,000 a year and gradually increased to $10 million, seemed a brilliant investment.

Except for those few stones that have been destroyed, every diamond that has been found and cut into a jewel still exists today and is literally in the public's hands. Some hundred million women wear diamonds, while millions of others keep them in safe-deposit boxes or strongboxes as family heirlooms. It is conservatively estimated that the public holds more than 500 million carats of gem diamonds, which is more than fifty times the number of gem diamonds produced by the diamond cartel in any given year. Since the quantity of diamonds needed for engagement rings and other jewelry each year is satisfied by the production from the world's mines, this half-billion-carat supply of diamonds must be prevented from ever being put on the market. The moment a significant portion of the public begins selling diamonds from this inventory, the price of diamonds cannot be sustained. For the diamond invention to survive, the public must be inhibited from ever parting with its diamonds.

In developing a strategy for De Beers in 1953, N. W. Ayer said: "In our opinion old diamonds are in 'safe hands' only when widely dispersed and held by individuals as cherished possessions valued far above their market price." As far as De Beers and N. W. Ayer were concerned, "safe hands" belonged to those women psychologically conditioned never to sell their diamonds. This conditioning could not be attained solely by placing advertisements in magazines. The diamond-holding public, which includes people who inherit diamonds, had to remain convinced that diamonds retained their monetary value. If it saw price fluctuations in the diamond market and attempted to dispose of diamonds to take advantage of changing prices, the retail market would become chaotic. It was therefore essential that De Beers maintain at least the illusion of price stability.

In the 1971 De Beers annual report, Harry Oppenheimer explained the unique situation of diamonds in the following terms: "A degree of control is necessary for the well-being of the industry, not because production is excessive or demand is falling, but simply because wide fluctuations in price, which have, rightly or wrongly, been accepted as normal in the case of most raw materials, would be destructive of public confidence in the case of a pure luxury such as gem diamonds, of which large stocks are held in the form of jewelry by the general public." During the periods when production from the mines temporarily exceeds the consumption of diamonds—the balance is determined mainly by the number of impending marriages in the United States and Japan—the cartel can preserve the illusion of price stability by either cutting back the distribution of diamonds at its London "sights," where, ten times a year, it allots the world's supply of diamonds to about 300 hand-chosen dealers, called "sight-holders," or by itself buying back diamonds at the wholesale level. The underlying assumption is that as long as the general public never sees the price of diamonds fall, it will not become nervous and begin selling its diamonds. If this huge inventory should ever reach the market, even De Beers and all the Oppenheimer resources could not prevent the price of diamonds from plummeting.

Selling individual diamonds at a profit, even those held over long periods of time, can be surprisingly difficult. For example, in 1970, the London-based consumer magazine Money Which? decided to test diamonds as a decade-long investment. It bought two gem-quality diamonds, weighing approximately one-half carat apiece, from one of London's most reputable diamond dealers, for £400 (then worth about a thousand dollars). For nearly nine years, it kept these two diamonds sealed in an envelope in its vault. During this same period, Great Britain experienced inflation that ran as high as 25 percent a year. For the diamonds to have kept pace with inflation, they would have had to increase in value at least 300 percent, making them worth some £1,600 by 1978. But when the magazine's editor, Dave Watts, tried to sell the diamonds in 1978, he found that neither jewelry stores nor wholesale dealers in London's Hatton Garden district would pay anywhere near that price for the diamonds. Most of the stores refused to pay any cash for them; the highest bid Watts received was £500, which amounted to a profit of only £100 in over eight years, or less than 3 percent at a compound rate of interest. If the bid were calculated in 1970 pounds, it would amount to only £167. Dave Watts summed up the magazine's experiment by saying, "As an 8-year investment the diamonds that we bought have proved to be very poor." The problem was that the buyer, not the seller, determined the price.
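The magazine's figures can be checked with a little compound-interest arithmetic. A minimal sketch, using only the numbers reported above (the £400 purchase, the £500 best offer, roughly eight years of holding):

```python
# Money Which? experiment: bought for £400, best offer £500 about eight years later.
purchase, best_offer, years = 400, 500, 8

# Compound annual rate of return implied by the £100 profit.
cagr = (best_offer / purchase) ** (1 / years) - 1
print(f"compound annual return: {cagr:.2%}")  # under 3 percent, as the magazine noted

# The magazine's £167 figure for the offer "in 1970 pounds" implies a
# cumulative inflation factor of roughly 3x over the holding period.
implied_inflation = best_offer / 167
print(f"implied cumulative inflation: {implied_inflation:.1f}x")
```

At roughly 2.8 percent a year, the nominal gain was swamped by inflation running as high as 25 percent a year, which is why the real value of the offer fell below the original purchase price.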

The magazine conducted another experiment to determine the extent to which larger diamonds appreciate in value over a one-year period. In 1970, it bought a 1.42 carat diamond for £745. In 1971, the highest offer it received for the same gem was £568. Rather than sell it at such an enormous loss, Watts decided to extend the experiment until 1974, when he again made the round of the jewelers in Hatton Garden to have it appraised. During this tour of the diamond district, Watts found that the diamond had mysteriously shrunk in weight to 1.04 carats. One of the jewelers had apparently switched diamonds during the appraisal. In that same year, Watts, undaunted, bought another diamond, this one 1.4 carats, from a reputable London dealer. He paid £2,595. A week later, he decided to sell it. The maximum offer he received was £1,000.

In 1976, the Dutch Consumer Association also tried to test the price appreciation of diamonds by buying a perfect diamond of over one carat in Amsterdam, holding it for eight months, and then offering it for sale to the twenty leading dealers in Amsterdam. Nineteen refused to buy it, and the twentieth dealer offered only a fraction of the purchase price.

Selling diamonds can also be an extraordinarily frustrating experience for private individuals. In 1978, for example, a wealthy woman in New York City decided to sell back a diamond ring she had bought from Tiffany two years earlier for $100,000 and use the proceeds toward a necklace of matched pearls that she fancied. She had read about the "diamond boom" in news magazines and hoped that she might make a profit on the diamond. Instead, the sales executive explained, with what she said seemed to be a touch of embarrassment, that Tiffany had "a strict policy against repurchasing diamonds." He assured her, however, that the diamond was extremely valuable, and suggested another Fifth Avenue jewelry store. The woman went from one leading jeweler to another, attempting to sell her diamond. One store offered to swap it for another jewel, and two other jewelers offered to accept the diamond "on consignment" and pay her a percentage of what they sold it for, but none of the half-dozen jewelers she visited offered her cash for her $100,000 diamond. She finally gave up and kept the diamond.

Retail jewelers, especially the prestigious Fifth Avenue stores, prefer not to buy back diamonds from customers, because the offer they would make would most likely be considered ridiculously low. The "keystone," or markup, on a diamond and its setting may range from 100 to 200 percent, depending on the policy of the store; if it bought diamonds back from customers, it would have to buy them back at wholesale prices. Most jewelers would prefer not to make a customer an offer that might be deemed insulting and also might undercut the widely held notion that diamonds go up in value. Moreover, since retailers generally receive their diamonds from wholesalers on consignment, and need not pay for them until they are sold, they would not readily risk their own cash to buy diamonds from customers. Rather than offer customers a fraction of what they paid for diamonds, retail jewelers almost invariably recommend to their clients firms that specialize in buying diamonds "retail."

The firm perhaps most frequently recommended by New York jewelry shops is Empire Diamonds Corporation, which is situated on the sixty-sixth floor of the Empire State Building, in midtown Manhattan. Empire's reception room, which resembles a doctor's office, is usually crowded with elderly women who sit nervously in plastic chairs waiting for their names to be called. One by one, they are ushered into a small examining room, where an appraiser scrutinizes their diamonds and makes them a cash offer. "We usually can't pay more than a maximum of 90 percent of the current wholesale price," says Jack Brod, president of Empire Diamonds. "In most cases we have to pay less, since the setting has to be discarded, and we have to leave a margin for error in our evaluation—especially if the diamond is mounted in a setting." Empire removes the diamonds from their settings, which are sold as scrap, and resells them to wholesalers. Because of the steep markup on diamonds, individuals who buy retail and in effect sell wholesale often suffer enormous losses. For example, Brod estimates that a half-carat diamond ring, which might cost $2,000 at a retail jewelry store, could be sold for only $600 at Empire.
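The loss Brod describes follows directly from the arithmetic of the keystone markup. A minimal sketch, using only the figures stated above (the $2,000 retail price, the 100-to-200-percent markup range, and Empire's stated ceiling of 90 percent of wholesale):

```python
# Buying retail and selling wholesale: the spread described in the article.
retail_price = 2_000  # Brod's example: a half-carat ring at a retail jeweler

for markup in (1.0, 2.0):                    # 100% and 200% "keystone" markup
    wholesale = retail_price / (1 + markup)  # implied wholesale value of the ring
    best_buyback = 0.90 * wholesale          # Empire's maximum: 90% of wholesale
    loss = retail_price - best_buyback
    print(f"markup {markup:.0%}: wholesale ${wholesale:,.0f}, "
          f"best buy-back ${best_buyback:,.0f}, loss ${loss:,.0f}")
```

At the 200 percent markup, the buy-back ceiling works out to exactly $600, matching Brod's estimate; and that is before the further deductions he mentions for the discarded setting and the margin for appraisal error.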

The appraisers at Empire Diamonds examine thousands of diamonds a month but rarely turn up a diamond of extraordinary quality. Almost all the diamonds they find are slightly flawed, off-color, commercial-grade diamonds. The chief appraiser says, "When most of these diamonds were purchased, American women were concerned with the size of the diamond, not its intrinsic quality." He points out that the setting frequently conceals flaws, and adds, "The sort of flawless, investment-grade diamond one reads about is almost never found in jewelry."

Many of the elderly women who bring their jewelry to Empire Diamonds and other buying services have been victims of burglaries or muggings and fear further attempts. Thieves, however, have an even more difficult time selling diamonds than their victims. When suspicious-looking characters turn up at Empire Diamonds, they are asked to wait in the reception room, and the police are called in. In January of 1980, for example, a disheveled youth came into Empire with a bag full of jewelry that he called "family heirlooms." When Brod pointed out that a few pieces were imitations, the youth casually tossed them into the wastepaper basket. Brod buzzed for the police.

When thieves bring diamonds to underworld "fences," they usually get only a pittance for them. In 1979, for example, New York City police recovered stolen diamonds with an insured value of $50,000 which had been sold to a fence for only $200. According to the assistant district attorney who handled the case, the fence was unable to dispose of the diamonds on 47th Street, and he was eventually turned in by one of the diamond dealers he contacted.

While those who attempt to sell diamonds often experience disappointment at the low price they are offered, stories in gossip columns suggest that diamonds are resold at enormous profits. This is because the column items are not about the typical diamond ring that a woman desperately attempts to peddle to small stores and diamond buying services like Empire but about truly extraordinary diamonds that movie stars sell, or claim to sell, in a publicity-charged atmosphere. The legend created around the so-called "Elizabeth Taylor" diamond is a case in point. This pear-shaped diamond, which weighed 69.42 carats after it had been cut and polished, was the fifty-sixth largest diamond in the world and one of the few large-cut diamonds in private hands. Except that it was a diamond, it had little in common with the millions of small stones that are mass-marketed each year in engagement rings and other jewelry.

A serious threat to the stability of the diamond invention came in the late 1970s from the sale of "investment" diamonds to speculators in the United States. A large number of fraudulent investment firms, most of them in Arizona, began telephoning prospective clients drawn from various lists of professionals and investors who had recently sold stock. "Boiler-room operators," many of them former radio and television announcers, persuaded strangers to buy mail-order diamonds as investments that were supposedly much safer than stocks or bonds. Many of the newly created firms also held "diamond-investment seminars" in expensive resort hotels, where they presented impressive graphs and data. Typically assisted by a few well-rehearsed shills in the audience, the seminar leaders sold sealed packets of diamonds to the audience. The leaders often played on the fear of elderly investors that their relatives might try to seize their cash assets and commit them to nursing homes. They suggested that the investors could stymie such attempts by putting their money into diamonds and hiding them.

The sealed packets distributed at these seminars and through the mail included certificates guaranteeing the quality of the diamonds—as long as the packets remained sealed. Customers who broke the seal often learned from independent appraisers that their diamonds were of a quality inferior to that stated. Many were worthless. Complaints proliferated so fast that, in 1978, the attorney general of New York created a "diamond task force" to investigate the hundreds of allegations of fraud.

Some of the entrepreneurs were relative newcomers to the diamond business. Rayburne Martin, who went from De Beers Diamond Investments, Ltd. (no relation to the De Beers cartel) to Tel-Aviv Diamond Investments, Ltd.—both in Scottsdale, Arizona—had a record of embezzlement and securities law violations in Arkansas, and was a fugitive from justice during most of his tenure in the diamond trade. Harold S. McClintock, also known as Harold Sager, had been convicted of stock fraud in Chicago and involved in a silver-bullion-selling caper in 1974 before he helped organize DeBeers Diamond Investments, Ltd. Don Jay Shure, who arranged to set up another DeBeers Diamond Investments, Ltd., in Irvine, California, had also formerly been convicted of fraud. Bernhard Dohrmann, the "marketing director" of the International Diamond Corporation, had served time in jail for security fraud in 1976. Donald Nixon, the nephew of former President Richard M. Nixon, and fugitive financier Robert L. Vesco were, according to the New York State attorney general, participating in the late 1970s in a high-pressure telephone campaign to sell "overvalued or worthless diamonds" by employing "a battery of silken-voiced radio and television announcers." Among the diamond salesmen were also a wide array of former commodity and stock brokers who specialized in attempting to sell sealed diamonds to pension funds and retirement plans.

In London, the real De Beers, unable to stifle all the bogus entrepreneurs using its name, decided to explore the potential market for investment gems. It announced in March of 1978 a highly unusual sort of "diamond fellowship" for selected retail jewelers. Each jeweler who participated would pay a $2,000 fellowship fee. In return, he would receive a set of certificates for investment-grade diamonds, contractual forms for "buy-back" guarantees, promotional material, and training in how to sell these unmounted diamonds to an entirely new category of customers. The selected retailers would then sell loose stones rather than fine jewels, with certificates guaranteeing their value at $4,000 to $6,000.

De Beers's modest move into the investment-diamond business caused a tremor of concern in the trade. De Beers had always strongly opposed retailers selling "investment" diamonds, on the grounds that because customers had no sentimental attachment to such diamonds, they would eventually attempt to resell them and cause sharp price fluctuations.

If De Beers had changed its policy toward investment diamonds, it was not because it wanted to encourage the speculative fever that was sweeping America and Europe. De Beers had "little choice but to get involved," as one De Beers executive explained. Many established diamond dealers had rushed into the investment field to sell diamonds to financial institutions, pension plans, and private investors. It soon became apparent in the Diamond Exchange in New York that selling unmounted diamonds to investors was far more profitable than selling them to jewelry shops. By early 1980, David Birnbaum, a leading dealer in New York, estimated that nearly a third of all diamond sales in the United States were, in terms of dollar value, of these unmounted investment diamonds. "Only five years earlier, investment diamonds were only an insignificant part of the business," he said. Even if De Beers did not approve of this new market in diamonds, it could hardly ignore a third of the American diamond trade.

To make a profit, investors must at some time find buyers who are willing to pay more for their diamonds than they did. Here, however, investors face the same problem as those attempting to sell their jewelry: there is no unified market in which to sell diamonds. Although dealers will quote the prices at which they are willing to sell investment-grade diamonds, they seldom give a set price at which they are willing to buy diamonds of the same grade. In 1977, for example, Jewelers' Circular Keystone polled a large number of retail dealers and found a difference of over 100 percent in offers for the same quality of investment-grade diamonds. Moreover, even though most investors buy their diamonds at or near retail price, they are forced to sell at wholesale prices. As Forbes magazine pointed out, in 1977, "Average investors, unfortunately, have little access to the wholesale market. Ask a jeweler to buy back a stone, and he'll often begin by quoting a price 30% or more below wholesale." Since the difference between wholesale and retail is usually at least 100 percent in investment diamonds, any gain from the appreciation of the diamonds will probably be lost in selling them.

"There's going to come a day when all those doctors, lawyers, and other fools who bought diamonds over the phone take them out of their strongboxes, or wherever, and try to sell them," one dealer predicted last year. Another gave a gloomy picture of what would happen if this accumulation of diamonds were suddenly sold by speculators. "Investment diamonds are bought for $30,000 a carat, not because any woman wants to wear them on her finger but because the investor believes they will be worth $50,000 a carat. He may borrow heavily to leverage his investment. When the price begins to decline, everyone will try to sell their diamonds at once. In the end, of course, there will be no buyers for diamonds at $30,000 a carat or even $15,000. At this point, there will be a stampede to sell investment diamonds, and the newspapers will begin writing stories about the great diamond crash. Investment diamonds constitute, of course, only a small fraction of the diamonds held by the public, but when women begin reading about a diamond crash, they will take their diamonds to retail jewelers to be appraised and find out that they are worth less than they paid for them. At that point, people will realize that diamonds are not forever, and jewelers will be flooded with customers trying to sell, not buy, diamonds. That will be the end of the diamond business."

But a panic on the part of investors is not the only event that could end the diamond business. De Beers is at this writing losing control of several sources of diamonds that might flood the market at any time, deflating forever the price of diamonds.

In the winter of 1978, diamond dealers in New York City were becoming increasingly concerned about the possibility of a serious rupture, or even collapse, of the "pipeline" through which De Beers's diamonds flow from the cutting centers in Europe to the main retail markets in America and Japan. This pipeline, a crucial component of the diamond invention, is made up of a network of brokers, diamond cutters, bankers, distributors, jewelry manufacturers, wholesalers, and diamond buyers for retail establishments. Most of the people in this pipeline are Jewish, and virtually all are closely interconnected, through family ties or long-standing business relationships.

An important part of the pipeline goes from London to diamond-cutting factories in Tel Aviv to New York; but in Israel, diamond dealers were stockpiling supplies of diamonds rather than processing and passing them through the pipeline to New York. Since the early 1970s, when diamond prices were rapidly increasing and Israeli currency was depreciating by more than 50 percent a year, it had been more profitable for Israeli dealers to keep the diamonds they received from London than to cut and sell them. As more and more diamonds were taken out of circulation in Tel Aviv, an acute shortage began in New York, driving prices up.

In early 1977, Sir Philip Oppenheimer dispatched his son Anthony to Tel Aviv, accompanied by other De Beers executives, to announce that De Beers intended to cut the Israeli quota of diamonds by at least 20 percent during the coming year. This warning had the opposite effect of what he intended. Rather than paring down production to conform to this quota, Israeli manufacturers and dealers began building up their own stockpiles of diamonds, paying a premium of 100 percent or more for the unopened boxes of diamonds that De Beers shipped to Belgian and American dealers. (By selling their diamonds to the Israelis, the De Beers clients could instantly double their money without taking any risks.) Israeli buyers also moved into Africa and began buying directly from smugglers. The Intercontinental Hotel in Liberia, then the center for the sale of smuggled goods, became a sort of extension of the Israeli bourse. After the Israeli dealers purchased the diamonds, either from De Beers clients or from smugglers, they received 80 percent of the amount they had paid in the form of a loan from Israeli banks. Because of government pressure to help the diamond industry, the banks charged only 6 percent interest on these loans, well below the rate of inflation in Israel. By 1978, the banks had extended $850 million in credit to diamond dealers, an amount equal to some 5 percent of the entire gross national product of Israel. The only collateral the banks had for these loans was uncut diamonds.

De Beers estimated that the Israeli stockpile was more than 6 million carats in 1977, and growing at a rate of almost half a million carats a month. At that rate, it would be only a matter of months before the Israeli stockpile would exceed the cartel's in London. If Israel controlled such an enormous quantity of diamonds, the cartel could no longer fix the price of diamonds with impunity. At any time, the Israelis could be forced to pour these diamonds onto the world market. The cartel decided that it had no alternative but to force liquidation of the Israeli stockpile.

If De Beers wanted to bring the diamond speculation under control, it would have to clamp down on the banks, which were financing diamond purchases with artificially low interest rates. De Beers announced that it was adopting a new strategy of imposing "surcharges" on diamonds. Since these "surcharges," which might be as much as 40 percent of the value of the diamonds, were effectively a temporary price increase, they could pose a risk to banks extending credit to diamond dealers. For example, with a 40 percent surcharge, a diamond dealer would have to pay $1,400 rather than $1,000 for a small lot of diamonds; however, if the surcharge was withdrawn, the diamonds would be worth only a thousand dollars. The Israeli banks could not afford to advance 80 percent of a purchase price that included the so-called surcharge; they therefore required additional collateral from dealers and speculators. Further, they began, under pressure from De Beers, to raise interest rates on outstanding loans.
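The surcharge arithmetic described above can be sketched in a few lines. This is an illustrative calculation only, using the figures from the passage ($1,000 lot, 40 percent surcharge, 80 percent bank advance); the function name and structure are my own, not anything from the diamond trade.

```python
# Illustrative sketch of the surcharge squeeze described above.
# Figures follow the passage: a $1,000 lot, a 40% surcharge, and a
# bank advance of 80% of the purchase price.

def loan_exposure(base_price, surcharge_rate, advance_rate):
    """Return (purchase_price, loan, shortfall), where shortfall is how
    far the loan would exceed the collateral's value once the temporary
    surcharge is withdrawn and the diamonds revert to base_price."""
    purchase_price = base_price * (1 + surcharge_rate)
    loan = purchase_price * advance_rate
    shortfall = loan - base_price  # positive -> undercollateralized
    return purchase_price, loan, shortfall

price, loan, shortfall = loan_exposure(1000, 0.40, 0.80)
# price: $1,400; an 80% advance is $1,120 against diamonds worth only
# $1,000 once the surcharge lapses -- a $120 shortfall per lot.
```

This shortfall is exactly why the Israeli banks could no longer advance 80 percent of a surcharge-inflated price and began demanding additional collateral.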

Within a matter of weeks in the summer of 1978, interest rates on loans to purchase diamonds went up 50 percent. Moreover, instead of lending money based on what Israeli dealers paid for diamonds, the banks began basing their loans on the official De Beers price for diamonds. If a dealer paid more than the De Beers price for diamonds—and most Israeli dealers were paying at least double the price—he would have to finance the increment with his own funds.

To tighten the squeeze on Israel, De Beers abruptly cut off shipments of diamonds to forty of its clients who had been selling large portions of their consignments to Israeli dealers. As Israeli dealers found it increasingly difficult either to buy or finance diamonds, they were forced to sell diamonds from the stockpiles they had accumulated. Israeli diamonds poured onto the market, and prices at the wholesale level began to fall. This decline led the Israeli banks to put further pressure on dealers to liquidate their stocks to repay their loans. Hundreds of Israeli dealers, unable to meet their commitments, went bankrupt as prices continued to plunge. The banks inherited the diamonds.

Last spring, executives of the Diamond Trading Company made an emergency trip to Tel Aviv. They had been informed that three Israeli banks were holding $1.5 billion worth of diamonds in their vaults—an amount equal to nearly the annual production of all the diamond mines in the world—and were threatening to dump the hoard of diamonds onto an already depressed market. When the banks had investigated the possibilities of reselling the diamonds in Europe or the United States, they found little interest. The world diamond market was already choked with uncut and unsold diamonds. The only alternative to dumping their diamonds on the market was reselling them to De Beers itself.

De Beers, however, is in no position to absorb such a huge cache of diamonds. During the recession of the mid-1970s, it had to use a large portion of its cash reserve to buy diamonds from Russia and from newly independent countries in Africa, in order to preserve the cartel arrangement. As it added diamonds to its stockpile, De Beers depleted its cash reserves. Furthermore, in 1980, De Beers found it necessary to buy back diamonds on the wholesale markets in Antwerp to prevent a complete collapse in diamond prices. When the Israeli banks approached De Beers about the possibility of buying back the diamonds, De Beers, possibly for the first time since the depression of the 1930s, found itself severely strapped for cash. It could, of course, borrow the $1.5 billion necessary to bail out the Israeli banks, but this would strain the financial structure of the entire Oppenheimer empire.

Sir Philip Oppenheimer, Monty Charles, Michael Grantham, and other top executives from De Beers and its subsidiaries attempted to prevent the Israeli banks from dumping their hoard of diamonds. Despite their best efforts, however, the situation worsened. Last September, Israel's major banks quietly informed the Israeli government that they faced losses of disastrous proportions from defaulted accounts almost entirely collateralized with diamonds. Three of Israel's largest banks—the Union Bank of Israel, the Israel Discount Bank, and Barclays Discount Bank—had loans of some $660 million outstanding to diamond dealers, which constituted a significant portion of the bank debt in Israel. To be sure, not all of these loans were in jeopardy; but, according to bank estimates, defaults in diamond accounts rose to 20 percent of their loan portfolios. The crisis had to be resolved either by selling the diamonds that had been put up as collateral, which might precipitate a worldwide selling panic, or by some sort of outside assistance from the Israeli government or De Beers or both. The negotiations provided only stopgap assistance: De Beers would buy back a small proportion of the diamonds, and the Israeli government would not force the banks to conform to banking regulations that would result in the liquidation of the stockpile.

"Nobody took into account that diamonds, like any other commodity, can drop in value," Mark Mosevics, chairman of First International Bank of Israel, explained to The New York Times. According to industry estimates, the average one-carat flawless diamond had fallen in value by 50 percent since January of 1980. In March of 1980, for example, the benchmark value for such a diamond was $63,000; in September of 1981, it was only $23,000. This collapse of prices forced Israeli banks to sell diamonds from their stockpile at enormous discounts. One Israeli bank reportedly liquidated diamonds valued at $6 million for $4 million in cash in late 1981. It became clear to the diamond trade that a major stockpile of large diamonds was out of De Beers's control.

The most serious threat to De Beers is yet another source of diamonds that it does not control—a source so far untapped. Since Cecil Rhodes and the group of European bankers assembled the components of the diamond invention at the end of the nineteenth century, managers of the diamond cartel have shared a common nightmare—that a giant new source of diamonds would be discovered outside their purview. Sir Ernest Oppenheimer, using all the colonial connections of the British Empire, succeeded in weaving the later discoveries of diamonds in Africa into the fabric of the cartel; Harry Oppenheimer managed to negotiate a secret agreement that effectively brought the Soviet Union into the cartel. However, these brilliant efforts did not end the nightmare. In the late 1970s, vast deposits of diamonds were discovered in the Argyle region of Western Australia, near the town of Kimberley (coincidentally named after Kimberley, South Africa). Test drillings last year indicated that these pipe mines could produce up to 50 million carats of diamonds a year—more than the entire production of the De Beers cartel in 1981. Although only a small percentage of these diamonds are of gem quality, the total number produced would still be sufficient to change the world geography of diamonds. Either this 50 million carats would be brought under control or the diamond invention would be destroyed.

De Beers rapidly moved to get a stranglehold on the Australian diamonds. It began by acquiring a small, indirect interest in Conzinc Riotinto of Australia, Ltd. (CRA), the company that controlled most of the mining rights. In 1980, it offered a secret deal to CRA through which it would market the total output of Australian production. This agreement might have ended the Australian threat if Northern Mining Corporation, a minority partner in the venture, had accepted the deal. Instead, Northern Mining leaked the terms of the deal to a leading Australian newspaper, which reported that De Beers planned to pay the Australian consortium 80 percent less than the existing market price for the diamonds. This led to a furor in Australia. The opposition Labor Party charged not only that De Beers was seeking to cheat Australians out of the true value of the diamonds but that the deal with De Beers would support the policy of apartheid in South Africa. It demanded that the government impose export controls on the diamonds rather than allow them to be controlled by a South African corporation. Prime Minister Malcolm Fraser, faced with a storm of public protest, said that he saw no advantage in "arrangements in which Australian diamond discoveries only serve to strengthen a South African monopoly." He left the final decision on marketing, however, to the Western Australia state government and the mining companies, which may or may not decide to make an arrangement with De Beers.

De Beers also faces a crumbling empire in Zaire. Sir Ernest Oppenheimer had concluded, more than fifty years ago, that control over the diamond mines in Zaire (then called the Belgian Congo) was the key to the cartel's control of world production. De Beers, together with its Belgian partners, had instituted mining and sorting procedures that would maximize the production of industrial (rather than gem) diamonds. Since there was no other ready customer for the enormous quantities of industrial diamonds the Zairian mines produced, De Beers remained their only outlet. In June of last year, however, President Mobutu abruptly announced that his country's exclusive contract with a De Beers subsidiary would not be renewed. Mobutu was reportedly influenced by offers he received for Zaire's diamond production from both Indian and American manufacturers. According to one New York diamond dealer, "Mobutu simply wants a more lucrative deal." Whatever his motives, the sudden withdrawal of Zaire from the cartel further undercuts the stability of the diamond market. With increasing pressure for the independence of Namibia, and a less friendly government in neighboring Botswana, De Beers's days of control in black Africa seem numbered.

Even in the midst of this crisis, De Beers's executives in London have been maneuvering to save the diamond invention by buying up loose diamonds. The inventory of diamonds in De Beers's vault has swollen to a value of over a billion dollars—twice the value of the 1979 inventory. To rekindle the demand for diamonds, De Beers recently launched a new multimillion-dollar advertising campaign (including $400,000 for television advertisements during the British royal wedding in July), yet it can be expected to buy only a few years of time for the cartel. By the mid-1980s, the avalanche of Australian diamonds will be pouring onto the market. Unless the resourceful managers of De Beers can find a way to gain control of the various sources of diamonds that will soon crowd the market, these sources may bring about the final collapse of world diamond prices. If they do, the diamond invention will disintegrate and be remembered only as a historical curiosity, as brilliant in its way as the glittering little stones it once made so valuable.
The Diamond Myth

Articles from the past 150 years reveal the dark side of "the most brilliant of stones"

Stuart Reid
Dec 11 2006, 4:00 PM ET

It used to be that only a price tag could dissuade a would-be fiancé from buying a diamond engagement ring. But ever since the late 1980s, when bad publicity began to plague the diamond industry, guilt has become an increasingly powerful deterrent. The new film Blood Diamond, starring Leonardo DiCaprio, reflects this growing backlash, dramatizing the diamond industry’s role in Sierra Leone’s recent civil war. Although the film debuted in theaters this week, the World Diamond Council has been on the alert for months, hiring a public relations firm to defend the diamond trade and remind consumers that diamonds stand for “exquisite beauty and the timeless qualities of love and devotion.”

As a series of past articles in The Atlantic illustrates, the history of diamonds is fraught with violence, and their sentimental appeal is largely manufactured. A March 1861 article (“Diamonds and Pearls”) by Atlantic editor James T. Fields outlined the practical characteristics that had earned the diamond its place atop the gem hierarchy:

It is the most brilliant of stones, and the hardest known body. Pliny says it is so hard a substance, that, if one should be laid on an anvil and struck with a hammer, look out for the hammer! [Mem. If the reader has a particularly fine diamond, never mind Pliny’s story: the risk is something, and Pliny cannot be reached for an explanation, should his experiment fail.]

For diamonds, these remarkable material qualities translated over time into monetary value. Diamonds, the article asserted, were a solid investment:

The commercial value of gems is rarely affected, and among all articles of commerce the diamond is the least liable to depreciation. Panics that shake empires and topple trade into the dust seldom lower the cost of this king of precious stones; and there is no personal property that is so apt to remain unchanged in money-value.

But as Edward Jay Epstein uncovered in “Have You Ever Tried to Sell a Diamond?” (February 1982), the idea that diamonds make a good investment is a false one. Diamonds, he argued, are nearly impossible to sell once bought because “any gain from the appreciation of the diamonds will probably be lost in selling them.” He recounted one test conducted by a British magazine: the editor bought diamonds in 1970 and tried to sell them in 1978, but could not sell them for a price anywhere close to the one he had originally paid. Epstein also wrote of a wealthy woman who tried to resell a diamond ring she had bought for $100,000 from Tiffany & Co. in New York City. After shopping the jewel around in vain, she gave up. The problem with selling diamonds, Epstein noted, was that the buyers, not the sellers, control the price:

To make a profit, investors must at some time find buyers who are willing to pay more for their diamonds than they did. Here, however, investors face the same problem as those attempting to sell their jewelry: there is no unified market in which to sell diamonds. Although dealers will quote the prices at which they are willing to sell investment-grade diamonds, they seldom give a set price at which they are willing to buy diamonds of the same grade.

In fact, Epstein argued, the reselling of diamonds was discouraged by the diamond giant De Beers, whose livelihood depended on the perception of diamonds as “universally recognized tokens of wealth, power, and romance.” In order to stabilize the diamond market, De Beers needed to instill in the minds of consumers the concept that diamonds were forever—even though, as Epstein pointed out, “diamonds can in fact be shattered, chipped, discolored, or incinerated to ash.”

In Europe, where diamonds were thought of as jewels for the elite, the concept of giving diamond engagement rings had failed to crystallize. When prices plummeted after World War I, even wealthy Europeans lost their confidence in diamonds, and the United States became De Beers’s most promising market. In the 1930s, Epstein explained, De Beers launched a massive American advertising campaign with the help of the New York advertising agency N. W. Ayer. The campaign spawned the “diamonds are forever” motto and was credited with reviving the industry:

Since the Ayer plan to romanticize diamonds required subtly altering the public’s picture of the way a man courts—and wins—a woman, the advertising agency strongly suggested exploiting the relatively new medium of motion pictures. Movie idols, the paragons of romance for the mass audience, would be given diamonds to use as their symbols of indestructible love. In addition, the agency suggested offering stories and society photographs to selected magazines and newspapers, which would reinforce the link between diamonds and romance. Stories would stress the size of diamonds that celebrities presented to their loved ones, and photographs would conspicuously show the glittering stone on the hand of a well-known woman. Fashion designers would talk on radio programs about the “trend towards diamonds” that Ayer planned to start.

De Beers’s American marketing scheme was so successful that the increased demand for diamonds eventually spread globally. In “How to Steal a Diamond” (March 1999), Matthew Hart chronicled the effects of sky-high demand for diamonds at the other end of the pipeline, in mining countries where theft and corruption were commonplace. The black market for diamonds, he discovered, was especially prevalent in Namaqualand, a region north of Cape Town, South Africa. Miners supplemented their $350-a-month wages by smuggling diamonds from the mines and selling them to bootleggers. (The practice of smuggling was not new. In his 1861 Atlantic article, Fields had described laborers swallowing diamonds or concealing them in the corners of their eyes.) Hart explained exactly how diamonds leave the tightly-guarded mines:

Let’s say a miner spots a diamond. He may glance around to make sure that security guards are looking the other way, and press the diamond under his fingernail for later transfer to another receptacle, such as his mouth. In the event that members of the security force have been corrupted (always a possibility), he needn’t be that careful. The next step is to get the diamond out of the mining area. In one scheme workers smuggle trussed homing pigeons out to the mining areas in lunch boxes. They fit the birds with harnesses, load them with rough, and set them free. Sometimes the thieves are too ambitious. Security officials at [diamond consortium] Namdeb caught one thief when they found his pigeon dragging itself along the ground, its harness loaded beyond takeoff capacity.

These illicit diamonds, Hart explained, bankrolled the civil wars in Angola and South Africa. In South Africa, the apartheid government reportedly allowed its military to trade diamonds illegally, which entrenched the activity, Hart argued.

As another Atlantic article pointed out, the diamond industry played an integral role in South Africa's development as a wealthy, racially divided nation. After massive stores of diamonds were discovered near the Orange River in South Africa during the nineteenth century, it was whites who became the owners of the mines and blacks who became the laborers who slogged in them. "It was only some seventy or eighty years ago that the gold and diamond mines first began to call upon the labor of large numbers of Africans," read a June 1960 Atlantic "Report on South Africa." It updated readers on the plight of blacks in South Africa and presciently predicted the reluctance with which white South Africans would relinquish their disproportionate share of power:

South Africa is the most modern, most highly industrialized and wealthiest country in Africa, and its modernity, its industry, and its wealth all depend upon the labor of the blacks in the cities and towns and farms of South Africa. The government of South Africa is as anxious as any government anywhere else in the world to have its country increase in wealth, productivity, and power, and for this reason it never has had and never will have the intention of separating from white South Africa the black workers, out of whose toil the wealth of the country comes.

Thousands of miles north in The Congo (now the Democratic Republic of the Congo), the illicit diamond trade also flourished. A September 1963 “Report on the Congo” detailed the web of smuggling. After political unrest forced European diamond workers out of the region, smuggling surged as production continued. “Nearly everybody is in on the racket,” the report indicated before describing an incident in which a cabinet minister was caught chartering a plane loaded with 2,000 carats of stolen diamonds he intended to sell.

Thirty years later, in “Zaire: An African Horror Story” (August 1993), Bill Berkeley exposed the political instability in the same country—called Zaire at that point—identifying its ruler, President Mobutu, as hopelessly corrupt and its ruling governmental ideology as “kleptocracy.” At the center of Zaire’s corruption, of course, lay diamonds:

Zaire is one of the world’s largest producers of diamonds. Last year recorded diamond exports came to $230 million. Unrecorded exports? “Anybody’s guess,” a diplomat told me, “but certainly larger, by a substantial margin.” Reportedly, an array of mostly Lebanese diamond buyers, working with silent partners in the Central Bank and in the military, are reaping hefty profits in a complex foreign-exchange scam involving a parallel market in checks worth as much as forty times the official exchange rate. They bring in their foreign currency, exchange it for zaires with their silent partners, and then head for the diamond mines. The proceeds leaving the back door of the Central Bank are keeping afloat Mobutu's extended “family” of relatives, elite troops, ethnic kinsmen, and followers.

With films like Blood Diamond now translating these perspectives to the big screen, consumers may begin to associate the glittering stones not with love and eternity but with the turmoil they cause on the way to the jeweler’s display case. The diamond industry is, in the end, much like the diamond itself. To the untrained eye, it might appear radiant and unbreakable. But under intense magnification and scrutiny, it is flawed.

—Stuart Reid

History Overview

Horse racing is an ancient sport. Its origins date back to about 4500 BC among the nomadic tribesmen of Central Asia (who first domesticated the horse). Since then, horse racing has flourished as the sport of kings. In the modern day, horse racing is one of the few forms of gambling that is legal throughout most of the world, including the United States.

Horse racing is one of the most widely attended spectator sports in America. In 1989, over 50 million people attended 8,000 days of racing and wagered over $9 billion. Horse racing is also a popular sport in Canada, Great Britain, Ireland, the Middle East, South America and Australia.

In the United States, the most popular races involve Thoroughbred horses running over flat courses of between 3/4 of a mile and 1 1/4 miles. Quarter horse racing and harness racing are also popular.

Thoroughbred Racing

Since the beginning of recorded history, horse racing has been an organized sport in all major civilizations around the globe. The ancient Greek Olympics had events for both chariot and mounted horse racing. The sport was also very popular in the Roman Empire.

The origins of modern racing lie in the 12th century, when English knights returned from the Crusades with swift Arab horses. During the next 4 centuries, an increasing number of Arab stallions were imported and bred to English mares in order to produce horses that possessed both speed and endurance. The nobility would wager privately on match races between the fastest of these horses.

During the reign of Queen Anne (1702-1714), horse racing began to become a professional sport. Match racing evolved into multi-horse races on which the spectators wagered. Racecourses emerged all over England, offering increasingly large purses to attract the best horses. The purses made breeding and owning racehorses more profitable, and the rapid expansion of the sport created the need for a central governing authority. In 1750, racing's elite met at Newmarket to form the Jockey Club, an organization that regulates English racing to this day.

The Jockey Club wrote rules of racing and sanctioned racecourses to conduct meetings. Standards defining the quality of races resulted in the designation of specific races as the ultimate tests of excellence. Since 1814, five races for three-year-olds have been called "classics." The English Triple Crown is made up of three races (open to colts and fillies): the 2,000 Guineas, the Epsom Derby and the St. Leger Stakes. There are two classic races open to fillies only: the 1,000 Guineas and the Epsom Oaks.

The Jockey Club also worked to regulate racehorse breeding. James Weatherby, whose family did accounting for members of the Jockey Club, was given the duty of tracing the pedigree of every racehorse in England. In 1791, he published the results of his research as the Introduction to the General Stud Book. From 1793 to today, members of the Weatherby family have recorded the pedigree of every descendant of those racehorses in subsequent volumes of the General Stud Book. By the early 1800s, the only horses allowed to race were those descended from the horses listed in the General Stud Book. These horses were called "Thoroughbreds." Every Thoroughbred can be traced back to one of three stallions, called the "foundation sires": the Byerley Turk (foaled c.1679), the Darley Arabian (foaled c.1700) and the Godolphin Arabian (foaled c.1724).

Thoroughbred Racing in America

British settlers brought horses (and horse racing) to America. The first racetrack was laid out on Long Island in 1665. Although the sport was popular locally for some time, organized racing did not exist until 1868, after the Civil War, when the American Stud Book was started. For the next several decades, during the industrial expansion, gambling on racehorses, and horse racing itself, exploded. By 1890, there were 314 tracks operating across the United States.

The rapid growth of horse racing without a governing authority led to the domination of many tracks by criminal elements. In 1894, the nation's biggest track and stable owners met in New York to form an American Jockey Club. Modeled on the English Jockey Club, it soon ruled racing with an iron fist and eliminated much of the corruption.

In the early 1900s, racing in the United States was almost wiped out by antigambling sentiment that led almost all states to ban bookmaking. By 1908, only 25 tracks remained. That same year, pari-mutuel betting was introduced at the Kentucky Derby, and it created a turnaround for the sport. Many state legislatures agreed to legalize pari-mutuel betting in exchange for a cut of the money wagered. As a result, more tracks opened. By the end of World War I, prosperity and great horses like Man o' War brought spectators flocking to racetracks. Horse racing flourished until World War II, then lost popularity during the 1950s and 1960s. There was a resurgence in the 1970s, triggered by the huge popularity of great horses such as Secretariat, Seattle Slew, and Affirmed, each of which won the American Triple Crown (the Kentucky Derby, the Preakness and the Belmont Stakes). From the late 1980s to today, however, another significant decline has occurred, often attributed to the long drought without a Triple Crown winner.

Thoroughbred tracks exist in about half the states. General public interest focuses on major Thoroughbred races such as the Triple Crown and the Breeders' Cup races (which began in 1984). These races offer purses in excess of $1,000,000. State racing commissions have sole authority to license participants and grant racing dates, while sharing the appointment of racing officials and the supervision of racing rules with the Jockey Club. The Jockey Club retains authority over the breeding of Thoroughbreds.


Although science has been unable to come up with a proven breeding system to generate champions, breeders over the centuries have become increasingly successful in breeding Thoroughbreds who perform well at the racetrack by following two basic principles. The first is that Thoroughbreds with superior racing ability are more likely to produce successful offspring. The second is that horses with certain pedigrees are more likely to pass along their racing genes to their offspring.

Male Thoroughbreds (stallions) have the highest breeding value because they can mate with about 40 mares a year. The value of champions, especially winners of Triple Crown races, is so high that groups of investors called breeding syndicates may be formed. Each of the approximately 40 shares of the syndicate entitles its owner to breed one mare to the stallion each year. One share of a champion horse may cost millions of dollars, and a share's owner can resell that share at any time.

Farms that produce foals for sale at auction are called commercial breeders. The most successful are E. J. Taylor, Spendthrift Farms, Claiborne Farms, Gainsworthy Farm, and Bluegrass Farm (all located in Kentucky). Farms that produce foals to race themselves are called home breeders, and these include such famous stables as Calumet Farms, Elmendorf Farm, and Greentree Stable in Kentucky and Harbor View Farm in Florida.


Wagering on the outcome of horse races has been the main source of the sport's appeal since the beginning, and it is the chief reason horse racing has survived as a major professional sport.

All betting at American tracks today is done using a pari-mutuel wagering system, which was developed by a Frenchman named Pierre Oller in the late 19th century. Under this system, a fixed percentage (usually 14%-25%) of the total amount wagered is taken out for racing purses, track operating costs and state and local taxes. The remaining sum is divided by the number of individual correct wagers to determine the payoff on each bet. The projected payoff, or "odds," is continuously calculated and posted on the track toteboard during the open betting period before each race. For example, odds of 2-1 mean that the bettor will receive $2 profit for every $1 wagered ($3 total returned) if the horse wins.
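The pari-mutuel arithmetic described above can be sketched in a few lines of Python. The pool size, takeout rate, and winning-wager figures below are illustrative assumptions, not figures from the text:

```python
def parimutuel_payoff(total_pool, takeout_rate, winning_wagers):
    """Return the total payout per $1 correctly wagered.

    total_pool:      total amount bet on the race
    takeout_rate:    fraction withheld for purses, track costs, and taxes
    winning_wagers:  total amount bet on the winning horse
    """
    net_pool = total_pool * (1 - takeout_rate)  # pool left after the takeout
    return net_pool / winning_wagers            # split among correct wagers

# A $12,000 pool with a 15% takeout leaves $10,200 to distribute.
# If $3,400 was bet on the winner, each $1 wagered returns $3
# ($2 profit), i.e. the toteboard would show odds of 2-1.
print(parimutuel_payoff(12_000, 0.15, 3_400))  # → 3.0
```

Note that the payoff rises as less money sits on the winning horse, which is why the posted odds shift continuously as bets come in.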

Bettors may wager on a horse to win (finish first), place (finish first or second), or show (finish first, second, or third). Other popular wagers are the daily double (picking the winners of two consecutive races), exactas (picking the first and second horses in order), quinellas (picking the first and second horses in either order), and the pick six (picking the winners of six consecutive races).

The history of organised, modern horseracing in Britain dates back to the 17th century. Before then, evidence of the origins of horseracing is sketchy. There are records of horseracing during Roman times, and in the 12th century racing is known to have taken place on public holidays at Smithfield in London, and at Chester, where records exist of "Shrove Tuesday" races.

Horseracing first came under royal patronage during the reign of James I, when the monarch had a royal palace built near Newmarket, then an obscure village. Members of the Royal Court, who had developed a passion for horseracing in Scotland, helped to establish Newmarket as the home of organised horseracing in Britain. Public races were soon set up all over England. Many of the events were held at "Bell Courses", so named because the prize for most races was usually a silver bell.

King Charles I and Charles II maintained horseracing's royal patronage, and the royal association with Newmarket also continued. Charles II was perhaps the most enthusiastic racing royal. He competed in races himself, and founded a series of races known as Royal Plates. His connection with Newmarket survives to this day: the Rowley Mile course near the town takes its name from his nickname of "Old Rowley", itself the name of his favourite hack.

As horseracing became all the rage thanks to its royal connections, the breeding of racehorses developed very rapidly too. This was mainly thanks to the import of Arabian stallions, with which British mares were bred to create the forefathers of the Thoroughbred racehorses we see racing today.

Around the middle of the 18th century, horseracing became the first regulated sport in Britain, thanks to the formation of the Jockey Club. Before this time, most horseraces took the format of "match races" (contested by just two horses), run over much longer distances than Flat racing today.

Gradually, the emphasis on stamina was replaced by racing younger horses over shorter distances. The late 18th century saw the establishment of the Classic races which are still run today. The St Leger, the Oaks and the Derby were all founded between 1776 and 1780.

The arrival of better transport links and other technological innovations in the 19th century led to horseracing becoming a sport watched by millions of people each year. Leading newspapers began to give horseracing far more coverage, and there was a marked increase in the volume of betting on races.

The arrival of professional on-course bookmakers into the sport brought with it different challenges. The Jockey Club reacted by establishing high standards of order, discipline and integrity to ensure the sport continued to prosper.

In the 20th century, horseracing was one of the only sports to continue during both world wars, albeit on a very limited scale. After World War Two, racecourses benefited from the introduction of many technical innovations, such as the photo finish (first used in 1947) and starting stalls for Flat races (1965). In 1961, betting away from racecourses became legalised, and the high street betting shop was born - dramatically increasing the volume of betting turnover.

The arrival of the mass medium of television in the 1950s and 60s put the sport into the nation's living rooms, as horseracing became a regularly televised sport. Even today, horseracing is the second most widely televised sport after football.

In the early 21st century, racecourse attendance has become increasingly popular. After a drop in attendance in the 1970s and 80s, racing posted an attendance figure of 6 million in 2004.
What follows is the first chapter, A History of Horse Racing. This 3-segment history traces horse racing from its development overseas to its beginnings in the United States.

Cocktails with the Sport of Kings

As early as 1140, the first of a long line of kings named Henry tried to improve Hobby horses--pony-sized Irish horses--by importing Arab stallions to give them more speed and power. Throughout the Crusades, from 1096 to 1270, Turkish cavalry horses dominated the larger English warhorses, leading the Crusaders to buy, capture or steal their share of the stallions. After the War of the Roses, which decimated England's horse population, King Henry VII aimed to rebuild his cavalry. Both the king and his son, Henry VIII, imported horses from Italy, Spain and North Africa, and maintained their own racing stable. Henry's Hobbys, as they were called, raced against horses owned by other nobility, leading the word "hobby" to mean a "costly pastime indulged in by the idle rich." It also lends credibility to horse racing being labeled the Sport of Kings, although that phrase originated later, as discussed in Part II.

Henry used tax revenues to maintain his stables, claiming that by breeding winners with winners he could improve the quality of the cavalry. While this was certainly a landmark philosophy in horse racing, Henry was unable to put it into practice; his Master of the Horse, the title given to the director of Henry's racing stable, was not a professional horseman and recklessly crossbred the entire stable. The stable consisted of a variety of international horses with an even wider mix of genes, so well mixed they earned the moniker "cocktails," our current word for a mixed drink. It is not known for sure, but this may be the oldest piece of evidence linking horse racing with drinking!

Anyway, Henry's daughter, Elizabeth I, drastically improved her father's stable during her 44-year reign, dispensing with horses not qualified for racing or the cavalry and moving the best horses to new barns at Tutbury in Staffordshire. Elizabeth kept a close watch on matings and systematically recorded pedigrees. On the advice of her Master of the Stable, the Queen added more Arabian horses to the stable, breeding Arab stallions to Hobby and Galloway (Scottish) mares. When Elizabeth I died, the crown passed to James VI of Scotland, son of Mary, Queen of Scots; he and his son Charles--who became king in 1625--expanded both the palace and the royal racing stables at Newmarket. In 1647 Oliver Cromwell's army defeated Charles's Cavaliers, forcing Charles back to Scotland and allowing Cromwell to capture the royal stables at Tutbury and take inventory; he swiftly sold most of the Royal Mares, keeping fewer than 100 to breed stronger, lighter horses to replace the slower, heavier ones no longer suited for warfare after the development of gunpowder.

Cromwell's focus was on the cavalry, not racing. He passed several laws prohibiting racing and went so far as to confiscate horses, ruining pedigree records in the process. Royalists and Cavaliers were either forced out of England or retreated to their country estates, where they could do two things: maintain their records of horses bred for stag hunting and racing, and wait for the end of Cromwell's repressive religious rule. When Cromwell died and Charles II became king, the wait was over.

Kentucky Horse Racing

This second part of the history lesson introduces you to the world of Kentucky horse racing. How ironic that this chapter of horse racing is influenced not by thoroughbreds but by a different type of horse: the Iron Horse.

Foundations: Bluegrass and Lexington

After the Revolutionary War, immigrants poured into Kentucky and horse racing became ever more of a Kentucky institution. At the 1775 Transylvania Convention Daniel Boone introduced the first bill "to improve the breed of horses in the Kentucky territory." Many Kentucky settlements--with the notable exception of Louisville, which already had a race track--featured a Race Street, a straight stretch located just off the main thoroughfare and named after what went on there. In 1797 Kentucky's first Jockey Club was founded at a formal race meet, then was reorganized as the Lexington Jockey Club in 1809. (Kentucky statesman Henry Clay was a founding member.)

Helping Kentucky establish its foothold on horse racing was Virginia's struggle to maintain racing under the burden of religious censure and bad business practices. The Revolution had damaged Virginia's stock, which was then replenished with low-quality horses from dishonest British horse merchants. Diomed, who never performed up to par after his 1780 Epsom Derby win, was then sent to stud, although his offspring floundered in England. He was then considered useless and thereby suitable for trade to America, and was shipped to Virginia in 1800. For some reason his luck changed in Virginia: each year his crop of champion offspring grew, and his line eventually produced Aristides, who won the first Kentucky Derby in 1875.

The War of 1812 took a heavy toll on horses. Afterwards, racing was slow to recover in the South, and reformers shut it down entirely in the North and East. Lexington, however, always had a track where owners raced their best homebreds; horsemen quickly realized there was no equal to the Bluegrass when it came to nurturing pedigreed stock. Bluegrass, for those who have always wondered, is a deep-rooting, thin-bladed, hardy perennial (a plant that lives for an indefinite number of years), native to the steppes of the Black Sea. Some credit Quaker leader William Penn with its importation, but the seed probably came to America in the pockets of Mennonites ousted from Russia, for whom Pennsylvania was a safe haven before they headed west. Settlers quickly cleared land, maximizing the pasture available for grazing not just for horses but for hogs, sheep and cattle as well.

In 1826, 60 prominent Bluegrass businessmen organized the Kentucky Association for the Improvement of the Breeds of Stock. Thoroughbred breeding records were too jumbled at that point, however, and there was no centralized breed registry comparable to Weatherby's General Stud Book (see Part I) until Lexington native Col. Sanders D. Bruce compiled the American Stud Book in 1868. Nevertheless, the first Kentucky Association races took place at the mile-long, circular Old William Track in Lee's Wood, before a course was laid out nearer to downtown Lexington; it was remodeled in 1832, becoming America's second mile-long, fenced, dirt track. By 1850 landlocked Lexington lacked only one thing: a railroad system with direct access to Ohio River trade. The lack of cheap transportation greatly handicapped farmers, whose incomes were linked to their ability to ship their wares around the country. As trade became pivotal to economic survival, Lexington became reliant on the Louisville & Nashville Railroad, Louisville's "iron horse."

By contrast, Louisville was a swampy, riverfront settlement named for France's King Louis XVI, and had developed from Portland, where the treacherous Falls of the Ohio frequently forced hapless travelers ashore. From the beginning, Louisville was a brawling river town, home to successive waves of German and Irish immigrants making their way up river from New Orleans. They became hardworking citizens but had no desire or money to buy and race thoroughbreds. Townspeople with English roots, however, organized the Louisville Jockey Club and arranged matches down by the river. Another track was built on the east side of the county, and both tracks flourished and gave rise to the notion of the "Louisville races."

History of Churchill Downs Kentucky Derby

Read about the origins of America's most legendary track, Churchill Downs, and follow the history of that track's most prestigious race from its inception in 1875 up through modern times.

This segment of the All Horse Racing history lesson introduces you to the figures most responsible for the Derby being what it is today: the most exciting two minutes in sports and the greatest social event in Kentucky, if not the United States or even the entire world.

Churchill's Colonel

Meriwether Lewis Clark, Jr. was the grandson of former Missouri governor and Lewis and Clark Expedition co-leader General William Clark, and the great-nephew of Louisville founder George Rogers Clark. Most important to our story is that his father married Abigail Prather Churchill, linking him to one of Kentucky's first families. The Churchills had moved to Louisville in 1785 and purchased 300 acres of land, part of which today includes Churchill Downs. When his mother died, Lutie, as M. Lewis Clark, Jr. was nicknamed, was sent to live with his aunt and her sons John and Henry, who inherited most of the property. Lutie developed tastes for custom-made suits, good food, champagne and horse racing--he even sat beside his uncles at the Woodlawn Race Course (see Part II). Through two trips to Europe and through convenient marriages and deaths, Lutie gained a sophisticated appreciation for horse racing as well as strong connections to several Southern racetracks.

Late in 1873, Clark came home from abroad with ideas about how to build a racetrack--and how to eliminate bookmaking by using French pari-mutuel wagering machines. (The "pari" in pari-mutuel is French for "wager"; the system's name means "mutual betting.") The Churchill family, naturally, financially backed Lutie's racetrack venture. With a need to showcase their racing stock, especially in light of the Cincinnati-Lexington railroad connection (also see Part II), a new racetrack was to be built on Churchill land. The Churchill brothers were the entrepreneurs who arranged the new Louisville Jockey Club and Driving Park Association, while Lutie acted as president and on-site manager. Half the Louisville Jockey Club members were local bankers, hotel men and streetcar company owners; the other half were those with large farms and whiskey interests. By selling 320 shares of stock at $100 a share, the Louisville Jockey Club raised $32,000 to build a racetrack. Plans for the track were announced in June 1874: in spring and fall the facility would be devoted to racing; at other times people were free to use the grounds for carriage driving (hence the Driving Park Association in the Jockey Club's name). Noted architect John Andrewartha was retained to design the grandstand and Jockey Club headquarters. Located on the first turn--as far away from the stable area as possible--the rustic two-story retreat featured a kitchen and the only indoor toilet within miles.

Derby Day, 1875

The track opened amid great hoopla on May 17, 1875. Believe it or not, the Kentucky Derby was not planned as the main attraction of the inaugural meet, but when H.P. McGrath's Aristides set a new world's record for the mile-and-a-half distance, "the crowd went wild." Still, racing three-year-olds was a relatively new venture, and there were two other races that day which were bigger than the Derby: the Louisville Cup, discontinued after 1887, and the Gentlemen's Cup Race, in which a member of a recognized jockey club rode his own horse. After only one year, Lutie Clark and the track were considered a success.

At this point it is important to remember the family history between the Clarks and the Churchills, and how the relationship ties into the origins of the track. "Colonel" Lutie Clark became more and more obsessed with his racetrack endeavors, and eventually split from his wife. By 1886 Mary Clark and her three children were living with widower John Churchill. In November 1890, John Churchill, then 71 and a widower for 30 years, remarried. His wife was 36-year-old Tina Nicholas, who was from a Kentucky family as distinguished as the Churchills. Their son, John Rowan Pope Churchill, Jr., was born 10 months later. As early as 1884 the Churchills began writing wills. Henry Churchill's first will left his wife Julia the family house, $60,000 and $800 yearly rent from the track. In 1889, two years before he died, Henry Churchill rewrote his will, leaving his entire estate to his wife. By mutual agreement, the Louisville Jockey Club land became John's property. Had John not remarried in 1890, Lutie Clark might have inherited the land the track was on, but the birth of a healthy Churchill heir changed that.

Lutie Clark was quite a figure, both literally and figuratively. At one point he could barely climb the stairs to the top of the grandstand to monitor races. He was often seen as a bully, quick to judgment and quick to express his opinions. He was also easily drawn into quarrels, often to his own embarrassment:

•In 1879 Clark refused prominent breeder T.G. Moore permission to race at the Louisville Jockey Club, claiming Moore's entry fees were past due. Moore took the announcement as a personal insult and demanded an apology; the Colonel refused and ordered Moore out of his Galt House office. When Moore told Clark he would bear the consequences of his decision not to apologize, Clark knocked Moore to the ground, held a gun on him and ordered him off the premises. Moore left the room, got a gun and shot Clark through the door--the bullet hit Clark in the chest, lodging under his right arm. Moore turned himself in at the police station, but no charges were brought. Moore was subsequently ruled off the track due to the dispute over the fees, yet Clark reversed the decision a year later.

•Several years later, when Clark was working as a steward at a Chicago track, a bartender at Clark's hotel took offense at Clark's calling Chicagoans "thieves and liars," and told Clark so. Clark took off, then returned with a gun, rested the muzzle on the bartender's chest and forced the man to apologize. There were plenty of witnesses, because the story was retold in both Chicago and Louisville newspapers. The Churchill brothers were not pleased with the negative publicity.

In 1891 John Churchill wrote his will. His wife Tina was named administrator; she and their son got everything minus 46 acres allotted to Clark and his three children. Lutie Clark was to choose the best land from property adjoining the Louisville Jockey Club grounds, but John made it clear that Clark was to have no part of the track itself. By the time John died in 1897 Clark was merely serving as a steward at Churchill Downs, the track he helped to originate. His attitude, personality, disregard of track and city expenses and other unappealing personality traits placed him in ill-repute with almost everyone including his family. But his contributions to the track and to the Derby cannot be overlooked; without him the Derby never would have been born and Churchill Downs might never have gotten off the ground. "He brought the first pari-mutuel wagering machines into Kentucky and tried, although without success, to get the public to use them (instead of relying on bookmakers). He presided over the first American Turf Congress, held at Louisville's Galt House Hotel, and wrote racing rules that are still in force today. He worked for a uniform system of weights and pioneered the stakes system, creating the Great American Stallion Stake--on which the present-day Breeders' Cup is modeled. Clark spoke out against betting by officials and reporters, which certainly did not endear him to the press. His only Derby wager, he bragged, was the price of a new hat."

Yet during the 1880s his reputation as an arrogant, quick-tempered man grew. He was not well liked by locals, who took to calling the track "Churchill's downs," a reference to English racing that poked fun at the "highfalutin" president as well as reminding him who really controlled the Louisville Jockey Club's purse strings. In 1883 the local press picked up on the nickname, which has become the track's incorporated, trademarked name.

No doubt Clark, who did his gambling on the stock market, lost heavily in 1893, when the economy collapsed so badly that the New York Stock Exchange closed for ten days. He traveled from city to city, working as a steward at regional tracks. In 1899, fearing a life of poverty and senility, Clark committed suicide. By then the Louisville Jockey Club had gone through virtual bankruptcy and a total change of command: the track was purchased by bookmakers who styled themselves the New Louisville Jockey Club. They ran the Downs for the next eight years, discontinuing the money-losing Fall Meets after 1895. The new owners took over the Churchill land lease and built a new twin-spired wooden grandstand on the west side of the track. There was no clubhouse for members and no separate seating for the ladies, but there was a brand new 60-foot wide, 200-foot long, brick-floored betting enclosure only a stone's throw from the new saddling paddock. The myth of racing as sport was replaced by the reality of a view-and-bet gambling operation. Churchill Downs and the Kentucky Derby had survived their birth, and would need to endure a few more growing pains before finally becoming entrenched in racing, sports, Kentucky and American culture.
A covered bridge is a timber-truss bridge with a roof and siding which, in most covered bridges, create an almost complete enclosure.[1] The purpose of the covering is to protect the wooden structural members from the weather. Uncovered wooden bridges have a life span of only 10 to 15 years because of the effects of rain and sun.[2]

Bridges having covers for reasons other than protecting wood trusses, such as for protecting pedestrians, are also sometimes called covered bridges.

History and development


Early timber covered bridges consisted of horizontal beams laid on top of piles driven into the riverbed. The problem with such structures is that each span is limited by the maximum length of a single beam. The development of the timber truss allowed bridges to span greater distances than those with beam-only structures or arch structures, whether of stone, masonry, or timber.[3]

Early European truss bridges used king post and queen post configurations. Some early German bridges included diagonal panel bracing in trusses with parallel top and bottom chords.[3]

At least two covered bridges claim to be the first built in the United States. Town records for Swanzey, New Hampshire, indicate their Carleton Bridge was built in 1789, but this remains unverified.[4] Philadelphia, however, claims that a bridge built in the early 1800s at 30th Street over the Schuylkill River was the first, noting that investors wanted it covered to extend its life.[5] Beginning around 1820, new designs were developed, such as the Burr, Lattice, and Brown trusses.

In the mid-1800s, the development of cheaper wrought iron and cast iron led to metal rather than timber trusses, except in areas where large timber remained plentiful.

Examples of covered bridges


There are about 1600 covered bridges in the world.[6]
Canada: the Hartland Bridge is the longest covered bridge in the world. In 1900, Quebec, New Brunswick, and Ontario had an estimated 1000, 400, and five covered bridges respectively. By the 1990s there were 98 in Quebec,[7] 62 in New Brunswick,[8] and one in Ontario, the West Montrose Covered Bridge.[9]
China: covered bridges are called lángqiáo (廊桥), or "wind and rain bridges" in Guizhou, traditionally built by the Dong. There are also covered bridges in Fujian.[10] Taishun County, in southern Zhejiang province near the border of Fujian, has more than 900 covered bridges, many of them hundreds of years old, as well as a covered bridge museum.[11][12] There are also a number in nearby Qingyuan County, as well as in Shouning County, in northern Fujian province. The Xijin Bridge in Zhejiang is one of the largest.
Germany: Holzbrücke Bad Säckingen, over the river Rhine from Bad Säckingen, Germany, to Stein, Switzerland (picture), first built before 1272, destroyed and re-built many times.
Switzerland has many timber covered bridges:[13] Bridge over the river Muota, Brunnen, near Lake Lucerne (picture), Berner Brücke/Pont de Berne over the Saane/Sarine, near Fribourg, (picture), Kapellbrücke.
USA: The FHWA encourages the preservation of covered bridges with its Covered Bridge Manual.[14] There are bridges in California, for example at Knight's Ferry,[15] North Carolina,[16] Pennsylvania, Georgia, Oregon, New Hampshire, New York, Ohio, Alabama, Tennessee,[17] Illinois, Indiana, Iowa, Maine, six covered bridges in Maryland,[18] Michigan, Missouri, Connecticut, Massachusetts, Rhode Island,[19] Vermont, Virginia, and West Virginia.

Other covered bridges


The term covered bridge is also used to describe any bridge-like structure that is covered. For example:
The Lovech Covered Bridge in Bulgaria is covered not for structural reasons, but to accommodate shops.
The Pont de Rohan in Landerneau, France is one of 45 inhabited bridges in Europe.
A tubular bridge is a bridge built as a rigid box girder section within which the traffic is carried.[20] Examples include the Britannia Bridge and the Conwy Railway Bridge in the United Kingdom.
A skyway is a type of urban pedway consisting of an enclosed or covered footbridge between two buildings, designed to protect pedestrians from the weather. For example, the Bridge of Sighs in Cambridge, and Oxford's Bridge of Sighs and Logic Lane covered bridge.
A jet bridge is an enclosed, movable connector which extends from an airport terminal gate to an airplane, allowing passengers to board and disembark without having to go outside.[21]
Some stone arch bridges are covered to protect pedestrians or as a decoration as with the Italian Ponte Coperto and Rialto Bridge, and the Chùa Cầu (the Japanese Bridge; picture) in Vietnam.

Covered bridges in fiction

Covered bridges are popular in folklore[22] and fiction.

North American covered bridges received much recognition as a result of the success of the novel The Bridges of Madison County, written by Robert James Waller and made into a Hollywood motion picture starring Meryl Streep and Clint Eastwood. The Roseman Covered Bridge, built in 1883 in Iowa, became famous when it was featured in both the novel and the film. A covered bridge is also prominently featured in Edgar Allan Poe's story "Never Bet the Devil Your Head," and covered bridges serve as plot points in the 1988 comedy films Beetlejuice and Funny Farm.



covered bridge, timber-truss structure carrying a roadway over a river or other obstacle, popular in folklore and art but also of major significance in engineering history. The function of the roof and siding, which in most covered bridges create an almost complete enclosure, is to protect the wooden structural members from the weather. A truss is a basic form in which the members are arranged in a triangle or series of triangles.

There is no evidence of timber-truss bridges, with or without covering, in the ancient world, but the 13th-century sketchbook of the French architect Villard de Honnecourt depicts a species of truss bridge, and the Italian Andrea Palladio’s “Treatise on Architecture” (1570) describes four designs. Several notable covered bridges were constructed in Switzerland. The Kappel Bridge (1333) of Luzern has been decorated since 1599 with 112 paintings in the triangular spaces between roof and crossbeams, depicting the history of the town and the lives of its two patron saints. In the 18th century the Grubenmann brothers of Switzerland built covered timber bridges of considerable length, notably an arch-truss bridge over the Limmat River in Baden with a clear span of 200 feet (61 m).


In North America the covered bridge underwent further evolution. From simple king-post trusses, in which the roadway was supported by a pair of heavy timber triangles, New England carpenters in the 18th century developed bridges combining simplicity of construction with their other economic advantages. The first long covered bridge in America, with a 180-foot (55-metre) centre span, was built by Timothy Palmer, a Massachusetts millwright, over the Schuylkill River at Philadelphia in 1806. Covered timber-truss bridges soon spanned rivers from Maine to Florida and rapidly spread westward. A New Haven architect named Ithiel Town patented the Town lattice, in which a number of relatively light pieces, diagonally crisscrossed, took the place of the heavy timbers of Palmer’s design and of the arch; it could be “built by the mile and cut off by the yard,” in its inventor’s phrase. Another highly successful type was designed by Theodore Burr, of Torrington, Conn., combining a Palladio truss with an arch. Numerous Town and Burr designs remained standing throughout North America into the late 20th century, some dating back to the early 19th century.

To carry the heavy loadings of the railroad, iron was adopted for covered bridges, at first for only part of the truss, in either vertical or diagonal members, and later for the whole truss. Iron was soon replaced by steel, and a principal form of the modern railroad bridge rapidly evolved. The metal truss did not require protection from the weather and consequently was not covered. The building of covered timber bridges, however, continued even in the second half of the 20th century.



Chapter 3. Historical Development of Covered Bridges

A brief perspective on the historical development of covered bridges is provided in this chapter. Additional, in-depth information is available in many of the references. Perhaps the best in-depth discussion of this topic is that authored by J. G. James. In 1982, he prepared a compendium entitled "The Evolution of Wooden Bridge Trusses to 1850."[5] His acknowledgements and apologies humbly explain that he prepared the material as an offshoot of his real love, iron trusses, for which he had prepared an earlier paper. The material was reprinted more recently in the United States, in 1997 and 1998 issues of Covered Bridge Topics.[6] Other sources provide even more distilled and generic summaries of the evolution of truss development, although it is very difficult to accurately portray such rapidly changing, complex events.

Figure 20 shows one of the rare double-barrel covered bridges and one of the oldest in the United States. The sidewalk on the left is a more recent addition.


The following historical context is intended to describe some of the challenges surmounted by those engineers and contractors who have built bridges that spanned distances longer than the longest available timbers.

The Development of Truss Concepts in Europe

Andrea Palladio (1508-1580), a Venetian architect, is usually credited as the first to describe the form of structure we recognize as a truss, as presented in his Four Books of Architecture, more commonly referred to as his Treatise on Architecture, or simply Treatise, circa 1570. Yet some say that he was really only the first to publish information known to many at the time, including examples constructed (and possibly still extant) in Switzerland. In either event, little attention was paid to his writings until the middle of the 18th century, when European nations began building the bridges required for significant transportation systems. Although France had been the leader in early engineering, based primarily on its advances in stone arch theory and construction, the Swiss and Germans were devoting more attention to using timber trusses in their bridges. Most timber bridges in Europe were not covered, although the oft-cited Schaffhausen Bridge over the Rhine River, constructed by the Grubenmann brothers in 1758 with an awkward and inefficient timber roof, was an impressive two-span (52.1-m (171-ft) and 58.8-m (193-ft)) bridge. Many of the other early examples of covered bridges stemmed from efforts to provide roofed galleries, usually over simple pile-and-beam bridges, dating back many centuries.

These early timber covered bridges were somewhat primitive; they consisted of piles driven into the riverbed, with timber beams spanning longitudinally between pile caps. The covers were more for the convenience of users who wanted to linger in the pleasant bridge setting. To span deeper rivers or gorges, the 18th-century builders found piers to be costly, if not impractical, and they began looking for ways to span greater distances. They did not move directly to pure truss forms; they first used some versions of braced beams. Early German and Swiss truss bridges relied on kingpost and queenpost configurations with modifications to add arch action, via a strutted beam. Some of the German bridges included diagonal panel bracing in trusses with parallel top and bottom chords. The Swiss often relied more on ever-heavier timber framing, without many diagonal members. They preferred to build very deep beams, using mechanical connectors between stacked layers, an effort at laminating deep members from smaller members without relying on structural adhesives. Other developments in the evolution of timber truss bridges followed in several other European countries, but early bridge building in the United States really led to the most significant advancements in the theory of truss behavior.
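The kingpost arrangement mentioned above can be made concrete with a little statics. The sketch below is illustrative only: the function name and the numbers are assumptions, not values from any historical bridge. For an idealized pin-jointed king-post truss carrying a single concentrated load at the apex, the rafter compression and the tie (bottom-chord) tension follow directly from joint equilibrium.

```python
import math

def king_post_forces(span_m, rise_m, load_n):
    """Member forces in an idealized king-post truss: two rafters meeting
    at an apex of height rise_m over a span span_m, with a single
    concentrated load load_n at the apex (pin joints, weightless members)."""
    theta = math.atan2(rise_m, span_m / 2.0)               # rafter angle from horizontal
    rafter_compression = load_n / (2.0 * math.sin(theta))  # vertical equilibrium at the apex
    tie_tension = rafter_compression * math.cos(theta)     # horizontal thrust resisted by the tie
    return rafter_compression, tie_tension

# Hypothetical example: 10-m span, 2.5-m rise, 20-kN load at the apex.
compression, tension = king_post_forces(10.0, 2.5, 20000.0)
```

The tie tension works out to load x span / (4 x rise), so halving the rise doubles the horizontal force the tie must carry, which is one reason builders of flatter, longer spans moved beyond simple braced beams toward panel bracing and true trusses.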

Early Truss Construction in the United States

Americans who wanted to travel inland from coastal areas immediately faced the need to span streams of various sizes. Those sites conducive to pile driving were crossed with the classic multiple-span, timber stringer structures. Deeper water demanded longer spans. The gradual developments in Europe provided insufficient guidance to the American pioneers faced with a compelling need to build so many and such demanding structures as fast as they were needed. As might be expected, enterprising and ingenious American craftsmen, business people, and visionaries forged ahead, willing to test a myriad of structure styles to meet the demand for safe waterway crossings. Some of these structures were modeled after examples in Europe, while others clearly included ideas unique to the Americans.

A notable advancement in timber bridge building was the crossing of the Connecticut River at Bellows Falls, VT. Colonel Enoch Hale used a two-span structure with a total length of 111 m (365 ft). The supporting structure was a strutted beam; it took advantage of a natural and striking rock pier in the middle of a natural cascade. The bridge was immediately considered a major accomplishment, because it was the first to provide spans longer than were possible with simple beams.

Hale's bridge was not an isolated case. Many old bridges took advantage of natural features. Figure 21 shows a stone abutment that should last long after the bridge itself is gone.


The First Covered Bridge in the United States

Another American bridge pioneer was Timothy Palmer. He was an extraordinarily energetic, talented, and prolific bridge builder who experimented with progressively flatter structures that relied less on arch action. The bridges Palmer built through his career increasingly used panel-braced timber frames in configurations that can be identified as trusses. After constructing several large bridges, Palmer sought and gained approval to span the Schuylkill River at Philadelphia, PA. His resulting structure was substantially different from earlier bridges built at the same spot, and included three spans (two of 45.8 m (150 ft) and one of 59.4 m (195 ft)) without struts from below. The trusses were built of heavy timber members with bracing, and the bridge was completed in 1805 or 1806, depending on the source. The bridge was expensive and critical to ongoing commerce, so it was enclosed with sides and a roof to protect it from weathering, leading to its name, the Permanent Bridge. Although there are hints of even earlier covered bridges in the United States, this bridge is most often cited as the first.

Patents and Covered Bridges

The United States established its first patent office in 1790. Tragically for the purposes of historical research, a fire destroyed this office in 1836, with the loss of all patent records to that date. Efforts were made to restore as many of the patents as possible, yet many remain lost forever. Hence, any definitive statements of fact regarding the earliest patents related to the development of timber trusses and covered bridges are suspect. Not surprisingly, some historians have made heroic efforts to compile as many of the lost pieces as possible. Richard Sanders Allen deserves special recognition for his compendium of "Thirty-Two Lost Bridge Patents." As his title suggests, even the recovered patent variations alone are too numerous to fully describe in this manual. In an ongoing effort to focus on the surviving authentic examples of North American covered bridges, the following discussion includes only the more prominent developments.

Early North American bridge builders actively pursued patents for their designs in an attempt to gain more bridge construction contracts. A few of the very first patents involved general bridge construction, but by 1797, there were several that involved specific schemes for timber arches. Among others, Timothy Palmer received a patent that year, the details of which remain unknown, but he began construction of his Permanent Bridge only a few years after this, his initial patent.

Theodore Burr obtained the first of his many patents in 1804 or 1806 (again, depending on the source); it regrettably remains among the unrecovered records. His second patent was issued in 1817. Burr's trademark design dates from this patent. He extended the curved lower ribs, which had previously reached only the bottom chords, up along the trusses all the way to the top chord. This superposition of arch and truss forms seems to have been influenced by earlier bridges built in Switzerland. The resulting structure has been described as a combination of conventional trusses (parallel chords with compression diagonals) and supplemental arches. One of Burr's early examples of this bridge form, and probably the basis for his 1817 patent, was his Union Bridge crossing of the Hudson River between Lansingburgh and Waterford, NY, circa 1804. This was a significant structure: 244 m (800 ft) long, with four spans. The structure was rebuilt after being in service for some time, to include a roof and siding. This heavily braced and counterbraced structure exemplified what today is called a Burr arch.

Lewis Wernwag was born in Germany in 1769 and obtained a patent (which is also lost) in 1812. The patent most likely described a structure similar to his crossing of the Schuylkill River at Philadelphia, PA's Upper Ferry. The huge 104-m (340-ft) trussed arch span was quickly termed the "Colossus" and represented a major triumph in bridge construction, with its attractive and apparently efficient use of timber, supplemented with iron rod bracing members. Wernwag owned a metal works company and relied more on early forms of metal connections and components than on traditional timber joinery alone. He received a second patent in 1829 for improvements to his structure. Regrettably, the bridge was lost to fire in 1838.

Ithiel Town (1784-1844) of New Haven, CT, was a prominent architect known for designing many types of buildings. He also planned many bridges, initially experimenting with various truss-arch combinations. However, Town wanted to devise a structure that would require less carpentry skill than was demanded by the intricate joinery details of some of the early bridges. Using only planks joined with round wooden pegs, he began developing a lattice style of truss construction and obtained his first patent in 1820. He was nearly as good a promoter as an inventor, and the lattice truss became very popular, although it has been criticized for its apparent waste of material. This truss layout proved to be very adaptable. It could include heavier members for longer spans, and could even be doubled up to include two layers of web members and three layers of chords for heavy loads, such as those generated by the railroads. A few of his bridges were built with such heavy members that they became identified as a timber lattice, as compared with the more common plank lattice. The most famous of the surviving timber lattices is the Windsor, VT-Cornish, NH, covered bridge over the Connecticut River, which remains one of the longest two-span covered bridges in the United States.

Stephen Long (1784-1864) had a varied background and career. He gained his experience as a timber bridge builder while serving in the U.S. Army. Long was commissioned to locate, plan, and build the Baltimore and Ohio Railroad. He chose to use a standardized truss for all his spans, with timber counterbraces in all the panels. With the addition of timber wedges at the bearing joints between the posts and diagonals, he found that he had better control over the trusses' as-built geometry. He obtained his first bridge patent in 1830. Subsequent printed materials pronounced that these wedges allowed the truss builders to induce member forces in the trusses that effectively prestressed the structure, to employ today's terminology.

William Howe (1803-1852) made a major contribution to the evolution of timber covered bridges by being the first to use metal components as primary members within an otherwise timber truss. He used parallel timber chords, with timber diagonals and counters in the panels, but he used round iron rods for the vertical tension members. The threaded rod ends allowed easy adjustment of the structure, to keep it tight both during and after erection. Many modifications were made over the years to Howe's original design to address various desired details, but his truss was quickly adopted to withstand the heavy loads on railroads. The popularity of the Howe truss continues today; it is often selected when constructing new covered bridges. Howe's modification was a major reason for the short life and reduced popularity of Stephen Long's truss, which was essentially the same but without the iron rod verticals.

Prevalence, Prominence, Demise, and Resurgence

Milton Graton's 1978 book, Last of the Covered Bridge Builders, is a fascinating collection of stories and information from his many years spent rehabilitating existing covered bridges and constructing new examples.

There are many examples of authentic, but new, covered bridges built in the last few decades of the 20th century. Interestingly, of the 30 States that currently have covered bridges, more than half have built a new covered bridge within the past 30 years; some have built several. Although some owners, engineers, contractors, and even bridge users have distinct preferences for specific truss types, these newer bridges have used nine different truss types.


Chapter 18. Historic Considerations With Existing Structures

A number of agencies are involved with regulating and providing guidance for historic preservation. Proposed work on covered bridges must satisfy each of these entities and conform to several aspects of historic preservation. Further, owners of covered bridges are often especially protective of these treasures, as evidenced by the sign in figure 168.


National Register of Historic Places

One of the most important historic preservation issues for work with covered bridges is the National Register of Historic Places. A good reference for information about this topic is provided on the National Park Service's Web site.

Not all historic covered bridges are listed on the National Register, but most people involved in this aspect of historic preservation believe that most extant covered bridges are eligible to be listed, meaning that they would be listed if the effort were made to follow the application process. In practice, therefore, whether a particular bridge is formally listed matters less than one might expect, because most owners and funding agencies treat eligible bridges as historic and do not advocate destroying them.

Being listed on the National Register carries with it limitations regarding what actions are possible when considering rehabilitation. There are certain preferred means and methods for rehabilitating historic covered bridges, as described later in this chapter in the section, "Secretary of Interior's Standards for Historic Preservation."

The National Register's listing process is provided on the National Park Service Web site. There is also a Register of Historic Places in most States, in addition to the National Register. The distinction between being listed on one or both depends on the State.

Historic American Engineering Record

Established in 1969, in cooperation with ASCE and the Library of Congress, the Historic American Engineering Record (HAER) is a National Park Service program responsible for compiling a national archive of America's engineering, industrial, and technological achievements of historic interest. The collection contains nearly 3,500 sheets of measured and interpretive drawings, 72,000 large-format photographs, 61,000 data pages, and 1,000 color transparencies on more than 7,000 sites, structures, and objects. The collection is curated and made available to the public by the U.S. Library of Congress; it was also one of the first made available online as part of the National Digital Library. The collection includes documentation on more than 100 covered bridges.

HAER, in cooperation with FHWA, is conducting a 3-year project to complete documentation on a selection of America's outstanding covered bridges. In addition to the HAER documentation, a selection of bridges is proposed for nomination as National Historic Landmarks, a traveling exhibit is planned, a best practices workshop was held at the University of Vermont on June 5-7, 2003, and a Web page for covered bridges is planned.

State Historic Preservation Offices

In practice, there are varying degrees of interest, knowledge, and interpretation of National Register issues in the various State Historic Preservation Offices. For example, some may believe that it is acceptable to allow floor systems to be removed from extant bridges and replaced by an independent floor system, while others would not tolerate such an act. Therefore, it is important to coordinate with the State office for the project under consideration. Further, it is particularly important to initiate contact with that office early in the project to identify special interests or concerns.

Some projects include unusual features deserving special consideration by the State office, like these ornate timber gates used to restrict this bridge to pedestrians and bicycles (see figure 169).


The Advisory Council on Historic Preservation

The Advisory Council on Historic Preservation is an independent Federal agency that provides a forum for influencing Federal activities, programs, and policies as they affect historic resources. The goal of the National Historic Preservation Act (NHPA), which established the Council in 1966, is for Federal agencies to be responsible stewards of our Nation's resources when actions affect historic properties. The Council is the only entity with the legal responsibility to balance historic preservation concerns with Federal project requirements.

As directed by NHPA, the Council:

•Advocates full consideration of historic values in Federal decisionmaking.

•Reviews Federal programs and policies to promote effectiveness, coordination, and consistency with national preservation policies.

•Recommends administrative and legislative improvements for protecting our Nation's heritage with due recognition of other national needs and priorities.

The Advisory Council normally is not involved directly in establishing the goals of a particular covered bridge preservation project. Instead, the State Historic Preservation Office coordinates with the Council as necessary.

Some historic structures have been converted for unusual use, leading to interesting evaluations of proposed projects. Figure 170 depicts an example of what some historic covered bridges are used for after the end of their useful life carrying vehicular traffic. This is the museum inside the Shushan Bridge in Washington County, NY.


U.S. Secretary of Interior's Standards for Historic Preservation

The U.S. Secretary of the Interior developed "Standards and Guidelines for Archeology and Historic Preservation" under the authority of Sections 101(f), (g), and (h), and Section 110 of the National Historic Preservation Act of 1966. These standards and guidelines are not regulatory and do not set or interpret Agency policy. They are intended to provide technical advice about archeological and historic preservation activities and methods.

A good summary of these standards is provided on an Internet site hosted by the Advisory Council on Historic Preservation. An important distinction is made among the various types of anticipated preservation work: rehabilitation, reconstruction, and restoration.

As taken directly from that site:

Rehabilitation (treatment): the act or process of returning a property to a state of utility through repair or alteration which makes possible an efficient contemporary use while preserving those portions or features of the property which are significant to its historical, architectural, and cultural values.

Reconstruction (treatment): the act or process of reproducing by new construction the exact form and detail of a vanished building, structure, or object, or any part thereof, as it appeared at a specific period of time.

Restoration: the act or process of accurately recovering the form and details of a property and its setting as it appeared at a particular period of time by means of the removal of later work or by the replacement of missing earlier work.

In brief, restoration and reconstruction are treatments that focus on preserving or recovering the historic fabric, with less consideration of the structural needs of the work. Rehabilitation allows less intensive preservation and more consideration of structural needs.

For covered bridges that vehicles continue to use, the structural demands often require strengthening that alters the original construction. That type of work usually involves rehabilitation rather than restoration. Consequently, rehabilitation may result in retaining less original fabric.

There is controversy as to what makes a covered bridge historic. Some believe a bridge is important because it is a physical relic and the material is historic. Some believe that a covered bridge is historic because it embodies a special idea or concept. As noted elsewhere in this manual, various components of an historic covered bridge have probably been replaced at least once during its life (e.g., roofing, siding, flooring); hence, some believe that replacing those items in subsequent repair projects is acceptable. Others place more emphasis on replacement in-kind and only when necessary.

The decision regarding the type of preservation treatment for a given bridge is, therefore, complex and should be made in consultation with the various stakeholders involved: owners, engineers, and State Historic Preservation Officers (or designated representatives), at a minimum.

The engineer's input is vital to establish what work may be necessary for a given desired end. Including one or more local contractors who are experienced with authentic covered bridge work is also critical in the early planning process to ensure the project's success, although contractors hoping to bid on the particular project may decline to participate for fear that a perceived conflict of interest could jeopardize their potential award of the work.

The National Park Service also sponsors the Historic Preservation Training Center (HPTC) in Frederick, MD. The HPTC is devoted to preserving and maintaining historic structures. The center is working to produce best practices guidelines for covered bridges based on the U.S. Secretary of the Interior's Standards.



Blues Music

The South: Negro Music
2003 Piero Scaruffi

While we will never know for sure, it is likely that music originally developed (thousands and thousands of years ago) as a means to coordinate and synchronize collective human movement, such as for hunting or farming. Even today, it comes naturally to start singing a rhythmic song to accompany the activity of a group of people, whether hiking in the mountains or building a roof.
Presumably, great singers held an important social status just like shamans or top hunters. Later, as percussion instruments developed to accompany music, individual percussionists may have also emerged. Then new kinds of instruments, not only percussive, emerged that further enabled virtuoso playing.
Sometime during the evolution of civilizations, "solo music" was invented so that people could admire and appreciate the music of the best singers and instrumentalists. It is likely that, initially, their performances were mainly for the aristocracy and were purely musical. At some point it came naturally to merge solo music and solo poetry to entertain the aristocracy (and later the masses) with stories that people were familiar with. During the classic age of Greek theater, these stories became more abstract and metaphorical, and the music became less straightforward. Christianity further bent the purpose of music to sing the praise of the Lord and to call the faithful to prayer. Music, basically, became the vehicle for a message. The message (even when it was an epic) was not just a story, but a whole ideological system.
At some point ordinary people started creating songs for their own consumption, or "folk" songs. These songs were about the joys and sorrows of rural life.
The music for the aristocracy became more and more sophisticated, both because the aristocracy could buy the best instruments on the market and because it could hire the best singers and instrumentalists in the kingdom. It came to be called "classical" music. Through the invention of polyphony, it greatly reduced the emphasis on rhythm, which came to be considered a rather primitive and plebeian element.
On the contrary, folk music relied heavily on rhythm, both for dancing and for singing.
Rhythm became, in a sense, the main discriminant between classical and folk music.
That was the situation when European music (both classical and folk) arrived in the Americas. In the melting pot of the Americas, Europeans were forced to admit for the first time that there were many different kinds of folk music. While the racial instinct was to separate the western European forms (and the Anglo-Saxon ones in particular) from the others, it was only a matter of centuries before the boundaries were blurred. The most traumatic confrontation for Europeans was the existence of African music. Long dismissed as an oddity of the animal kingdom (pretty much like the sounds of animals), African music managed to coexist for two centuries next to European music before making inroads into white American society. During the 19th century several elements of African music began to percolate into white folk music. (This phenomenon took place in the Americas; no comparable African influence reached European society until much later.)
Again, rhythm was the key discriminant factor. Rhythm was not an African invention, but certainly the African polyrhythms were wildly different from the linear rhythms of European folk music.
The effect of African music on white music was initially barely felt, but it was going to become the main factor fueling innovation. In fact, the folk music of Europeans had barely changed at all over the centuries, but it was going to change dramatically (with changes picking up faster and faster speed) once African-American music began to influence it.
The fusion of European folk music with African folk music was the most important source of innovation for music in the western world after the Ars Nova.
The status of European classical music remained a bit odd. It steadfastly refused to accept African music (still regarded as some form of inferior animal expression) and all its mulatto offspring. Thus the gap between classical and folk music increased dramatically from the 19th century until the Sixties.

Negro Music: the African Perspective

"African" music is actually quite a pointless term. Music varies across Africa much more than it does across Europe (precisely because no single musical culture came to dominate and spread across the continent). Most slaves traded with the Americas came from West Africa, whose music was completely different from the music of other parts of Africa. It was also quite different from the way European music had developed since Greek times.

If the core of European music was to embellish a melody via the counterpoint of a number of melodic instruments, and incidentally set it to a rhythm (which was sometimes specified only in vague terms such as "adagio" or "allegro"), the core of West African music was to color a rhythm via the counterpoint of a number of rhythmic instruments, and incidentally dress it up with a melody. Thus the key elements of West African music were rhythm and timbre, not melody and harmony. Instead of melodic counterpoint, West African music was about rhythmic counterpoint.

Just as European melodism was an extension of the Indo-European languages, West African percussionism turns out to be an extension of the West African languages, which are mostly based on timbre and rhythm. West African percussive music was nothing but a simulation of the spoken language. In a sense, West Africans learned how to play music (music in which rhythmic and timbral subtleties play a key role) while they were learning to speak. West African percussive music had the same "semantic" value as European melodic music, except that the axis of meaning was perpendicular.

Initially the European colonists of North America had no intention of converting the slaves to Christianity: the fact that the slaves were "pagans" was the moral justification for slavery. They were not "Christians", and in those days "Christian" meant "human". People who were not "Christian" were inferior beings. The Methodist and Baptist revival that started in 1734 with the "Great Awakening" of Massachusetts created a new ideology of slavery: slavery was justified because it was a means to save the pagans from certain damnation. Therefore the conversion of pagans slowly became not only welcome but even mandatory. Slavery came to be viewed (in fanatically religious quarters) as a crusade for saving souls. The "spirituals" (spiritual hymns) were the first original form of music created by the slaves of North America. The canon developed via the adaptation of African rituals to Christian rituals and via the adaptation of European liturgical music to the musical system of West Africa. Needless to say, the development of "negro" spirituals picked up speed tremendously when the first black preachers started practicing, because then the preacher and his audience would simply turn their "call and response" relationship into musical interaction. Because blacks were segregated from whites, they had to be given their own preachers (often slaves themselves), who would preach to a black audience only. In the 1750s black preachers were already ubiquitous. Black congregations were formed in the 1770s.

A scale is the ordered sequence of notes used in a musical system. European music used the diatonic scale (divided into eight tones, the eighth being a repetition of the first tone an octave higher) or, better, its extension, the chromatic scale (twelve tones per octave). West African music used a pentatonic scale (comprising only the first, second, third, fifth, and sixth tones of a diatonic scale). Two scales developed from the merger of European and African music: the deviant pentatonic scale of "spiritual" music and the expanded diatonic scale of "blues" music. All of black music in the USA would develop from these two fundamental scales. The black folk music that remained most closely related to its West African roots was the work song.
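The relationship between the diatonic and pentatonic scales described above is easy to show in code. The following is a minimal sketch under simplifying assumptions: it uses the fixed equal-tempered 12-note grid and the major pentatonic built from degrees 1, 2, 3, 5, and 6, and so cannot capture the "deviant" and "expanded" scales of spirituals and blues, whose bent notes fall between the fixed tones.

```python
# Semitone offsets of the seven diatonic (major) scale degrees within an octave.
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_DIATONIC = [0, 2, 4, 5, 7, 9, 11]   # degrees 1-7; degree 8 repeats degree 1
PENTATONIC_DEGREES = [1, 2, 3, 5, 6]      # the diatonic degrees the pentatonic keeps

def major_pentatonic(root):
    """Build the major pentatonic scale on `root` by keeping diatonic degrees 1,2,3,5,6."""
    start = NOTE_NAMES.index(root)
    return [NOTE_NAMES[(start + MAJOR_DIATONIC[d - 1]) % 12] for d in PENTATONIC_DEGREES]

# C major pentatonic: the fourth and seventh degrees (F and B) are dropped.
scale = major_pentatonic("C")  # ["C", "D", "E", "G", "A"]
```

Dropping the fourth and seventh degrees removes the two semitone steps of the diatonic scale, which is why pentatonic melodies from very different traditions can sound consonant against the same harmony.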

In 1776 the USA declared its independence from Britain.

Negro Music: the European Perspective

The Atlantic slave trade, started by the Portuguese in the 16th century and turned into the engine of North American growth by the British in the 18th century, left the newly born USA with its most embarrassing legacy: one million slaves. By the time of the Civil War, they had increased to more than four million.

The African population posed a moral dilemma to the very religious crowds of European colonists: how to turn the African pagans into good Christians. The missionaries who took on that crucial task were the first white folks to realize the outstanding musical talent of the black race. In Africa, music was a social phenomenon that accompanied every activity. The same was roughly true of white folk music, but that music survived mainly in poor rural communities. The rich white plantation owners had adopted the stifled musical habits of their European counterparts (music as a formal event), thus repudiating music as a commentary on daily life. The Africans of the plantations hung on to their traditions, and the missionaries found it convenient to adapt the Christian liturgy to the musical mind of the Africans. It became normal for black congregations to accompany sacred ceremonies with music that was, de facto, imported from Africa. For example, the polished, linear vocal harmonies of European singing were replaced by syncopated vocal harmonies with all sorts of rhythmic subtleties. This "spiritual" music was the first instance of African music adapted to the social environment of the New World (in this case, the church, something that did not exist in Africa, and the lyrics of the Gospels). It was not difficult for the individual slave to identify with the martyrdom of Jesus, and for the community as a whole to identify with the odyssey of the Jews.

The other kinds of musical expression, mainly work songs and party dances, were closer to the original music of Africa, because the same activities (work and party) existed in Africa. Go Down Moses is an example of a "jubilee song", a song for the "jubilees", or plantation parties. "Hollers" and "arhoolies" (workers of, respectively, cotton and wheat plantations) developed work songs that were synchronized with the rhythm of work.

All three kinds of music (religious, work and party) shared the same characteristic: they were basically hypnotic, entrancing both the singer and the listeners. Whether ecstatic (religious), mournful (work) or exuberant (party), the music of the Africans tended to be repetitive, rhythmic and deeply felt. Its "hypnotic" effect perhaps expressed the resigned acceptance of a tragic destiny. At the same time it was much more emotional than white folk music, a fact that perhaps expressed the hope of a less tragic future. This emotion led to individual improvisation over collective themes. The combined effect of the hypnotic format and the emotional content created loose structures that could extend for indefinite periods of time, in a virtually endless alternation of repetition and improvisation.

Three more aspects of black music were innovative by the standards of white music. The rhythm was generally syncopated, and (at the beginning) provided only by hand clapping and foot stomping. The singer employed a broad vocal range and bridged notes in an acrobatic manner, thus introducing a freedom unknown to western harmony. The black equivalent of counterpoint was mostly implemented in the "call and response" format: a leader intoned a melody and a choir repeated it in a different register, sometimes in a different tempo, and often bending the melody slightly. The role of spontaneous improvisation in black music clearly contrasted with the clockwork precision of western harmony. And the open-ended structure of black music contrasted with the linear progression of western music.

Originally, slave music was purely vocal. Many blacks of the plantations were skilled fiddlers, but that was a job they mostly performed for the white masters, not for their own community: they played for the dancing parties of their masters.

The African heritage was mainly preserved in the South. The blacks of the North were much better integrated into white society in the 19th century. For example, the first black theater had opened in New York as early as 1821 (the "African Grove", at the corner of Bleecker and Mercer in Greenwich Village, which was then a bit outside New York proper). Francis Johnson was a respected composer of orchestral music in Philadelphia (he performed the first "concert à la Musard" in the USA in 1838). And Elizabeth Greenfield, also in Philadelphia, became a respected concert vocalist in 1851. It was in the South that the blacks, barred from integrating into white society, had to "content" themselves with their African traditions.

Theoretically, the Civil War, which ended in 1865, freed the African slaves, and, in fact, the first collection of black songs, Slave Songs of the United States, was published shortly afterwards (1867). In practice, it did little to improve the condition of the black man: same job, same discrimination. Even for the blacks who left the Southern states, the cities of the North promised freedom but mostly delivered a different kind of slavery. On the other hand, the end of slavery meant, to some extent, the dissolution of the two traditional meeting points of the African community: the plantation and the church.

Music remained the main vehicle to vent the frustration of a people, but the end of slavery introduced the individual: instead of being defined by a group (the faithful or the workers), the black singer was now free to define himself as an individual. His words and mood still echoed the condition of an entire people, but the solo singer represented a new take on that condition: the view of a man finally able to travel, no longer a prisoner of his community, although sometimes lonelier. The songs of a black person were the diary of his life (road, train, prison, saloon, sex), often an itinerant life, as opposed to the diary of a community (plantation, church).

Solo singers needed instruments. The banjo, an African instrument ("banhjour"), came on the ships. The guitar and the harmonica were adopted from the whites. Eventually, the guitar came to be the second "voice" of the bluesman. Instead of addressing an audience in a church or plantation, and interacting with it, the black songster was interacting with his guitar. The blues became a dialogue between a human being and his guitar. The itinerant black "songsters" of the time of the Reconstruction, armed with the guitar, adapted the songs of the hollers to the narrative format of the British ballad (for example, John Henry).

Although similar in tone, black and white folk music differed profoundly. Both were realist, but white folk music created "epics" out of ordinary events, while the "blues" was almost brutal in its depiction of real life. The landscape of the blues was one of prisons (Midnight Special) and dusty roads. "Love" was simply sex, not a romantic emotion. Death was a fact of life, not a step towards eternal life. On the other hand, the existential quality of the music was stronger in the blues. The blues was, first and foremost, a state of mind. No matter how direct, death and sex ultimately harked back to prisons and saloons, which in turn harked back to poverty and misery. The unbridled materialism of the blues was not self-glorification but self-pity. The blues was, fundamentally, the sense of an unavoidable fate (both individual and collective).

The quintessence of the blues was pain, but the art of the blues often consisted in bridging the chasm between tragedy and (broadly speaking) comedy.

Musically, a blues is twelve bars long in 4/4 time (although this may have been a later development). Its melody is shaped by a scale that adapts the African five-note scale to the western seven-note scale, introducing two "flattened" notes, the "blue" notes.
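These relationships can be sketched in code. This is only an illustration, not from the source: the note names, the semitone offsets, and the conventional twelve-bar chord pattern are standard music-theory conventions, here assuming a tonic of C.

```python
# Pitch classes as semitone offsets from the tonic (C assumed here).
NOTE_NAMES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]

MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]   # the western seven-note scale
PENTATONIC  = [0, 2, 4, 7, 9]          # a five-note scale of the kind described above
BLUE_NOTES  = [3, 10]                  # the flattened third and flattened seventh

def spell(semitones, tonic=0):
    """Map semitone offsets to note names relative to the tonic."""
    return [NOTE_NAMES[(tonic + s) % 12] for s in semitones]

# The conventional twelve-bar chord sequence on the I, IV and V degrees:
TWELVE_BAR = ["I", "I", "I", "I", "IV", "IV", "I", "I", "V", "IV", "I", "I"]

print(spell(MAJOR_SCALE))  # the plain major scale on C
print(spell(BLUE_NOTES))   # the two blue notes on C: Eb and Bb
```

On a C tonic the two blue notes come out as Eb and Bb, the notes that fall "between" the major scale's E and B and give the blues melody its characteristic ambiguity.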

Black music was originally meant as music for blacks only, not only ignored but often despised by the white community. The demographic movement of the economic boom that followed the reconstruction after the Civil War helped export black musicians and their music to white cities, and tear down some of the cultural walls between the two communities.

By far, the elements that sounded most outrageous to white ears were the obscenity of the lyrics and the indecent movements. Sex was the dominant theme of "negro" ballads, and the lyrics were often explicit. Black songsters liked to boast about their sexual performances. This was not so much an African tradition as a plantation tradition: the slave holders used to encourage extramarital intercourse among slaves. Thus black people came from environments in which sexual promiscuity was more than tolerated: it was ordinary life. The other "indecent" element was the Christian ceremonies, which looked more like pagan ceremonies, in which loud and inebriating singing mixed with hysterical dancing and orgasmic howling. Black churches encouraged the exhibition of mystic fervor through savage body language, but white folks saw it as evidence that blacks were not civilized beings.

As blues music was heard and "consumed" by white folks, it became more aware of its own meaning. It also had to somehow "hide" that meaning (e.g., the sexual one), which was not compatible with the values of white society. Thus the bluesmen indulged in "double talk" to confront themes that white people shunned. The blues became more metaphorical and allegorical (Bollweavil Blues, Stewball, Uncle Rabbitt, The Grey Goose).

As ghettos sprouted up in all big cities, the topics of blues music adapted to the urban landscape, and began to depict life in the ghetto. But blues music was never meant to reflect the rhythm of urban life. De facto, the ghetto remained unsung till the 1970s, when rap was born.

The first venue for black music was the "medicine show", the itinerant variety show that accompanied the "doctors" in their quest for gullible customers (thus the slang term "physick wagon"). The "doctors" used black musicians, actors and dancers as cheap entertainment to draw an audience to their sales pitches. Eventually, the "medicine show" became an art in itself, touring several counties and even states, often augmented with magicians, acrobats, etc.

In Memphis in 1907 the first permanent theater for medicine shows was set up by Fred Barrasso. This led to the formation of the T.O.B.A. ("Theater Owners Booking Association"), a network of theaters specializing in "negro" shows. Those black musicians, abused and underpaid by their employers, were nonetheless the first black professional entertainers.

Minstrel shows, although run by white entertainers, began to hire black singers after the Civil War, and eventually became mainly black. White entrepreneur John Isham organized the first itinerant black revue (basically, a better organized minstrel show), "Jack's Creole Burlesque Company", in 1890. One such revue even toured Europe in 1897. These revues maintained the three-part format of the minstrel show (opening skit, specialty acts and finale), but were, for all practical purposes, variety shows with orchestras and choirs.

New York: the Birth of a Black Nation
2003 Piero Scaruffi

The turmoil in music reflected the emergence of black intellectuals who challenged the stereotypes of white culture. At the end of the Civil War, the biggest problem faced by the USA was how to deal with the millions of uneducated blacks, who were still dependent on white people for their livelihood. For example, in 1867 a white abolitionist of Nashville (Tennessee), Clinton Bowen Fisk, founded Fisk University with the aim of educating the former slaves and their children. After the death of Frederick Douglass, the only major black figure of the abolitionist era (an escaped slave who supported both John Brown and Abraham Lincoln), Booker Taliaferro Washington, the son of a Virginia slave, became the leading black intellectual of the Reconstruction era. He believed that education would give blacks a chance in American society. In an 1895 speech, he called on blacks to accept segregation and to invest in their future, so that some day blacks would be equal to whites. But a decade later along came William Edward Burghardt Du Bois, who instead organized the "Niagara Movement" in 1905 with the explicit aim of creating a platform to fight segregation. When, in 1909, several white and black activists founded the "National Association for the Advancement of Colored People" (NAACP), Du Bois became one of its leaders. The problems faced by the black community in those days were quite basic: white communities were expelling and lynching blacks by the hundreds (at the peak, in 1892, more than 200 blacks were lynched in one year). In 1916, Jamaica-born Marcus Garvey moved to New York and launched a new black nationalist and separatist movement. Unlike his predecessors, he believed that black civilization was actually superior to white civilization, and that blacks should return to Africa.

Thanks to the efforts of the previous decades in educating blacks, the 1920s witnessed a "Harlem Renaissance", led by blacks such as poet Langston Hughes. Music was only one realm in which black culture was being accepted during the 1920s.

The commercial recording of black music was a direct consequence of this "black renaissance". Realizing that black artists were becoming a lucrative business (Scott Joplin in ragtime, William Handy in blues, Eubie Blake in pop, Louis Armstrong in jazz), and that record labels were still reluctant to let black artists make records, Atlanta's black songwriter Harry Pace (a former partner of William Handy) opened his own label in Harlem, the "Pace Phonograph Company" (later "Black Swan Records"), in 1921, employing a young Fletcher Henderson as the studio pianist. Pace's success was such that white-owned labels such as Paramount (Alberta Hunter, Ida Cox, Charley Patton, Blind Lemon Jefferson) and Columbia (Bessie Smith, Ethel Waters) started competing fiercely for black recording artists, and in 1924 Paramount bought the Black Swan catalog altogether. Black Swan's brief adventure legitimized the black recording artist, and opened the floodgates to the recording of black music throughout the country.

New Orleans, Kansas City, Memphis

The urban development of black music in the 20th century owed a lot to the sin cities of the south: New Orleans, Kansas City and Memphis. Their saloons, clubs, brothels, steamboats and speakeasies sponsored countless black musicians who migrated from the countryside.

New Orleans, at the mouth of the Mississippi river, the old French city that had exhibited an amoral opulence before the Civil War, was a melting pot with no equal in the South (Blacks, Italians, Caribbeans, French-speaking white and black Creoles, native Americans, Mexicans, and descendants of the Europeans). Its port was an infinite source of cultural exchanges with the rest of the world. Like most seaports, New Orleans boasted a colorful night life of prostitution, gambling and entertainment ("dixies"); and the "laissez faire" (laid-back) attitude of the Caribbean-French population made it even more tolerant than most seaports. Untouched by the industrial revolution and less socially stressed than other plantation-oriented economies, New Orleans was able to retain the traditions of its various ethnic groups while they were rapidly being annihilated in the rest of the USA. Esoteric rituals, tribal dances, pagan festivals, funeral marches and all sorts of parties continued to exist well into the 20th century. Its "Mardi Gras" carnival was a hybrid musical celebration that mixed African, French and Native American traditions in its colorful parades and marching bands. New Orleans, a commercial city, was more tolerant towards the blacks than the other southern cities: when the blacks were emancipated, it was a much friendlier place for a black musician than most of the South. In 1897 the puritan government of the city had created "Storyville", the red-light district, nicknamed after the politician who had the idea, a district that quickly became a city within the city. Since most establishments had a musician entertaining the customers, "Storyville" became the biggest employer of black musicians outside of Broadway. When "Storyville" was shut down in 1917, black musicians spread all over the country, bringing with them bits and pieces of New Orleans' sound. One of New Orleans' bands, the Original Creole Band, exported a new kind of music that would be called "jazz".

Kansas City had experienced its first wave of black immigrants after the disputed presidential election of 1876, whose resolution in 1877 basically killed any remaining hope of sincere black integration in the South. Blacks from states such as Louisiana and Mississippi emigrated by the thousands towards more tolerant places such as Kansas City. During the corrupt reign of Tom Pendergast (from 1925 till 1939, when he was convicted of tax evasion), the illegal clubs of Kansas City flourished, virtually mocking the "Prohibition" of alcohol (1920-33). The booming industry of alcohol and gambling turned out to be a bonanza for black musicians, who became the backbone of the entertainment machine.

Memphis, an important inland port on the Mississippi and an important railway node between New Orleans and Chicago, made wealthy by the cotton industry, was the natural link between the rural South and the industrial North. Memphis was often the first step on the way out of the plantations for the blacks who wanted to migrate north. Many of them ended up playing or singing on Beale Street, the center of the night life. When nylon replaced cotton, Memphis began to decay, and blacks joined the mass migration towards Chicago, the next major stop on the railway.