Various History

Discussion in 'History' started by StrangerInAStrangeLand, Jun 17, 2014.

  1. StrangerInAStrangeLand SubQuantum Mechanic Valued Senior Member

    Messages:
    15,322
    Interstate Highway System

    On June 29, 1956, President Dwight Eisenhower signed the Federal-Aid Highway Act of 1956. The bill created a 41,000-mile “National System of Interstate and Defense Highways” that would, according to Eisenhower, eliminate unsafe roads, inefficient routes, traffic jams and all of the other things that got in the way of “speedy, safe transcontinental travel.” At the same time, highway advocates argued, “in case of atomic attack on our key cities, the road net [would] permit quick evacuation of target areas.” For all of these reasons, the 1956 law declared that the construction of an elaborate expressway system was “essential to the national interest.”

    “The Last Call of the Wild”

    Today, there are more than 250 million cars and trucks in the United States, or almost one per person. At the end of the 19th century, by contrast, there was just one motorized vehicle on the road for every 18,000 Americans. At the same time, most of those roads were made not of asphalt or concrete but of packed dirt (on good days) or mud. Under these circumstances, driving a motorcar was not simply a way to get from one place to another: It was an adventure. Outside cities and towns, there were almost no gas stations or even street signs, and rest stops were unheard-of. “Automobiling,” said the Brooklyn Eagle newspaper in 1910, was “the last call of the wild.”

    Did You Know?
    At 3,020 miles, I-90 is the longest interstate highway. It connects Seattle, Washington, with Boston, Massachusetts.

    A Nation of Drivers

    This was about to change. In 1908, Henry Ford introduced the Model T, a dependable, affordable car that soon found its way into many American garages. By 1927, the year that Ford stopped making this “Tin Lizzie,” the company had sold nearly 15 million of them. At the same time, Ford’s competitors had followed its lead and begun building cars for everyday people. “Automobiling” was no longer an adventure or a luxury: It was a necessity.

    A nation of drivers needed good roads, but building good roads was expensive. Who would pay the bill? In most cities and towns, mass transit–streetcars, subways, elevated trains–was not truly “public” transportation. Instead, it was usually built and operated by private companies that made enormous infrastructural investments in exchange for long-term profits. However, automobile interests–such as car companies, tire manufacturers, gas station owners and suburban developers–hoped to convince state and local governments that roads were a public concern. That way, they could get the infrastructure they needed without spending any of their own money.

    Their campaign was successful: In many places, elected officials agreed to use taxpayer money for the improvement and construction of roads. In most cases, before 1956 the federal government split the cost of roadbuilding with the states. (One exception was the New Deal, when federal agencies like the Public Works Administration and the Works Progress Administration put people to work building bridges and parkways.) However, this funding arrangement did not get roads built fast enough to please the most ardent highway advocates.

    The Birth of the Interstate Highway System

    Among these was the man who would become President, Army General Dwight D. Eisenhower. During World War II, Eisenhower had been stationed in Germany, where he had been impressed by the network of high-speed roads known as the Reichsautobahnen. After he became president in 1953, Eisenhower was determined to build the highways that lawmakers had been talking about for years. For instance, the Federal-Aid Highway Act of 1944 had authorized the construction of a 40,000-mile “National System of Interstate Highways” through and between the nation’s cities, but offered no way to pay for it.

    The Federal-Aid Highway Act of 1956

    It took several years of wrangling, but a new Federal-Aid Highway Act passed in June 1956. The law authorized the construction of a 41,000-mile network of interstate highways that would span the nation. It also allocated $26 billion to pay for them. Under the terms of the law, the federal government would pay 90 percent of the cost of expressway construction. The money came from an increased gasoline tax–now 3 cents a gallon instead of 2–that went into a non-divertible Highway Trust Fund.

    The new interstate highways were controlled-access expressways with no at-grade crossings–that is, they had overpasses and underpasses instead of intersections. They were at least four lanes wide and were designed for high-speed driving. They were intended to serve several purposes: eliminate traffic congestion; replace what one highway advocate called “undesirable slum areas” with pristine ribbons of concrete; make coast-to-coast transportation more efficient; and make it easy to get out of big cities in case of an atomic attack.

    The Highway Revolt

    When the Interstate Highway Act was first passed, most Americans supported it. Soon, however, the unpleasant consequences of all that roadbuilding began to show. Most unpleasant of all was the damage the roads were inflicting on the city neighborhoods in their path. They displaced people from their homes, sliced communities in half and led to abandonment and decay in city after city.

    People began to fight back. The first victory for the anti-road forces took place in San Francisco, where in 1959 the Board of Supervisors stopped the construction of the double-decker Embarcadero Freeway along the waterfront. During the 1960s, activists in New York City, Baltimore, Washington, D.C., New Orleans and other cities managed to prevent roadbuilders from eviscerating their neighborhoods. (As a result, numerous urban interstates end abruptly; activists called these the “roads to nowhere.”)

    In many cities and suburbs, however, the highways were built as planned. All told, the Interstate Highway System is more than 46,000 miles long.
     
    Last edited by a moderator: Jun 29, 2014
  2. StrangerInAStrangeLand SubQuantum Mechanic Valued Senior Member

    Messages:
    15,322
    History of the US Highway System

    From Dirt Paths to Superhighways

    Before the Interstate Highway System brought fast, limited-access highways to the United States, there was, and still remains, another nationwide system of highways that enabled travelers to follow standardized routes to any part of the nation. This system, known as the United States Highway System or simply as "US" highways, represented the first time in history that a national standard was set for roads and highways.

    This system was created by the Federal Aid Highway Act of 1925 as a response to the confusion created by the 250 or so named highways, such as the Lincoln Highway or the National Old Trails Highway. Instead of using names and colored bands on telephone poles, this new system would use uniform numbers for interstate highways and a standardized shield that would be universally recognizable. The most important change was that this new system would be administered by the states, not by for-profit private road clubs. Even then, people decried the idea of giving roads numbers, since they felt numbers would make highways cold and impersonal.

    The Automobile

    At the beginning of the twentieth century, automobiles were a novelty that could be enjoyed only by the very rich. Most Americans contented themselves with either using the horse and buggy or taking the railroads when they needed to go on long trips. Getting around in large cities was fairly easy due to comprehensive networks of streetcars and subways. Even though it is hard to believe today, especially in California, it was generally thought that autos would never catch on. In short, in the early part of the century, there was simply no need for a good system of roads.

    Henry Ford changed the status quo with his revolutionary production line techniques. He took the idea of standardization and applied it to creating standard parts for automobile manufacturing. Cars could be produced cheaply, although a few sacrifices had to be made. Ford once said that "you can get the Model T in any color you like as long as it's black." For the first time in history, workers in a factory could afford the products they manufactured. The Model "T" soon became a common sight throughout the United States. A testament to their popularity is that over 16.5 million were sold, a record which was not broken until 1972 when the Volkswagen Beetle surpassed that mark. Needless to say, this created a huge demand for good roads.

    The Rise of Named Highways

    At the beginning of the century, the supply of good roads was nowhere near the growing demand. Most roads at the time were little more than improved wagon trails. In fact, many of the major "highways" were actually vestiges of old trails, such as the Oregon Trail or Santa Fe Trail. There were paved highways, but most were cobblestone and almost all were in major cities. Good-roads organizations appeared to remedy this situation. The American Automobile Association and the Automobile Club of Southern California (originally separate organizations) were formed to promote better roads. Additionally, many trail associations were created to address the need for marked interstate highways; this was the birth of the named highways. The Lincoln Highway, from New York to San Francisco, was the first, and by the early 1920s many highway organizations had formed, each placing and promoting its own routes. By 1925 there were over 250 named highways, each with its own colored signs, often placed haphazardly, which created great confusion.

    Several problems arose with the named highways. The lack of a central organization to dictate the placement of interstate highways left the door open for self-serving organizations to "relocate" the famous named roads so they would pass through their cities. More frequently, though, the lack of coordination between the states through which the transcontinental routes ran caused confusion, since a route was often not even straight. The need for a system of standardized interstate highways had become clear.

    The Federal Government Becomes Involved

    By 1924 it became clear that a single, unified system of highways was necessary. In that year, the American Association of State Highway Officials (AASHO, today's AASHTO) passed a resolution requesting the Secretary of Agriculture (the Bureau of Public Roads was in this department at the time) to investigate the possibility of creating a system of standardized highways.

    Giving highways throughout the United States a standard numerical designation was a radical idea, but one that fit in with other innovations of the time.

    For example, by the 1920s road building was also becoming a standardized process. Road-building technology advanced rapidly, allowing good roads to be built just about anywhere. Of course, by today's standards these roads were inadequate in every respect, including width, sight distance and grade. At the time, having a paved road going through places such as the Cajon Pass and over the Ridge Route was an incredible boon.

    The Beginning of the End

    The passage of the Interstate Highway Act in 1956 spelled the end of the California US highways as the state's leading routes. The proposed system would supplant many of the US routes with divided Interstate highways, obviating the need for them. California had been pushing since the late 1930s for divided highways and a comprehensive freeway and expressway system, and by the late 1950s many of the US routes had already been converted to freeways and expressways or were slated to be. It appears the original plan was for the Interstate highways to be co-signed and routed with their corresponding US highways, and from about 1960 to 1964 this is exactly what the Division of Highways did.

    Despite this effort, it was clear that there was no purpose in maintaining many of the old US highways. Legislation was enacted that would change the face of California's highway system. One change was that every highway would carry the same sign and legislative number. For example, US 99 had been Legislative Routes 3 and 4 but was Sign Route US 99. This legislation also eliminated over half of the existing US highways and renumbered and adjusted many state highways. Portions of US 99 became Legislative Route 5 and were signed as Interstate 5, while about half of its length became Sign Route CA 99 and was Legislative Route 99 in the books.

    Almost half of the US highways in California were taken off the map. The list below shows their disposition.
    •US 6: Shortened to Bishop and replaced by SR-1, SR-11, I-5, SR-14.
    •US 40: Eliminated and replaced by I-80.
    •US 66: Shortened to Pasadena and replaced by SR-2, SR-11.
    •US 66 Alternate: Eliminated (no state routes replaced it).
    •US 70: Eliminated and replaced by I-10.
    •US 91: Eliminated and replaced by SR-1, SR-91, I-15.
    •US 99: Initially shortened to Los Angeles and replaced by SR-111, SR-86, I-10.
    •US 101: Shortened to Los Angeles and replaced by I-5.
    •US 101 Alternate: Eliminated and replaced by SR-1.
    •US 101 Bypass: Eliminated and replaced by rerouted US 101.
    •US 299: Eliminated and replaced by SR-299.
    •US 399: Eliminated and replaced by SR-33, SR-119, SR-99.
    •US 466: Eliminated and replaced by SR-46, SR-99, SR-58, I-15.

    More US highways were to be decommissioned or shortened, although most of them remained signed until their corresponding Interstate highway was completed. I've added the end date for each in parentheses.
    •US 50: Shortened to Sacramento and replaced by I-580, I-205, I-5, SR-99. (1972)
    •US 66: Eliminated and replaced by SR-66, I-15, I-40. (1972)
    •US 60: Eliminated and replaced by SR-60, I-10. (1972)
    •US 80: Eliminated and replaced by I-8. (1972 - San Diego Co.; 1974 Imperial Co.)
    •US 99: Eliminated and replaced by I-5, SR-99. (1967)

    Another major route renumbering occurred in 1972, fixing which US highways in California would remain. The most significant item, to the US highway buff, is the elimination of US 395 south of Adelanto, which was replaced in whole by I-15E and I-15. It appears that initially (in 1963) there were no plans to eliminate any portion of US 395, so it would have continued all the way to San Diego, with I-15 ending at I-10 in Colton. The State of California pulled off a major coup in 1972 by having unconstructed state routes 31 and 71 (slated as 6-8 lane freeways) designated as I-15. This meant that the State saved hundreds of millions of dollars by having I-15 transferred from an already existing freeway to an almost entirely new alignment. It also meant that the proposed US 395 freeway south of Temecula could be built with federal, not state, dollars by giving it the I-15 designation. Consequently, US 395 no longer served a real purpose and was truncated.

    In a decade, the face of signed highways in California changed dramatically. In 1962 there was but a handful of Interstate highways and 23 US highways. In 1972, only eight truncated US highways remained with over 20 Interstate highways either completed or well on their way toward completion. In no other state has there been such a dramatic change in highway numbering and highway types.

    The Highways Today

    I have traveled over many of the old highways in California and was surprised to find how much of them still exists. Some of these highways have been easy to find, such as old US 80 in the mountains east of San Diego or US 6, the Sierra Highway north of Los Angeles. In other cases, the old highways have actually been paved over or modified, like old sections of US 99 buried under I-5. Others, such as the old sections of US 99 that go through bypassed towns, have been swallowed up, transformed to match their surroundings. Many more, such as 99W in northern California, have been relegated to the status of frontage road.

    As mentioned above, seven US highways still exist in California: routes 6, 50, 95, 97, 101, 199, and 395. Three of them, routes 95, 97 and 199, have remained unchanged, while US 6 has been all but eliminated, save for a short stretch between Bishop and the Nevada border. The other three, routes 50, 101, and 395, have more or less been completely transformed into modern superhighways. Old alignments have mostly been bypassed and covered over, just as they have on the Interstates. Essentially, many modern US highways in California bear little resemblance to their forebears and show the evolution of highway building in California.

    The history of US highways is a reflection of the history of 20th-century America. In the 19th century, the railroads shaped the country, enabling people to travel to and settle in distant places. However, the invention of the automobile gave everyone unprecedented mobility. The US highway system, itself a reflection of the Progressive Era, shaped the nation by allowing easy access through standardized routes to all parts of the nation.

     
  3. StrangerInAStrangeLand SubQuantum Mechanic Valued Senior Member

    Messages:
    15,322
    The Reichsautobahnen

    Although the 1919 convoy shaped Eisenhower's views, his perspective would be supplemented years later by his observations of the German autobahn network of freeways.

    Plans for the autobahn date to the 1920s. Construction of the first segment (Cologne-Bonn) began in 1929, and it was dedicated by Mayor Konrad Adenauer of Cologne on August 6, 1932. When Adolf Hitler assumed power as German Chancellor in 1933, he took the program over, claiming it for his own. "We are setting up a program," he said later that year, "the execution of which we do not want to leave to posterity."

    Hitler's autobahn construction began in September 1933 under the direction of chief engineer Fritz Todt. The 14-mile expressway between Frankfurt and Darmstadt, which opened on May 19, 1935, was the first section completed under Hitler. By December 1941, when wartime needs brought construction to a halt, Germany had completed 2,400 miles (3,860 km), with another 1,550 miles (2,500 km) under construction.

    As many American visitors had noted during the 1930's, the autobahn was built before the country had enough motor vehicles to justify the expense. Only the well off or powerful in Germany could afford automobiles. Hitler had highlighted this problem in a speech on March 3, 1934, at the Berlin International Automobile and Motor Cycle Show: "It can only be said with profound sadness that, in the present age of civilization, the ordinary hard-working citizen is still unable to afford a car, a means of up-to-date transport and a source of enjoyment in the leisure hours. One must have courage to face problems and what cannot be solved within one year may become an established fact within ten years . . . ."

    Hitler intended to provide a small, affordable "people's car" (Volkswagen) with which his people could fill the autobahn.

    Dr. Ferdinand Porsche completed design of the vehicle in 1938. That autumn, the Nazi Party Labor Organization completed some of the construction on the assembly plant at Wolfsburg. In Hitler's full-employment economy, however, construction was delayed by the absence of workers. Benito Mussolini of Italy immediately provided 1,000 unemployed workers to Wolfsburg at Hitler's request, and more as needed.

    Over 360,000 Germans paid in full or in installments for the vehicle in advance of its production. However, in August 1939, Hitler ordered Dr. Porsche to switch the Wolfsburg plant to production of military vehicles based on the Volkswagen. With Czechoslovakia and Austria under German domination and troops ready to move into Poland, the military would have to take priority.

    In the end, none of the purchasers received a Volkswagen, or a refund, as war needs dominated the country. (For information on the history of the Volkswagen before and after World War II, see Phil Patton's Bug: The Strange Mutations of the World's Most Famous Automobile, Simon & Schuster, 2002.)

    At the outset of World War II in Europe, the autobahn proved to be a key asset to Germany. The German blitzkrieg ("lightning war"), which involved massive coordinated air and ground attacks to stun opponents into defeat, was a key to the German defeat of Poland in 1939 and of Belgium, Luxembourg, and the Netherlands in 1940, and to the early advances against the Soviet Army in 1941. The highway network also enhanced Germany's ability to fight on two fronts–Europe in the west, the Soviet Union in the east.

    Germany, despite these early advantages, had initiated the war before it had the industrial base to support its military over time. The absence of plants that could be converted to military production was one of the fatal flaws of the German war effort. As historian David P. Colley explained (in The Road to Victory: The Untold Story of World War II's Red Ball Express, Brassey's, Inc., 2000), "The bulk of the German Army of World War II was largely supplied by wagon trains, even to the end, and its infantry marched or rode trains or even used bicycles." He adds that Germany employed 2.8 million horses during the war to support its mechanized divisions.

    Once the United States entered the war in December 1941, the German deficiency was accentuated because America had the industrial base to create what Colley calls "the world's most highly mobile and mechanized force." The "secret weapon," as Colley called it, was the truck. America produced 3 million trucks or truck-type vehicles for the war. With the French rail network devastated by air attacks prior to the Allied D-Day invasion on June 6, 1944, the trucks, often operated by the black troops of the Red Ball Express, were the key to supplying the troops as they advanced through the French countryside.

    By the time the Allied forces reached Germany, they could take full advantage of the autobahn. E. F. Koch, a U.S. Public Roads Administration (PRA) employee, observed the autobahn in 1944-45 as a highway and bridge engineer with the Ninth Army. He and his engineering unit spent the unusually cold winter maintaining roads in Belgium, Luxembourg, and the Netherlands that, after the pounding of military vehicles and the thaw in early 1945, were in terrible shape. Conditions changed when they reached Germany in early 1945. "After crossing the Rhine and getting into the areas of Germany served by the Autobahn . . . our maintenance difficulties were over. Nearly all through traffic used the Autobahn and no maintenance on that system was required."

    As the Allies pursued the German forces across Germany, the autobahn proved invaluable, especially to the supply trucks racing behind the troops. The supply units and their vehicles, which had been run ragged in France, strained to keep up. Colley quotes Corporal Edwin Brice of the 3909th Quartermaster Truck Company (I Company) who observed on March 26, 1945, that the unit's trucks had "taken an awful beating across France," but added that "victory depends on our success in keeping troops and supplies up where they are needed. If a truck or a driver can move he or it is needed."

    In the immediate aftermath of the war, Eisenhower was the military head of occupied Germany. Writer Phil Patton pointed out in Open Road that in this capacity, "Eisenhower oversaw the 'debriefing' of the Reich, the creation of a series of reports that included close study of the Autobahns."

    The autobahn was a rural network, without segments into and through Germany's cities. This seemed appropriate to Eisenhower, but in Washington, Thomas H. MacDonald and Herbert Fairbank of the U.S. Public Roads Administration (the name of the Federal Highway Administration's predecessor during the 1940s) saw the absence of metropolitan segments as a flaw that made the autobahn a poor model for America's future. Unlike in Germany, traffic volumes were high in America, where car ownership was widespread. Congestion in America's cities had long been a serious complaint, one that MacDonald and Fairbank would address in their vision of the Interstate System. (For more information, see "The Genie in the Bottle.")

    In short, where Germany had intended to build the highways first and the vehicles second, America had the vehicles and no clear plan for building the highway network.

    For Eisenhower, the vision of the autobahn was strong in his mind as he became President. Years later, he would explain that "after seeing the autobahns of modern Germany and knowing the asset those highways were to the Germans, I decided, as President, to put an emphasis on this kind of road building. ... The old [1919] convoy had started me thinking about good, two-lane highways, but Germany had made me see the wisdom of broader ribbons across the land."

    •Richard Weingroff

    Updated: 10/21/2013



    The Autobahn

    The German Autobahn has taken on an almost legendary mystique. The reality is a little different than the legend. The myth of no speed limits is countered by the fact that Tempolimits are a fact of life on most of Germany’s highways, and traffic jams are common.


    Signs suggesting a recommended speed limit of 130 km/h (80 mph) are posted along most autobahns, while urban sections and a few dangerous stretches sometimes have posted speed limits as “low” as 100 km/h (62 mph). The fact is that Germany’s autobahn system is an extensive network of limited-access freeways that can usually provide a driver with a speedy route from city to city.

    Within six years after the completion of the first Cologne-Bonn autobahn in 1932, Germany added 3,000 kilometers (1,860 miles) of superhighway to its road network. Although Hitler has often been given credit for the autobahn, the real precursors were the Avus experimental highway in Berlin (built between 1913 and 1921) and Italy's 130-kilometer autostrada tollway between Milan and the northern Italian lakes (completed in 1923). Although Germany's depressed economy and the hyperinflation of the early 1920s prevented plans for new autobahns from being carried out at the time, many miles of roadway were built during the time of the Third Reich. Hitler saw the construction of autobahns primarily as a military advantage; its benefit as a job-creation program in the 1930s was an added plus.

    Today’s German autobahn system stretches 11,000 km (6,800 miles) across most parts of Germany. Plans to increase the number and length of autobahns and other highways have often met with citizen opposition on ecological grounds. One example, a proposed stretch of autobahn along the Baltic coast in northern Germany, has been surrounded by controversy, pitting those concerned with quality-of-life issues against those who see economic benefits for the region.

    Austria also has an autobahn network, but unlike Germany, motorists must purchase a special toll sticker and display it on the windshield in order to drive on Austrian autobahns and Schnellstraßen (two-lane limited access highways). Some mountain autobahns and tunnels are toll (Maut) highways run by public companies. They are not covered by the standard Austrian autobahn toll sticker. The speed limit on Austrian autobahns is 130 km/h (80 mph). See more below.

    Die Autobahnpickerl (Autobahn toll sticker)
    Austria and Switzerland charge drivers a toll for the use of their autobahns. Both countries use a Vignette (autobahn sticker) that must be displayed on a car’s windshield. But the two countries don’t have the same fees or system. More: Autobahn Toll Sticker in Austria and Switzerland.


    Adapted from a chapter in THE GERMAN WAY by Hyde Flippo
     
  4. StrangerInAStrangeLand SubQuantum Mechanic Valued Senior Member

    Messages:
    15,322
    History of the Soft Drink Fanta

    There are some who claim that Fanta, a popular soft drink produced and distributed by the Coca-Cola company, was actually invented by Nazis during the Third Reich. Others go so far as to say that Coca-Cola created the product itself to sell in Nazi Germany, fearing the backlash that might come from marketing Coca-Cola to both the Allied and Axis powers at the same time. Is there any truth to these accusations?

    Coca-Cola was a tremendously popular beverage in prewar Germany. Germany was its most successful market, and many people, including the Nazis, enjoyed it. That did not end with the beginning of World War II, although the Coca-Cola company in Germany found it increasingly difficult to procure the necessary ingredients to make the beverage. When the American-born director of the German Coca-Cola company died in 1938, the German-born Max Keith took over. Max Keith is the man who invented Fanta.

    The war had essentially isolated the German branch of the Coca-Cola company from Atlanta and from the rest of the world. Thus, the only way that Keith could communicate with the company's headquarters was through Coca-Cola's Swiss company. Although this connection through a neutral country allowed some limited communication with the company's headquarters, Keith could not use it to obtain the necessary ingredients for making the popular beverage. He had to come up with something else.

    What he came up with is what we now call Fanta. It got that name because when Keith told his employees to let their imaginations ("Fantasie" in German) run wild, someone offered that "Fanta" itself would be a good name.

    The beverage was originally made with what limited ingredients Keith had at his disposal. For example, he used whey, a byproduct of making cheese, and apple fiber, a byproduct of making cider. He also used a sugar substitute and whatever fruits he could obtain. The necessity of having to use different fruits as necessary accounts for the great variety of fruit flavors we still see in Fanta today.

    By this time, the German government had placed Keith in charge of all of Coca-Cola's properties in Germany and all occupied countries. Thus, he was in a powerful position to make a serious profit himself, if he wanted. He could have continued bottling under his own name and made himself rich. He proved a good steward of the company, however, and kept the company going during the war, saving many jobs. At the same time, Keith refused to join the Nazi party even though under pressure to do so.

    Fanta did not come out of the war spotless, however. The German Coca-Cola company probably used forced labor during the later years of the war. It also gave German soldiers the last of the original Coca-Cola it had in 1941 and advertised with the Nazi party extensively prior to and during World War II.

    It is difficult to say, however, what Keith should have done during the war. If he had not cooperated with the Nazi government, he would simply have been removed and replaced with someone who would probably not have been as good a steward of the company as he was. After the war, he handed his profits back to the Coca-Cola company, which bought the recipe for Fanta in 1960. It has been distributing Fanta ever since.

    So the Coca-Cola company itself did not make the product for the Nazis, nor was it invented by a Nazi. It was invented by the German head of the Coca-Cola company during the war, when he could no longer produce Coca-Cola. Nazis may have been among those to whom he marketed the new product, but it was not designed specifically for them.


    History of Fanta

    Fanta is manufactured by Coca-Cola for international markets. It is best known as an orange soda, although it comes in grape, lemon, lime and other flavors. Until it comes in banana, orange is this monkey's flavor of choice.

    Fanta was born in the austerity of post-war Germany, when the Coca-Cola company had to use sugar beet rather than cane to sweeten it, and the name is based on ‘Fantasie’.

    In the period leading up to World War II, between 1930 and 1936, Coca-Cola set up a division of the company in Germany, and continued that venture during the war.

    It recreated its image as a German company and allowed the Germans to produce all but two secret Coca-Cola ingredients in their own factories.

    In 1941 the German company's president, Max Keith, developed Fanta orange soda using orange flavoring and all the German-made Coke ingredients.

    Despite the increasing devastation caused by Allied bombing, for most of the war the German Coke company maintained profitable annual sales of about sixty million bottles.

    In 1960, Coca-Cola added its first new line in the United States, Fanta. Fanta products, which come in a variety of fruit flavors such as orange and grape, had been sold by Coke bottlers in other countries for many years.

    Whereas Sprite was an American campaign exported around the world, Fanta has its strength in overseas markets.

    In the early 1970s, Fanta was attacked because it contained artificial coloring. Competitors used this to disparage the product even though the coloring was quite safe. The company replaced the artificial coloring with natural coloring, but the impact on sales was severe for about five years before they began to grow again.

    As a leading global soft drink brand, Fanta launched a campaign building on its overseas market positions and emphasizing Fanta as a fashion statement.

    In 1979, Coca-Cola entered the Soviet Union with Fanta Orange Soda.
     
  8. StrangerInAStrangeLand SubQuantum Mechanic Valued Senior Member

    History of Root Beer during American Colonial Times

    There are early historical documents in which Shakespeare is noted to have drunk "small beers." This European-style brew was actually invented in colonial America.

    The first settlers in America could not buy drinks such as they had had in England, and in a new country they often could not make them. So they found ways of making other drinks in their place, among them root beer.

    The recipe contained 2 to 12 percent alcohol and produced what was considered a light, social drink made from herbs, berries and bark.

    Other beverages of the time included birch beer, sarsaparilla beer and ginger beer. Root beer was brewed in colonial times from a variety of substances first used by Indians for healing purposes.

    Ingredients in early root beers included allspice, birch bark, coriander, juniper, ginger, wintergreen, hops, burdock root, dandelion root, spikenard, pipsissewa, guaiacum chips, sarsaparilla, spicewood, cherry bark, yellow dock, prickly ash bark, sassafras root, vanilla beans, dog grass, molasses and licorice.

    Only root beer would emerge as a longtime favorite. In Pennsylvania, sassafras oil was used to flavor beer, which became known as root beer. When the soft drink version of root beer became popular, sassafras oil remained a key ingredient.

    In the nineteenth century, American cowboys often ordered 'sarsaparilla' or root beer because sarsaparilla was both the most widely used treatment for syphilis and a reputed male aphrodisiac.


    Early History of Root Beer A & W

    Roy Allen, who refurbished old hotels, met a pharmacist who had perfected a recipe for root beer.

    Allen bought the recipe and on June 20, 1919, opened a root beer stand in Lodi, California, offering frosty mugs of root beer for a nickel.

    Shortly thereafter, he opened more stands in Stockton and Sacramento, one of which may have been a drive-in.

    In 1920, Frank Wright, an employee at the Stockton stand, became Allen's partner. They combined their initials and called the company A & W Root Beer.

    Additional A & W stands were opened throughout California, Utah and Texas.

    Allen eventually bought out Wright and trademarked the name, and A & W became one of the first fast-food franchise chains in the country.

    Franchisees paid a small licensing fee, displayed the A & W logo, and bought root beer syrup from Allen.

    Other than these connections, little commonality existed among franchisees - no common architecture, no common menu, and no common procedures or national advertising.

    Some A & W Root Beer franchisees began selling food, including hamburgers and hot dogs, along with root beer. Some early A & W Root Beer stands were drive-ins, featuring tray-boys and tray-girls, later renamed carhops, who brought orders to customers waiting in their cars outside.

    The Depression affected franchisees differently: some went out of business, but others opened new stands. In 1933, A & W had 170 outlets; by 1942, it had 260 stands nationwide.

    The war years between 1941 and 1945, on the other hand, were a very difficult time for A & W.

    There were labor shortages and sugar shortages, and by the time the war ended many franchises had closed. After the war, however, A & W expanded rapidly.

    During the 1950s, Roy Allen sold the business to a Nebraskan, Gene Hurtz, who formed the A & W Root Beer Company.

    Within ten years, the number of A & W outlets had increased to more than 2,000.

    In 1956, an A & W Root Beer outlet opened in Canada, followed by outlets in Guam and the Philippines.


    History of Root Beer

    Root beer is nonalcoholic, made from extracts or flavored syrups that are then diluted in carbonated water.

    There are early historical documents in which Shakespeare is noted to have drunk "small beers." This European-style brew, actually made from an early colonial American recipe, contained 2 to 12 percent alcohol and was considered a light, social drink made from herbs, berries and bark. During American colonial times, root beer was introduced along with other beverages like birch beer, sarsaparilla beer, and ginger beer. Ingredients in early root beers included allspice, birch bark, coriander, juniper, ginger, wintergreen, hops, burdock root, dandelion root, spikenard, pipsissewa, guaiacum chips, sarsaparilla, spicewood, cherry bark, yellow dock, prickly ash bark, sassafras root, vanilla beans, dog grass, molasses and licorice. Only root beer would emerge as a longtime favorite. There are even historical documents showing 18th-century farm owners brewing an alcoholic version of root beer in backyard stills for family get-togethers, social events, and parties.

    Meanwhile, Charles Hires, a Philadelphia pharmacist, discovered an herbal tea while on his honeymoon around the same time. After taking the recipe of herbs, berries and roots home to Philadelphia with him, he began selling the public a packaged dry mixture made from many of the same ingredients as the original herbal tea. Well received, it prompted Hires to develop a liquid concentrate blended from more than 25 herbs, berries and roots.

    Charles Hires almost named his newly formulated drink 'root tea,' since it was a tea brewed from roots and herbs. He was then persuaded to switch the name to 'root beer' to appeal to the larger market of hard-drinking Pennsylvania miners.

    The public loved the new drink, and Hires introduced commercial root beer to the public in 1876 at the Philadelphia Centennial Exhibition. In no time, it became a popular drink of its day. The Hires family continued to manufacture root beer and in 1893 first sold and distributed it in bottles.
     
  9. StrangerInAStrangeLand SubQuantum Mechanic Valued Senior Member

    History of Pepsi Cola

    One of New Bern’s most trumpeted achievements was the invention of Pepsi-Cola. The summer of 1898, as usual, was hot and humid in New Bern, North Carolina. So a young pharmacist named Caleb Bradham began experimenting with combinations of spices, juices, and syrups trying to create a refreshing new drink to serve his customers.

    He planned to use the drink to cure upset stomachs.

    He succeeded beyond all expectations: he invented the beverage known around the world as Pepsi-Cola. Caleb Bradham knew that to keep people returning to his pharmacy, he would have to turn it into a gathering place.

    His creation was a unique mixture of kola nut extract, vanilla and rare oils. It was not originally named Pepsi; he named it after himself, calling it 'Brad's Drink.' Caleb later decided to rename it "Pepsi-Cola," after two of its main ingredients, pepsin and the kola nut, and advertised his new soft drink. People responded, and sales of Pepsi-Cola started to grow, convincing him that he should form a company to market the new beverage.

    In 1902, he launched the Pepsi-Cola Company in the back room of his pharmacy, and applied to the U.S. Patent Office for a trademark.

    At first, he mixed the syrup himself and sold it exclusively through soda fountains. But soon Caleb recognized that a greater opportunity existed to bottle Pepsi so that people could drink it anywhere.

    The business began to grow, and on June 16, 1903, "Pepsi-Cola" was officially registered with the U.S. Patent Office. That year, Caleb sold 7,968 gallons of syrup, using the theme line "Exhilarating, Invigorating, Aids Digestion."

    In 1904, Caleb Bradham purchased the Bishop Factory at Johnson and Hancock Streets, converting it into the first Pepsi Cola factory.

    He also began awarding franchises to bottle Pepsi to independent investors, whose number grew from just two in 1905, in the cities of Charlotte and Durham, North Carolina, to 15 the following year, and 40 across the United States by 1907. By the end of 1910, there were Pepsi-Cola franchises in 24 states with some 300 bottlers.

    Building a strong franchise system was one of Caleb's greatest achievements. Local Pepsi-Cola bottlers, entrepreneurial in spirit and dedicated to the product's success, provided a sturdy foundation.

    They were the cornerstone of the Pepsi-Cola enterprise. By 1907, the new company was selling more than 100,000 gallons of syrup per year.

    Growth was phenomenal, and in 1909 Caleb erected a headquarters so spectacular that the town of New Bern pictured it on a postcard. Famous racing car driver Barney Oldfield endorsed Pepsi in newspaper ads as "A bully drink...refreshing, invigorating, a fine bracer before a race."

    The previous year, Pepsi had been one of the first companies in the United States to switch from horse-drawn delivery carts to motor vehicles, and Caleb's business expertise captured widespread attention.

    He was even mentioned as a possible candidate for Governor. A 1913 editorial in the Greensboro Patriot praised him for his "keen and energetic business sense."

    Pepsi-Cola enjoyed 17 unbroken years of success. Caleb now promoted Pepsi sales with the slogan, "Drink Pepsi-Cola. It will satisfy you." Then came World War I, and the cost of doing business increased drastically. Sugar prices seesawed between record highs and disastrous lows, and so did the cost of producing Pepsi-Cola.

    The price of sugar reached 26 cents per pound in 1920. Unfortunately, Bradham had gambled on the fluctuations of sugar prices during World War I, betting that they would continue to rise; instead they fell. He purchased large blocks of sugar stock, only to watch the price plummet to 2 cents per pound by the end of the year.

    Pepsi-Cola was declared bankrupt in 1923. The assets were sold to a North Carolina company which, in turn, sold the operation to a Wall Street broker for $35,000.

    By 1931, only two plants remained open. The company soon went bankrupt again, and it wasn't until a successful candy manufacturer, Charles G. Guth, appeared on the scene that the future of Pepsi-Cola was assured.

    Guth was president of Loft Incorporated, a large chain of candy stores and soda fountains along the eastern seaboard. He saw Pepsi-Cola as an opportunity to discontinue an unsatisfactory business relationship with the Coca-Cola Company, and at the same time to add an attractive drawing card to Loft's soda fountains. He was right. After five owners and 15 unprofitable years, Pepsi-Cola was once again a thriving national brand.

    The formula for Pepsi was changed at this time; the new formula eliminated pepsin as a major ingredient. By 1934, Pepsi-Cola had turned the corner and began purchasing bottling operations throughout the United States.

    One oddity of the time: for a number of years, all of Pepsi-Cola's sales were actually administered from a Baltimore building apparently owned by Coca-Cola and named for its president.

    Within two years, Pepsi would earn $1 million for its new owner. With the resurgence came new confidence, a rarity in those days because the nation was in the early stages of a severe economic decline that came to be known as the Great Depression.

    Pepsi’s net earnings had risen to more than $5.5 million by 1939.

    Today, the company is a diversified conglomerate with a complete product line across the food and beverage categories. Pepsi is one of the most popular drinks in the world.

    The company also has the non-carbonated drink Gatorade, acquired with the purchase of Quaker Oats, which leads the sports-drink market.
     
  10. StrangerInAStrangeLand SubQuantum Mechanic Valued Senior Member

    Mathew Brady
    Photographer
    c. 1823 - January 16, 1896

    Mathew Brady is often referred to as the father of photojournalism and is most well known for his documentation of the Civil War. His photographs, and those he commissioned, had a tremendous impact on society at the time of the war, and continue to do so today. He and his employees photographed thousands of images including battlefields, camp life, and portraits of some of the most famous citizens of his time including Abraham Lincoln and Robert E. Lee.

    Brady was born in Warren County, New York, in the early 1820s to Irish immigrants, Andrew and Julia Brady. Little is known about his early life, but historians believe that during a trip to the Albany area, in search of a cure for an eye inflammation, he met portrait painter William Page. It is also believed that through William Page, Brady met Samuel F.B. Morse. Morse, a professor of art, painting, and design at New York University and the inventor of the telegraph, likely tutored Brady in the newly developed technology of daguerreotypy, the process of creating a mirror image on a silver-surfaced copper plate.

    After moving to New York City, Brady began manufacturing cases for daguerreotypes, jewelry, and painted miniature portraits. He worked to build his skill and his reputation, opening "The Daguerrean Miniature Gallery" on Broadway in 1844. Well known and accomplished in his profession, Brady won the highest award at the American Institute’s annual fair in 1844, 1845, 1846, 1849, and 1857, during which time he also began photographing well-known Americans such as Edgar Allan Poe and James Fenimore Cooper.

    Brady opened a studio in Washington DC and began making daguerreotypes of prominent politicians such as Henry Clay, Daniel Webster, John C. Calhoun, Zachary Taylor, and Millard Fillmore. In 1850 he published "The Gallery of Illustrious Americans," which sold for $15, equivalent to about $400 today. In 1851 Brady won medals at the Fair of All Nations in London and at New York’s Industrial Exhibition at Crystal Palace for his daguerreotypes.

    At the outbreak of the Civil War, Brady sought to create a comprehensive photo-documentation of the war. At his own expense, he organized a group of photographers and staff to follow the troops as the first field photographers. Brady supervised the activities of the photographers, including Timothy H. O'Sullivan, Alexander Gardner, and James F. Gibson, preserved plate-glass negatives, and bought images from private photographers in order to make the collection as complete as possible. Brady and his staff photographed many scenes of the Civil War, including the First Battle of Bull Run, Antietam, and Gettysburg.

    In 1862 Brady shocked the nation when he displayed the first photographs of the carnage of the war in his New York studio in an exhibit entitled "The Dead of Antietam." These images, photographed by Alexander Gardner and James F. Gibson, were the first to picture a battlefield before the dead had been removed and the first to be distributed to a mass public. They received more media attention than any other series of images during the rest of the war. A New York Times article in October 1862 illustrates the impression these images left upon American culture, stating, "Mr. Brady has done something to bring home to us the terrible reality and earnestness of war. If he has not brought bodies and laid them in our door-yards and along the streets, he has done something very like it…"

    By the end of the war Brady had accumulated serious debt, which he hoped to recoup by selling his collection to the New York Historical Society; however, the deal fell through. Fortunately for the American public, Brady sold his collection to the United States government in 1875 for $25,000, just enough to pay off the debt he had accrued.

    Following the war Brady continued to work in Washington DC with his nephew Levin Handy, who was also a photographer. In 1895 Brady suffered two broken legs as a result of a traffic accident. Having never fully recovered, Brady died on January 16, 1896 in New York. His funeral was financed by the New York 7th Regiment Veteran’s Association. Brady is buried beside his wife in Congressional Cemetery in Washington DC.

    Mathew Brady

    Mathew Brady (1822-96) was a well-known 19th-century American photographer who was celebrated for his portraits of politicians and his photographs of the American Civil War (1861-65). In addition to his own work, Brady employed a team of assistants who fanned out across the country to capture the war. Together, they produced more than 10,000 images of the conflict, and brought the gruesome realities of warfare home to the American public.

    In his portraits of prominent Americans in the late 1840s and 1850s, and in the camp and battlefield views made under his aegis during the Civil War, Mathew Brady helped define a role for American photographers as historians of contemporary life. Although he operated a camera himself only infrequently (he was hampered by poor eyesight), he shaped, more effectively than any of his contemporaries, an identity for photography as a force in American society, politics, and culture.

    Did You Know?
    Few of the photographers Brady hired to document the Civil War received individual credit for their work.

    In 1839, the same year Louis-Jacques-Mandé Daguerre announced his invention of photography in Paris, the young Brady arrived in New York City from his upstate New York home where he had been born to Irish parents. After a brief stint as a clerk in the A. T. Stewart department store and a few years as a manufacturer of jewelry cases (including cases for daguerreotypes), he opened a daguerreotype portrait studio at the corner of Broadway and Fulton streets in 1844. In the growing competition among professional daguerreotypists Brady became expert in advertising himself and attracting prominent sitters. ‘Brady of Broadway’ became the most widely recognized and admired photographic trademark of the antebellum era.

    The inaugural issue of the Photographic Art-Journal in 1851 described him as the ‘fountainhead’ of the young profession of portrait photography. In the same year he was awarded one of three gold medals for daguerreotypes at the Crystal Palace Exhibition in London (the other two also went to Americans). In the 1850s his trade, now including paper prints, expanded rapidly; he moved his gallery into more sumptuous quarters uptown and in 1858 opened a branch in Washington, D.C. With his portraits of public figures appearing regularly as engravings in the national press, Brady had immense influence on the times. His famous Cooper Union portrait of Abraham Lincoln during the presidential campaign of 1860 contributed in no small way to making Lincoln a popular figure.

    But Brady’s greatest success lay in his organization of a corps of Civil War photographers who followed the armies and produced an incomparable firsthand record of the war years. The pictures he acquired and published represent one of the greatest collective depictions in photography of a major historical event. Brady, however, never recovered from the loss of the private fortune he invested in this project, and his career declined precipitously during the Gilded Age. When he died in 1896 he was close to destitution.

    Although he made no profit from it in his lifetime, his collection of Civil War pictures, including many antebellum portraits of prominent figures of the war years, finally made its way into national archives, where it remains the chief source of visual information about the period and the war. Interest in Mathew Brady revived in the 1930s, and his work exerted a major influence on the documentary movement in photography in the depression era.

    The Reader’s Companion to American History. Eric Foner and John A. Garraty, Editors.

     
  11. StrangerInAStrangeLand SubQuantum Mechanic Valued Senior Member

    The Forgotten History of Gay Marriage by Paul Canning April 15, 2014

    Republicans and other opponents of gay marriage often speak of marriage as a 2,000-year-old tradition (or even older). Quite apart from the fact that the definition of marriage has changed since the days when it was a business transaction, usually between men, there is ample evidence that even within the Christian tradition there was a time when same-sex relationships were not just tolerated but celebrated.

    In the famous St. Catherine’s monastery on Mount Sinai, there is an icon which shows two robed Christian saints getting married. Their ‘pronubus’ (official witness, or “best man”) is none other than Jesus Christ.

    The happy couple are 4th Century Christian martyrs, Saint Serge and Saint Bacchus — both men.

    Severus of Antioch in the sixth century explained that “we should not separate in speech [Serge and Bacchus] who were joined in life.” More bluntly, in the definitive 10th century Greek account of their lives, Saint Serge is described as the “sweet companion and lover (erastai)” of St. Bacchus.

    Legend says that Bacchus appeared to the dying Sergius as an angel, telling him to be brave because they would soon be reunited in heaven.

    Yale historian John Boswell discovered this early Christian history and wrote about it nearly 20 years ago in "Same-Sex Unions in Pre-Modern Europe" (1994).

    In ancient church liturgical documents, he found the existence of an “Office of Same Sex Union” (10th and 11th century Greek) and the “Order for Uniting Two Men” (11th and 12th century Slavonic).

    He found many examples of:
    A community gathered in a church
    A blessing of the couple before the altar
    Their right hands joined as at heterosexual marriages
    The participation of a priest
    The taking of the Eucharist
    A wedding banquet afterwards

    A 14th century Serbian Slavonic “Office of the Same Sex Union,” uniting two men or two women, had the couple having their right hands laid on the Gospel while having a cross placed in their left hands. Having kissed the Gospel, the couple were then required to kiss each other, after which the priest, having raised up the Eucharist, would give them both communion.

    Boswell documented such sanctified unions up until the 18th century.

    In late medieval France, a contract of “enbrotherment” (affrèrement) existed for men who pledged to live together sharing ‘un pain, un vin, et une bourse’ – one bread, one wine, and one purse.

    Other religions, such as Hinduism and some Native American religions, have respect for same-sex couples woven into their history.

    When right-wing evangelical Christians talk about "traditional marriage," they are invoking a tradition that never existed.




    Odd Couples: A History of Gay Marriage in Scandinavia, by Jens Rydström

    Abstract :
    The concept of marriage as a union of a man and a woman was fundamentally challenged by the introduction of registered partnership in Denmark in 1989. 'Odd couples: a history of gay marriage in Scandinavia' is the first comprehensive history of registered partnership and gay marriage in Scandinavia. It presents an outstanding study of the interaction between gay activism and traditional party politics. Based on interviews, parliamentary print and party documents, it gives a first-hand account of how the political stakeholders acted in a short and decisive period of Scandinavian history. The author traces the origins of laws which initially were extremely controversial - inside and outside the gay community - but have now gained broad popular and political support. The different experiences in all Scandinavian countries (Denmark, including Greenland and the Faroe Islands; Norway; Sweden; Iceland; and Finland) are investigated in order to present a nuanced understanding of a fascinating political process that began in the 1960s and continues to change the ways we understand family, sexuality and nation.



    Boston marriage as a term is said to have been in use in New England in the decades spanning the late 19th and early 20th centuries to describe two women living together, independent of financial support from a man.

    The fact of relatively formalized romantic friendships between women predates the term "Boston marriage" and there is a long record of it in England and other European countries.[1] The term "Boston marriage" became associated with Henry James's The Bostonians (1886), a novel involving a long-term co-habiting relationship between two unmarried women, "new women," although James himself never used the term. James' sister Alice, who lived in such a relationship with another woman, Louise, was among his sources for such a relationship.[2]

    There are many examples of women in "Boston marriage" relationships. Perhaps the most famous of these romantic friendships was the late 1700s relationship between two Irish upper-class women, Eleanor Butler and Sarah Ponsonby, nicknamed the Ladies of Llangollen. Elizabeth Mavor suggests that the institution of romantic friendships between women reached a zenith in eighteenth-century England.[1] In the U.S., perhaps the best known example is that of the relationship of the novelist Sarah Orne Jewett and her companion Annie Adams Fields, widow of the editor of The Atlantic Monthly, during the late 1800s.

    Lillian Faderman provided one of the most comprehensive studies of Boston marriages in Surpassing the Love of Men (1981). Reviewers used the term to describe the Jewett-Fields relationship depicted in the 1998 documentary film Out of the Past.[3] David Mamet's play Boston Marriage, which premiered in 2000, helped popularize the term.

    Some women did not marry because they felt they had a better connection to women than to men. Some of these women lived together. Of necessity, such women were generally financially independent due to family inheritance or career earnings. Women who chose to have a career (doctor, scientist, professor) created a new class of women who were not dependent on men. Educated women with careers who wanted to live with other women were allowed a measure of social acceptance and freedom to arrange their own lives.[4] They were usually feminists with shared values, involved in social and cultural causes. Such women were generally self-sufficient in their own lives, but gravitated to each other for support in an often disapproving and sometimes hostile society.
     
    Last edited by a moderator: Jun 29, 2014
  12. StrangerInAStrangeLand SubQuantum Mechanic Valued Senior Member

    The US Senate Once Tried to Ban Dial Telephones From Capitol Hill

    Today for most people who own cell phones, manual dialing is largely a thing of the past: your contacts are stored in your phone and you rarely have to type in a new number.

    But that wasn’t the case in 1930: back then it was normal to pick up a phone and be connected with an operator who would then place your call for you. And for some in the early 20th Century, the thought of dialing your own phone number was downright scary.

    In 1930 the US Senate took up the pressing issue of the new dial phones, adopting the following resolution after dial phones were installed in Senate offices:

    Whereas dial telephones are more difficult to operate than are manual telephones; and
    Whereas Senators are required, since the installation of dial phones in the Capitol, to perform the duties of telephone operators in order to enjoy the benefits of telephone service; and
    Whereas dial telephones have failed to expedite telephone service;
    Therefore be it resolved that the Sergeant at Arms of the Senate is authorized and directed to order the Chesapeake and Potomac Telephone Co. to replace with manual phones within 30 days after the adoption of this resolution, all dial telephones in the Senate wing of the United States Capitol and in the Senate office building.

    Bill sponsors hoped the measure would convince the phone company to remove dial phones from all of Washington, DC, not just Capitol Hill. The motion passed, and though younger Senators preferred to dial their own numbers rather than wait for an operator to connect them, the dial phones were banned. At least temporarily: a later compromise allowed Senators to choose which type of phone they wanted for their office.


    18 American Presidents Didn’t Have a Vice President For All or Part of Their Terms

    We’re used to seeing a president and a vice president, but more than a dozen times throughout American history, there hasn’t been a sitting VP.

    The first American president to spend part of his time in office without a #2 was James Madison, who was savvy enough to win two terms in office. One thing he wasn't good at? Choosing his vice presidents, apparently. Both men he chose, one for each term, died partway through, so he simply finished his terms without a veep!

    The situation arose again, for a longer period, in 1841 when John Tyler left his spot as vice president to become president (William Henry Harrison, the president he served under, died after just a month in office). Tyler served out the entire term without a VP. A similar situation occurred with Millard Fillmore, who became president after his #1 died as well.

    Luckily, Richard Nixon had the good sense to appoint a second vice president after his first (Spiro Agnew) resigned when caught taking bribes. Well, good sense, plus the 25th Amendment (ratified in 1967), which established a process for filling a vacant vice presidency. When Nixon himself resigned in disgrace, his second vice president, Gerald Ford, became president. And thus ended the tradition of leaving the vice presidential slot empty: aside from brief gaps while replacements were confirmed, America has not been without a vice president since.


    The 9th US President Died of a Cold… Which He May Have Caught at His Own Inauguration Ceremony

    The president who served the shortest period of time after being elected to office was William Henry Harrison. Harrison was president for only 30 days, 12 hours and 32 minutes before keeling over at age 68. The circumstances under which President Harrison, the first ever to die in office, met his end are disputed to this day.

    Harrison was elected in 1840 running as a rugged, tested and weathered war hero. The day that Harrison was sworn into office was rainy and cold, and to make matters worse, the newly elected president chose to deliver his entire 8,444-word speech to the assembled crowd (and this was after it had been edited for length by a friend). The speech, which still ranks as the longest inaugural speech in American history, took two hours to read.


    Perhaps this was not the smartest choice in retrospect. Also not so smart of him: refusing to wear a hat or even a coat in the pouring rain.

    A month later he was dead of pneumonia, which he may have contracted while savoring every moment of his inauguration day out in the rain. It's unclear whether he came down with the illness at the inauguration or afterwards, but what is known is that the cures of the day included opium, snakes, and castor oil, none of which helped.

    His grandson, Benjamin Harrison, later became the 23rd president of the United States. On the day the younger Harrison was sworn in, he reportedly wore a full suit of leather armor, just in case. He went on to serve a complete term, although, ironically, he later died of pneumonia as well.


    Republican Leader Dick Armey Was Caught Referring to a Congressman as a “Fag”

    The Republican Party has never had a rosy relationship with gays and lesbians, and this incident probably didn’t help much.

    It was January 1995, and Republicans had swept into power a few months beforehand. The new House Majority Leader was Dick Armey, a conservative Republican.

    Armey, who had been elected from Texas (where a string of murders of gays had recently happened), was known for opposing hate crimes legislation protecting gays and lesbians.

    In criticizing a fellow Representative, Barney Frank (D-Massachusetts), who is openly gay, Armey referred to him as “Barney Fag” in an interview.

    According to a column published in the New York Times a short time after the incident, Representative Armey tried to get reporters not to air the tape of him using the word and, failing that, attacked the press for reporting about it.

    Armey later claimed that he merely mispronounced Representative Frank’s last name. Frank responded after hearing of the incident, “I rule out that it was an innocent mispronunciation… I turned to my own expert, my mother, who reports that in 59 years of marriage, no one ever introduced her as Elsie Fag.”


    White Supremacists Once Overthrew a US City Government

    Although it’s hard to believe now, in 1898 a group of white supremacists seized power of the municipal government in Wilmington, North Carolina following a bloody coup d’etat.

    The incident, which followed Reconstruction in the South after the Civil War, is the only time in American history that any form of US government has been overthrown. Back then, Democrats were a party of conservative racists who hated big government, and Republicans were anti-bigotry liberals who thought the government should do more to enforce equality among blacks and whites (particularly in the South).


    Just after an election that swept the Democrats into power in the North Carolina state legislature, an emboldened mob of white supremacists overran the City Hall of Wilmington, then the largest city in the state.

    Supremacists killed dozens of people and forced the sitting Republican mayor and his administration (as well as several members of the city council) to resign, virtually if not literally at gunpoint. After the supremacists had usurped power, they installed as mayor the mob's leader, former Confederate soldier Alfred Moore Waddell, who had previously served in the US House of Representatives.

    Waddell remained mayor for six years before retiring at age 72. Eventually, elections and retirements replaced the other coup leaders who ran Wilmington, and none were ever brought to justice for their illegal seizure of power. The incident is rarely taught in North Carolina schools.
     
    Early Death Penalty Laws

    The first established death penalty laws date as far back as the Eighteenth Century B.C. in the Code of King Hammurabi of Babylon, which codified the death penalty for 25 different crimes. The death penalty was also part of the Fourteenth Century B.C.'s Hittite Code; in the Seventh Century B.C.'s Draconian Code of Athens, which made death the only punishment for all crimes; and in the Fifth Century B.C.'s Roman Law of the Twelve Tablets. Death sentences were carried out by such means as crucifixion, drowning, beating to death, burning alive, and impalement.

    In the Tenth Century A.D., hanging became the usual method of execution in Britain. In the following century, William the Conqueror would not allow persons to be hanged or otherwise executed for any crime, except in times of war. This trend would not last, for in the Sixteenth Century, under the reign of Henry VIII, as many as 72,000 people are estimated to have been executed. Some common methods of execution at that time were boiling, burning at the stake, hanging, beheading, and drawing and quartering. Executions were carried out for such capital offenses as marrying a Jew, not confessing to a crime, and treason.

    The number of capital crimes in Britain continued to rise throughout the next two centuries. By the 1700s, 222 crimes were punishable by death in Britain, including stealing, cutting down a tree, and robbing a rabbit warren. Because of the severity of the death penalty, many juries would not convict defendants if the offense was not serious. This led to reforms of Britain's death penalty. From 1823 to 1837, the death penalty was eliminated for over 100 of the 222 crimes punishable by death. (Randa, 1997)

    The Death Penalty in America

    Britain influenced America's use of the death penalty more than any other country. When European settlers came to the new world, they brought the practice of capital punishment. The first recorded execution in the new colonies was that of Captain George Kendall in the Jamestown colony of Virginia in 1608. Kendall was executed for being a spy for Spain. In 1612, Virginia Governor Sir Thomas Dale enacted the Divine, Moral and Martial Laws, which provided the death penalty for even minor offenses such as stealing grapes, killing chickens, and trading with Indians.

    Laws regarding the death penalty varied from colony to colony. The Massachusetts Bay Colony held its first execution in 1630, even though the Capital Laws of New England did not go into effect until years later. The New York Colony instituted the Duke's Laws of 1665. Under these laws, offenses such as striking one's mother or father, or denying the "true God," were punishable by death. (Randa, 1997)


    The Abolitionist Movement

    Colonial Times

    The abolitionist movement finds its roots in the writings of European theorists Montesquieu, Voltaire and Bentham, and English Quakers John Bellers and John Howard. However, it was Cesare Beccaria's 1767 essay, On Crimes and Punishment, that had an especially strong impact throughout the world. In the essay, Beccaria theorized that there was no justification for the state's taking of a life. The essay gave abolitionists an authoritative voice and renewed energy, one result of which was the abolition of the death penalty in Austria and Tuscany. (Schabas, 1997)

    American intellectuals as well were influenced by Beccaria. The first attempted reforms of the death penalty in the U.S. occurred when Thomas Jefferson introduced a bill to revise Virginia's death penalty laws. The bill proposed that capital punishment be used only for the crimes of murder and treason. It was defeated by only one vote.

    Also influenced was Dr. Benjamin Rush, a signer of the Declaration of Independence and founder of the Pennsylvania Prison Society. Rush challenged the belief that the death penalty serves as a deterrent. In fact, Rush was an early believer in the "brutalization effect." He held that having a death penalty actually increased criminal conduct. Rush gained the support of Benjamin Franklin and Philadelphia Attorney General William Bradford. Bradford, who would later become the U.S. Attorney General, led Pennsylvania to become the first state to consider degrees of murder based on culpability. In 1794, Pennsylvania repealed the death penalty for all offenses except first degree murder. (Bohm, 1999; Randa, 1997; and Schabas, 1997)

    Nineteenth Century

    In the early to mid-Nineteenth Century, the abolitionist movement gained momentum in the northeast. In the early part of the century, many states reduced the number of their capital crimes and built state penitentiaries. In 1834, Pennsylvania became the first state to move executions away from the public eye, carrying them out in correctional facilities.

    In 1846, Michigan became the first state to abolish the death penalty for all crimes except treason. Later, Rhode Island and Wisconsin abolished the death penalty for all crimes. By the end of the century, the world would see the countries of Venezuela, Portugal, Netherlands, Costa Rica, Brazil and Ecuador follow suit. (Bohm, 1999 and Schabas, 1997).

    Although some U.S. states began abolishing the death penalty, most states held onto capital punishment. Some states made more crimes capital offenses, especially for offenses committed by slaves. In 1838, in an effort to make the death penalty more palatable to the public, some states began passing laws against mandatory death sentencing, instead enacting discretionary death penalty statutes. The 1838 enactment of discretionary death penalty statutes in Tennessee, and later in Alabama, was seen as a great reform. This introduction of sentencing discretion in the capital process was perceived as a victory for abolitionists because prior to the enactment of these statutes, all states mandated the death penalty for anyone convicted of a capital crime, regardless of circumstances. With the exception of a small number of rarely committed crimes in a few jurisdictions, all mandatory capital punishment laws had been abolished by 1963. (Bohm, 1999)

    During the Civil War, opposition to the death penalty waned, as more attention was given to the anti-slavery movement. After the war, new developments in the means of executions emerged. The electric chair was introduced at the end of the century. New York built the first electric chair in 1888, and in 1890 executed William Kemmler. Soon, other states adopted this execution method. (Randa, 1997)

    Early and Mid-Twentieth Century

    Although some states abolished the death penalty in the mid-Nineteenth Century, it was actually the first half of the Twentieth Century that marked the beginning of the "Progressive Period" of reform in the United States. From 1907 to 1917, six states completely outlawed the death penalty and three limited it to the rarely committed crimes of treason and first degree murder of a law enforcement official. However, this reform was short-lived. There was a frenzied atmosphere in the U.S., as citizens began to panic about the threat of revolution in the wake of the Russian Revolution. In addition, the U.S. had just entered World War I and there were intense class conflicts as socialists mounted the first serious challenge to capitalism. As a result, five of the six abolitionist states reinstated their death penalty by 1920. (Bedau, 1997 and Bohm, 1999)

    In 1924, the use of cyanide gas was introduced, as Nevada sought a more humane way of executing its inmates. Gee Jon was the first person executed by lethal gas. The state tried to pump cyanide gas into Jon's cell while he slept, but this proved impossible, and the gas chamber was constructed. (Bohm, 1999)

    From the 1920s to the 1940s, there was a resurgence in the use of the death penalty. This was due, in part, to the writings of criminologists, who argued that the death penalty was a necessary social measure. In the United States, Americans were suffering through Prohibition and the Great Depression. There were more executions in the 1930s than in any other decade in American history, an average of 167 per year. (Bohm, 1999 and Schabas, 1997)

    In the 1950s, public sentiment began to turn away from capital punishment. Many allied nations either abolished or limited the death penalty, and in the U.S., the number of executions dropped dramatically. Whereas there were 1,289 executions in the 1940s, there were 715 in the 1950s, and the number fell even further, to only 191, from 1960 to 1976. In 1966, support for capital punishment reached an all-time low. A Gallup poll showed support for the death penalty at only 42%. (Bohm, 1999 and BJS, 1997)


    Constitutionality of the Death Penalty in America

    Challenging the Death Penalty

    The 1960s brought challenges to the fundamental legality of the death penalty. Before then, the Fifth, Eighth, and Fourteenth Amendments were interpreted as permitting the death penalty. However, in the early 1960s, it was suggested that the death penalty was a "cruel and unusual" punishment, and therefore unconstitutional under the Eighth Amendment. In 1958, the Supreme Court had decided in Trop v. Dulles (356 U.S. 86), that the Eighth Amendment contained an "evolving standard of decency that marked the progress of a maturing society." Although Trop was not a death penalty case, abolitionists applied the Court's logic to executions and maintained that the United States had, in fact, progressed to a point that its "standard of decency" should no longer tolerate the death penalty. (Bohm, 1999)

    In the late 1960s, the Supreme Court began "fine tuning" the way the death penalty was administered. To this effect, the Court heard two cases in 1968 dealing with the discretion given to the prosecutor and the jury in capital cases. The first case was U.S. v. Jackson (390 U.S. 570), where the Supreme Court heard arguments regarding a provision of the federal kidnapping statute requiring that the death penalty be imposed only upon recommendation of a jury. The Court held that this practice was unconstitutional because it encouraged defendants to waive their right to a jury trial to ensure they would not receive a death sentence.

    The other 1968 case was Witherspoon v. Illinois (391 U.S. 510). In this case, the Supreme Court held that a potential juror's mere reservations about the death penalty were insufficient grounds to prevent that person from serving on the jury in a death penalty case. Jurors could be disqualified only if prosecutors could show that the juror's attitude toward capital punishment would prevent him or her from making an impartial decision about the punishment.

    In 1971, the Supreme Court again addressed the problems associated with the role of jurors and their discretion in capital cases. The Court decided Crampton v. Ohio and McGautha v. California (consolidated under 402 U.S. 183). The defendants argued it was a violation of their Fourteenth Amendment right to due process for jurors to have unrestricted discretion in deciding whether the defendants should live or die, and such discretion resulted in arbitrary and capricious sentencing. Crampton also argued that it was unconstitutional to have his guilt and sentence determined in one set of deliberations, as the jurors in his case were instructed that a first-degree murder conviction would result in a death sentence. The Court, however, rejected these claims, thereby approving of unfettered jury discretion and a single proceeding to determine guilt and sentence. The Court stated that guiding capital sentencing discretion was "beyond present human ability."

    Suspending the Death Penalty

    The issue of arbitrariness of the death penalty was again brought before the Supreme Court in 1972 in Furman v. Georgia, Jackson v. Georgia, and Branch v. Texas (known collectively as the landmark case Furman v. Georgia (408 U.S. 238)). Furman, like McGautha, argued that capital cases resulted in arbitrary and capricious sentencing. Furman, however, was a challenge brought under the Eighth Amendment, unlike McGautha, which was a Fourteenth Amendment due process claim. With the Furman decision the Supreme Court set the standard that a punishment would be "cruel and unusual" if it was too severe for the crime, if it was arbitrary, if it offended society's sense of justice, or if it was not more effective than a less severe penalty.

    In 9 separate opinions, and by a vote of 5 to 4, the Court held that Georgia's death penalty statute, which gave the jury complete sentencing discretion, could result in arbitrary sentencing. The Court held that the scheme of punishment under the statute was therefore "cruel and unusual" and violated the Eighth Amendment. Thus, on June 29, 1972, the Supreme Court effectively voided 40 death penalty statutes, thereby commuting the sentences of 629 death row inmates around the country and suspending the death penalty because existing statutes were no longer valid.

    Reinstating the Death Penalty

    Although the separate opinions by Justices Brennan and Marshall stated that the death penalty itself was unconstitutional, the overall holding in Furman was that the specific death penalty statutes were unconstitutional. With that holding, the Court essentially opened the door to states to rewrite their death penalty statutes to eliminate the problems cited in Furman. Advocates of capital punishment began proposing new statutes that they believed would end arbitrariness in capital sentencing. The states were led by Florida, which rewrote its death penalty statute only five months after Furman. Shortly after, 34 other states proceeded to enact new death penalty statutes. To address the unconstitutionality of unguided jury discretion, some states removed all of that discretion by mandating capital punishment for those convicted of capital crimes. However, this practice was held unconstitutional by the Supreme Court in Woodson v. North Carolina (428 U.S. 280 (1976)).

    Other states sought to limit that discretion by providing sentencing guidelines for the judge and jury when deciding whether to impose death. The guidelines allowed for the introduction of aggravating and mitigating factors in determining sentencing. These guided discretion statutes were approved in 1976 by the Supreme Court in Gregg v. Georgia (428 U.S. 153), Jurek v. Texas (428 U.S. 262), and Proffitt v. Florida (428 U.S. 242), collectively referred to as the Gregg decision. This landmark decision held that the new death penalty statutes in Florida, Georgia, and Texas were constitutional, thus reinstating the death penalty in those states. The Court also held that the death penalty itself was constitutional under the Eighth Amendment.

    In addition to sentencing guidelines, three other procedural reforms were approved by the Court in Gregg. The first was bifurcated trials, in which there are separate deliberations for the guilt and penalty phases of the trial. Only after the jury has determined that the defendant is guilty of capital murder does it decide in a second trial whether the defendant should be sentenced to death or given a lesser sentence of prison time. Another reform was the practice of automatic appellate review of convictions and sentence. The final procedural reform from Gregg was proportionality review, a practice that helps the state to identify and eliminate sentencing disparities. Through this process, the state appellate court can compare the sentence in the case being reviewed with other cases within the state, to see if it is disproportionate.

    Because these reforms were accepted by the Supreme Court, some states wishing to reinstate the death penalty included them in their new death penalty statutes. The Court, however, did not require that each of the reforms be present in the new statutes. Therefore, some of the resulting new statutes include variations on the procedural reforms found in Gregg.

    The ten-year moratorium on executions that had begun with the Jackson and Witherspoon decisions ended on January 17, 1977, with the execution of Gary Gilmore by firing squad in Utah. Gilmore did not challenge his death sentence. That same year, Oklahoma became the first state to adopt lethal injection as a means of execution, though it would be five more years until Charles Brooks became the first person executed by lethal injection in Texas on December 7, 1982.
     
    Limiting the Death Penalty

    Creation of International Human Rights Doctrines

    In the aftermath of World War II, the United Nations General Assembly adopted the Universal Declaration of Human Rights. This 1948 doctrine proclaimed a "right to life" in an absolute fashion, any limitations being only implicit. Knowing that international abolition of the death penalty was not yet a realistic goal in the years following the Universal Declaration, the United Nations shifted its focus to limiting the scope of the death penalty to protect juveniles, pregnant women, and the elderly.

    During the 1950s and 1960s subsequent international human rights treaties were drafted, including the International Covenant on Civil and Political Rights, the European Convention on Human Rights, and the American Convention on Human Rights. These documents also provided for the right to life, but included the death penalty as an exception that must be accompanied by strict procedural safeguards. Despite this exception, many nations throughout Western Europe stopped using capital punishment, even if they did not, technically, abolish it. As a result, this de facto abolition became the norm in Western Europe by the 1980s. (Schabas, 1997)

    Limitations within the United States

    Despite growing European abolition, the U.S. retained the death penalty, but established limitations on capital punishment.

    In 1977, the United States Supreme Court held in Coker v. Georgia (433 U.S. 584) that the death penalty is an unconstitutional punishment for the rape of an adult woman when the victim was not killed. Other limits to the death penalty followed in the next decade.
    •Mental Illness and Intellectual Disability
    In 1986, the Supreme Court banned the execution of insane persons and required an adversarial process for determining mental competency in Ford v. Wainwright (477 U.S. 399). In Penry v. Lynaugh (492 U.S. 302 (1989)), the Court held that executing persons with "mental retardation" was not a violation of the Eighth Amendment. However, in 2002 in Atkins v. Virginia (536 U.S. 304), the Court held that a national consensus had evolved against the execution of the "mentally retarded" and concluded that such a punishment violates the Eighth Amendment's ban on cruel and unusual punishment.

    •Race
    Race became the focus of the criminal justice debate when the Supreme Court held in Batson v. Kentucky (476 U.S. 79 (1986)) that a prosecutor who strikes a disproportionate number of citizens of the same race in selecting a jury is required to rebut the inference of discrimination by showing neutral reasons for the strikes.
    Race was again in the forefront when the Supreme Court decided the 1987 case, McCleskey v. Kemp (481 U.S. 279). McCleskey argued that there was racial discrimination in the application of Georgia's death penalty, by presenting a statistical analysis showing a pattern of racial disparities in death sentences, based on the race of the victim. The Supreme Court held, however, that racial disparities would not be recognized as a constitutional violation of "equal protection of the law" unless intentional racial discrimination against the defendant could be shown.

    •Juveniles
    In the late 1980s, the Supreme Court decided three cases regarding the constitutionality of executing juvenile offenders. In 1988, in Thompson v. Oklahoma (487 U.S. 815), four Justices held that the execution of offenders aged fifteen and younger at the time of their crimes was unconstitutional. The fifth vote was Justice O'Connor's concurrence, which restricted Thompson only to states without a specific minimum age limit in their death penalty statute. The combined effect of the opinions by the four Justices and Justice O'Connor in Thompson is that no state without a minimum age in its death penalty statute can execute someone who was under sixteen at the time of the crime.
    The following year, the Supreme Court held that the Eighth Amendment does not prohibit the death penalty for crimes committed at age sixteen or seventeen. (Stanford v. Kentucky, and Wilkins v. Missouri (collectively, 492 U.S. 361)). At present, 19 states with the death penalty bar the execution of anyone under 18 at the time of his or her crime.

    In 1992, the United States ratified the International Covenant on Civil and Political Rights. Article 6(5) of this international human rights doctrine requires that the death penalty not be used on those who committed their crimes when they were below the age of 18. In doing so, however, the U.S. reserved the right to execute juvenile offenders. The United States is the only country with an outstanding reservation to this Article. International reaction has been highly critical of this reservation, and ten countries have filed formal objections to the U.S. reservation.

    In March 2005, in Roper v. Simmons, the United States Supreme Court declared the practice of executing defendants whose crimes were committed as juveniles unconstitutional.


    Additional Death Penalty Issues

    Innocence

    The Supreme Court addressed the constitutionality of executing someone who claimed actual innocence in Herrera v. Collins (506 U.S. 390 (1993)). Although the Court left open the possibility that the Constitution bars the execution of someone who conclusively demonstrates that he or she is actually innocent, the Court noted that such cases would be very rare. The Court held that, in the absence of other constitutional violations, new evidence of innocence is no reason for federal courts to order a new trial. The Court also held that an innocent inmate could seek to prevent his execution through the clemency process, which, historically, has been "the 'fail safe' in our justice system." Herrera was not granted clemency, and was executed in 1993.

    Since Herrera, concern regarding the possibility of executing the innocent has grown. Currently, over 115 people in 25 states have been released from death row because of innocence since 1973. In November, 1998 Northwestern University held the first-ever National Conference on Wrongful Convictions and the Death Penalty, in Chicago, Illinois. The Conference, which drew nationwide attention, brought together 30 of these wrongfully convicted inmates who were exonerated and released from death row. Many of these cases were discovered not as the result of the justice system, but instead as the result of new scientific techniques, investigations by journalism students, and the work of volunteer attorneys. These resources are not available to the typical death row inmate.

    In January 2000, after Illinois had released 13 innocent inmates from death row in the same time that it had executed 12 people, Illinois Governor George Ryan declared a moratorium on executions and appointed a blue-ribbon Commission on Capital Punishment to study the issue.


    Public Support

    Support for the death penalty has fluctuated throughout the century. According to Gallup surveys, in 1936 61% of Americans favored the death penalty for persons convicted of murder. Support reached an all-time low of 42% in 1966. Throughout the 70s and 80s, the percentage of Americans in favor of the death penalty increased steadily, culminating in an 80% approval rating in 1994.

    A May 2004 Gallup Poll found that a growing number of Americans support a sentence of life without parole rather than the death penalty for those convicted of murder. Gallup found that 46% of respondents favor life imprisonment over the death penalty, up from 44% in May 2003. During that same time frame, support for capital punishment as an alternative fell from 53% to 50%. The poll also revealed a growing skepticism that the death penalty deters crime, with 62% of those polled saying that it is not a deterrent. These percentages are a dramatic shift from the responses given to this same question in 1991, when 51% of Americans believed the death penalty deterred crime and only 41% believed it did not. Only 55% of those polled responded that they believed the death penalty is implemented fairly, down from 60% in 2003. When not offered an alternative sentence, 71% supported the death penalty and 26% opposed. The overall support is about the same as that reported in 2002, but down from the 80% support in 1994. (Gallup Poll News Service, June 2, 2004). (See also DPIC's report, Sentencing for Life: Americans Embrace Alternatives to the Death Penalty)


    Religion

    In the 1970s, the National Association of Evangelicals (NAE), representing more than 10 million conservative Christians and 47 denominations, and the Moral Majority, were among the Christian groups supporting the death penalty. NAE's successor, the Christian Coalition, also supports the death penalty. Today, Fundamentalist and Pentecostal churches support the death penalty, typically on biblical grounds, specifically citing the Old Testament. (Bedau, 1997). The Church of Jesus Christ of Latter-day Saints regards the question as a matter to be decided solely by the process of civil law, and thus neither promotes nor opposes capital punishment.

    Although traditionally also a supporter of capital punishment, the Roman Catholic Church now opposes the death penalty. In addition, most Protestant denominations, including Baptists, Episcopalians, Lutherans, Methodists, Presbyterians, and the United Church of Christ, oppose the death penalty. During the 1960s, religious activists worked to abolish the death penalty, and they continue to do so today.

    In recent years, and in the wake of a recent appeal by Pope John Paul II to end the death penalty, religious organizations around the nation have issued statements opposing the death penalty. Complete texts of many of these statements can be found at www.deathpenaltyreligious.org.


    Women

    Women have, historically, not been subject to the death penalty at the same rates as men. From the first woman executed in the U.S., Jane Champion, who was hanged in James City, Virginia in 1632, to the present, women have constituted only about 3% of U.S. executions. In fact, only ten women have been executed in the post-Gregg era. (Shea, 2004, with updates by DPIC).


    Recent Developments in Capital Punishment

    The Federal Death Penalty

    In addition to the death penalty laws in many states, the federal government has also employed capital punishment for certain federal offenses, such as murder of a government official, kidnapping resulting in death, running a large-scale drug enterprise, and treason. When the Supreme Court struck down state death penalty statutes in Furman, the federal death penalty statutes suffered from the same constitutional infirmities that the state statutes did. As a result, death sentences under the old federal death penalty statutes have not been upheld.

    In 1988, a new federal death penalty statute was enacted for murder in the course of a drug-kingpin conspiracy. The statute was modeled on the post-Gregg statutes that the Supreme Court has approved. Since its enactment, 6 people have been sentenced to death for violating this law, though none has been executed.

    In 1994, President Clinton signed the Violent Crime Control and Law Enforcement Act that expanded the federal death penalty to some 60 crimes, 3 of which do not involve murder. The exceptions are espionage, treason, and drug trafficking in large amounts.

    Two years later, in response to the Oklahoma City Bombing, President Clinton signed the Anti-Terrorism and Effective Death Penalty Act of 1996. The Act, which affects both state and federal prisoners, restricts review in federal courts by establishing tighter filing deadlines, limiting the opportunity for evidentiary hearings, and ordinarily allowing only a single habeas corpus filing in federal court. Proponents of the death penalty argue that this streamlining will speed up the death penalty process and significantly reduce its cost, although others fear that quicker, more limited federal review may increase the risk of executing innocent defendants. (Bohm, 1999 and Schabas, 1997)

    International Abolition

    In the 1980s the international abolition movement gained momentum and treaties proclaiming abolition were drafted and ratified. Protocol No. 6 to the European Convention on Human Rights and its successors, the Inter-American Additional Protocol to the American Convention on Human Rights to Abolish the Death Penalty and the United Nations' Second Optional Protocol to the International Covenant on Civil and Political Rights Aiming at the Abolition of the Death Penalty, were created with the goal of making abolition of the death penalty an international norm.

    Today, the Council of Europe requires new members to undertake and ratify Protocol No. 6. This has, in effect, led to the abolition of the death penalty in Eastern Europe, where only Belarus retains the death penalty. For example, Ukraine, formerly one of the world's leaders in executions, has now halted the death penalty and has been admitted to the Council. South Africa's parliament voted to formally abolish the death penalty, which had earlier been declared unconstitutional by the Constitutional Court. In addition, Russian President Boris Yeltsin signed a decree commuting the death sentences of all of the convicts on Russia's death row in June 1999. (Amnesty International and Schabas, 1997). Between 2000 and 2004, seven additional countries abolished the death penalty for all crimes, and four more abolished the death penalty for ordinary crimes.

    The Death Penalty Today
    In April 1999, the United Nations Human Rights Commission passed the Resolution Supporting Worldwide Moratorium On Executions. The resolution calls on countries which have not abolished the death penalty to restrict their use of it, including not imposing it on juvenile offenders and limiting the number of offenses for which it can be imposed. Ten countries, including the United States, China, Pakistan, Rwanda, and Sudan, voted against the resolution. (New York Times, 4/29/99). Each year since 1997, the United Nations Commission on Human Rights has passed a resolution calling on countries that have not abolished the death penalty to establish a moratorium on executions. In April 2004, the resolution was co-sponsored by 76 UN member states. (Amnesty International, 2004).

    In the United States, the number of death sentences has declined steadily, from about 300 in 1998 to 106 in 2009.

    Presently, more than half of the countries in the international community have abolished the death penalty completely, de facto, or for ordinary crimes. However, 58 countries retain the death penalty, including China, Iran, the United States, and Vietnam, all of which ranked among the world's leaders in executions in 2003. (Amnesty International, 2010)
     
    History of the Death Penalty by Michael H. Reggio

    As far back as the Ancient Laws of China, the death penalty has been established as a punishment for crimes. In the 18th Century BC, the Code of King Hammurabi of Babylon codified the death penalty for twenty-five different crimes, although murder was not one of them. The first death sentence historically recorded occurred in 16th Century BC Egypt, where the wrongdoer, a member of the nobility, was accused of magic and ordered to take his own life. During this period members of the non-nobility were usually killed with an ax.
    In the 14th Century BC, the Hittite Code also prescribed the death penalty. The 7th Century BC Draconian Code of Athens made death the penalty for every crime committed. In the 5th Century BC, the Roman Law of the Twelve Tablets codified the death penalty. Again, the death penalty was different for nobility, freemen and slaves and was punishment for crimes such as the publication of libels and insulting songs, the cutting or grazing of crops planted by a farmer, the burning of a house or a stack of corn near a house, cheating by a patron of his client, perjury, making disturbances at night in the city, willful murder of a freeman or a parent, or theft by a slave. Death was often cruel and included crucifixion, drowning at sea, burial alive, beating to death, and impalement (often used by Nero). The Romans had a curious punishment for parricides (murder of a parent): the condemned was submerged in water in a sack, which also contained a dog, a rooster, a viper and an ape.[1] The most notorious execution of the BC era occurred in about 399 BC, when the Greek philosopher Socrates was required to drink poison for heresy and corruption of youth.[2]

    Mosaic Law codified many capital crimes. In fact, there is evidence that Jews used many different techniques including stoning, hanging, beheading, crucifixion (copied from the Romans), throwing the criminal from a rock, and sawing asunder. Emperor Constantine, after converting to Christianity, abolished crucifixion and other cruel death penalties in the Roman Empire. In 438, the Code of Theodosius made more than 80 crimes punishable by death.

    Britain influenced the colonies more than any other country and has a long history of punishment by death. About 450 BC, the death penalty was often enforced by throwing the condemned into a quagmire. By the 10th Century, hanging from gallows was the most frequent execution method. William the Conqueror opposed taking life except in war, and ordered no person to be hanged or executed for any offense. However, he allowed criminals to be mutilated for their crimes. During the Middle Ages, capital punishment was accompanied by torture. Most barons had a drowning pit as well as gallows, and they were used for major as well as minor crimes. For example, in 1279, two hundred and eighty-nine Jews were hanged for clipping coin. Under Edward I, two gatekeepers were killed because the city gate had not been closed in time to prevent the escape of an accused murderer. Burning was the punishment for women convicted of high treason, while men were hanged, drawn and quartered. Beheading was generally accepted for the upper classes. One could be burned for marrying a Jew. Pressing became the penalty for those who would not confess to their crimes: the executioner placed heavy weights on the victim's chest. On the first day he gave the victim a small quantity of bread, on the second day a small drink of bad water, and so on until he confessed or died. Under the reign of Henry VIII, the number of those put to death is estimated as high as 72,000. Boiling to death was another penalty, approved in 1531, and there are records to show some people were boiled for up to two hours before death took them. When a woman was burned, the executioner tied a rope around her neck when she was tied to the stake. When the flames reached her she could be strangled from outside the ring of fire. However, this often failed and many were literally burnt alive.[4]

    In Britain, the number of capital offenses continually increased until the 1700s, when two hundred and twenty-two crimes were punishable by death. These included stealing from a house in the amount of forty shillings, stealing from a shop the value of five shillings, robbing a rabbit warren, cutting down a tree, and counterfeiting tax stamps. However, juries tended not to convict when the penalty was great and the crime was not. Reforms began to take place. In 1823, five laws passed, exempting about a hundred crimes from the death penalty. Between 1832 and 1837, many capital offenses were swept away. In 1840, there was a failed attempt to abolish all capital punishment. Through the nineteenth and twentieth centuries, more and more capital offenses were abolished, not only in Britain, but also all across Europe, until today only a few European countries retain the death penalty.[5]

    The first recorded execution in the English American colonies was in 1608 when officials executed George Kendall of Virginia for supposedly plotting to betray the British to the Spanish. In 1612, Virginia's governor, Sir Thomas Dale, implemented the Divine, Moral, and Martial Laws that made death the penalty for even minor offenses such as stealing grapes, killing chickens, killing dogs or horses without permission, or trading with Indians. Seven years later these laws were softened because Virginia feared that no one would settle there.[6]

    In 1622, the first legal execution of a criminal, Daniel Frank, occurred in Virginia for the crime of theft.[7] Some colonies were very strict in their use of the death penalty, while others were less so. In Massachusetts Bay Colony the first execution was in 1630, but the earliest capital statutes do not occur until later. Under the Capital Laws of New-England that went into effect between 1636-1647 the death penalty was meted out for pre-meditated murder, sodomy, witchcraft, adultery, idolatry, blasphemy, assault in anger, rape, statutory rape, manstealing, perjury in a capital trial, rebellion, manslaughter, poisoning and bestiality. Early laws were accompanied by a scripture from the Old Testament. By 1780, the Commonwealth of Massachusetts only recognized seven capital crimes: murder, sodomy, burglary, buggery, arson, rape, and treason.[8]

    The New York colony instituted the so-called Duke's Laws of 1665. This directed the death penalty for denial of the true God, pre-meditated murder, killing someone who had no weapon of defense, killing by lying in wait or by poisoning, sodomy, buggery, kidnapping, perjury in a capital trial, traitorous denial of the king's rights or raising arms to resist his authority, conspiracy to invade towns or forts in the colony and striking one's mother or father (upon complaint of both). The two colonies that were more lenient concerning capital punishment were South Jersey and Pennsylvania. In South Jersey there was no death penalty for any crime, and in Pennsylvania only two crimes, murder and treason, were punishable by death.[9]

    However, under the direction of the Crown, harsher penal codes were in force there until 1691. In Pennsylvania, William Penn's Great Act (1682) limited the death penalty to murder and treason, among the mildest codes then passed in the colonies. By 1776, most of the colonies had roughly comparable death statutes which covered arson, piracy, treason, murder, sodomy, burglary, robbery, rape, horse-stealing, slave rebellion, and often counterfeiting. Hanging was the usual sentence. Rhode Island was probably the only colony which decreased the number of capital crimes in the late 1700s.

    Some states were more severe. For example, by 1837, North Carolina required death for the crimes of murder, rape, statutory rape, slave-stealing, stealing bank notes, highway robbery, burglary, arson, castration, buggery, sodomy, bestiality, dueling where death occurs, hiding a slave with intent to free him, taking a free Negro out of state to sell him, bigamy, inciting slaves to rebel, circulating seditious literature among slaves, accessory to murder, robbery, burglary, arson, or mayhem and others. However, North Carolina did not have a state penitentiary and, many said, no suitable alternative to capital punishment.[10]

    The first reforms of the death penalty occurred between 1776 and 1800. Thomas Jefferson and four others, authorized to undertake a complete revision of Virginia's laws, proposed a law that recommended the death penalty for only treason and murder. After a stormy debate the legislature defeated the bill by one vote. The writings of European theorists such as Montesquieu, Voltaire, and Bentham had a great effect on American intellectuals, as did English Quaker prison reformers John Bellers and John Howard.[11]

    On Crimes and Punishment, published in English in 1767 by the Italian jurist Cesare Beccaria, whose exposition on abolishing capital punishment was the most influential of the time, had an especially strong impact. He theorized that there was no justification for the taking of life by the state. He said that the death penalty was "a war of a whole nation against a citizen, whose destruction they consider as necessary, or useful to the general good." He asked: what if it could be shown to be neither necessary nor useful? His essay conceded that the only time a death was necessary was when only one's death could ensure the security of a nation -- which would be rare and only in cases of absolute anarchy or when a nation was on the verge of losing its liberty. He said that the history of using punishment by death (e.g., the Romans, 20 years of Czarina Elizabeth) had not prevented determined men from injuring society and that death was only a "momentary spectacle, and therefore a less efficacious method of deterring others, than the continued example of a man deprived of his liberty...."[12]

    Organizations were formed in different colonies for the abolition of the death penalty and to relieve poor prison conditions. Dr. Benjamin Rush, a renowned Philadelphia citizen, proposed the complete abolition of capital punishment. William Bradford, Attorney General of Pennsylvania, was ordered to investigate capital punishment. In 1793 he published An Enquiry How Far the Punishment of Death is Necessary in Pennsylvania. He strongly insisted that the death penalty be retained, but admitted it was useless in preventing certain crimes. In fact, he said the death penalty made convictions harder to obtain, because in Pennsylvania, and indeed in all states, the death penalty was mandatory and juries would often not return a guilty verdict because of this fact. In response, in 1794, the Pennsylvania legislature abolished capital punishment for all crimes except murder "in the first degree," the first time murder had been broken down into "degrees." In New York, in 1796, the legislature authorized construction of the state's first penitentiary, abolished whipping, and reduced the number of capital offenses from thirteen to two. Virginia and Kentucky passed similar reform bills. Four more states reduced their capital crimes: Vermont in 1797, to three; Maryland in 1810, to four; New Hampshire in 1812, to two; and Ohio in 1815, to two. Each of these states built state penitentiaries. A few states went in the opposite direction. Rhode Island restored the death penalty for rape and arson; Massachusetts, New Jersey, and Connecticut raised the number of death crimes from six to ten, including sodomy, maiming, robbery, and forgery. Many southern states made more crimes capital, especially for slaves.[13]

    The first great reform era occurred between 1833 and 1853. Public executions were attacked as cruel. Sometimes tens of thousands of eager viewers would show up to view hangings; local merchants would sell souvenirs and alcohol. Fighting and pushing would often break out as people jockeyed for the best view of the hanging or the corpse! Onlookers often cursed the widow or the victim and would try to tear down the scaffold or the rope for keepsakes. Violence and drunkenness often ruled towns far into the night after "justice had been served." Many states enacted laws providing for private hangings. Rhode Island (1833), Pennsylvania (1834), New York (1835), Massachusetts (1835), and New Jersey (1835) all abolished public hangings. By 1849, fifteen states were holding private hangings. This move was opposed by many death penalty abolitionists, who thought public executions would eventually cause people to cry out against execution itself. For example, in 1835, Maine enacted what was in effect a moratorium on capital punishment after over ten thousand people who had watched a hanging became unruly, began fighting, and had to be restrained by police. All felons sentenced to death would have to remain in prison at hard labor and could not be executed until one year had elapsed, and then only on the governor's order. No governor ordered an execution under the "Maine Law" for twenty-seven years. Though many states argued the merits of the death penalty, no state went as far as Maine. The most influential reformers were the clergy. Ironically, the small but powerful group which opposed the abolitionists was also clergy. They were, almost to a person, members of the Calvinist clergy, especially the Congregationalists and Presbyterians who could be called the religious establishment of the time. They were led by George Cheever.[14]

    Finally, in 1846, Michigan became the first state to abolish the death penalty (except for treason against the state), mostly because it had no long tradition of capital punishment (there had been no hanging since 1830, before statehood) and because frontier Michigan had few established religious groups to oppose abolition, as was the case in the east. In 1852, Rhode Island abolished the death penalty in a campaign led by Unitarians, Universalists, and especially Quakers. In the same year, Massachusetts limited its death penalty to first-degree murder. In 1853, Wisconsin abolished the death penalty after a gruesome execution in which the victim struggled for five minutes at the end of the rope, and a full eighteen minutes passed before his heart finally quit.[15]

    During the last half of the century the death penalty abolition movement ground to a halt, with many members moving into the slavery abolition movement. At the same time, states began to pass laws against mandatory death sentences. Legislators in eighteen states shifted from mandatory to discretionary capital punishment by 1895, not to save lives, but to try to increase convictions and executions of murderers. Still, abolitionists gained a few victories. Maine abolished the death penalty, restored it, and then abolished it again between 1876 and 1887. Iowa abolished the death penalty for six years. Kansas passed a "Maine Law" in 1872 which operated as de facto abolition.[16]

    Electrocution as a method of execution came onto the scene in an unlikely manner. The Edison Company, with its DC (direct current) electrical systems, began attacking the Westinghouse Company and its AC (alternating current) electrical systems as Westinghouse pressed for nationwide electrification with alternating current. To show how dangerous AC could be, the Edison Company staged public demonstrations electrocuting animals. People reasoned that if electricity could kill animals, it could kill people. In 1888, New York approved the dismantling of its gallows and the building of the nation's first electric chair. It held its first victim, William Kemmler, in 1890, and even though the first electrocution was clumsy at best, other states soon followed the lead.[17]

    The Second Great Reform era was 1895-1917. In 1897, the U.S. Congress passed a bill reducing the number of federal death crimes. In 1907, Kansas took the "Maine Law" a step further and abolished all death penalties. Between 1911 and 1917, eight more states abolished capital punishment (Washington, Minnesota, North Dakota, South Dakota, Oregon, Arizona, Missouri and Tennessee -- the latter in all cases but rape). Votes in other states came close to ending the death penalty.

    However, between 1917 and 1955, the death penalty abolition movement again slowed. In 1919-20, Washington, Arizona, and Oregon reinstated the death penalty. In 1924, the first execution by cyanide gas took place in Nevada, when Tong war gang murderer Gee Jon became its first victim. The state wanted to secretly pump cyanide gas into Jon's cell at night while he was asleep as a more humane way of carrying out the penalty, but technical difficulties prevented this and a special "gas chamber" was hastily built. Other concerns developed when less "civilized" methods of execution failed. In 1930, Mrs. Eva Dugan became the first woman to be executed by Arizona. The execution was botched when the hangman misjudged the drop and Mrs. Dugan's head was ripped from her body. More states converted to electric chairs and gas chambers. During this period of time, abolitionist organizations sprang up all across the country, but they had little effect. There were a number of stormy protests against the execution of certain convicted felons (e.g., Julius and Ethel Rosenberg), but little opposition against the death penalty itself. In fact, during the anti-Communist period with all its fears and hysteria, Texas Governor Allan Shivers seriously suggested that capital punishment be the penalty for membership in the Communist Party.[18]

    The movement against capital punishment revived again between 1955 and 1972.

    England and Canada completed exhaustive studies which were largely critical of the death penalty, and these were widely circulated in the U.S. Death row criminals gave their own moving accounts of capital punishment in books and film. Convicted kidnapper Caryl Chessman published Cell 2455 Death Row and Trial by Ordeal. Barbara Graham's story was told in the book and film I Want to Live! after her execution. Television programs about the death penalty were broadcast. Hawaii and Alaska ended capital punishment in 1957, and Delaware did so the next year. Controversy over the death penalty gripped the nation, forcing politicians to take sides. Delaware restored the death penalty in 1961. Michigan abolished capital punishment for treason in 1963. Voters in 1964 abolished the death penalty in Oregon. In 1965 Iowa, New York, West Virginia, and Vermont ended the death penalty. New Mexico abolished the death penalty in 1969.[19]

    Trying to end capital punishment state-by-state was difficult at best, so death penalty abolitionists turned much of their efforts to the courts. They finally succeeded on June 29, 1972, in the case Furman v. Georgia. In nine separate opinions, but with a majority of 5-4, the U.S. Supreme Court ruled that the way capital punishment laws were written, including discriminatory sentencing guidelines, made capital punishment cruel and unusual in violation of the Eighth and Fourteenth Amendments. This effectively ended capital punishment in the United States. Advocates of capital punishment began proposing new capital statutes which they believed would end discrimination in capital sentencing, thereby satisfying a majority of the Court. By early 1975, thirty states had again passed death penalty laws and nearly two hundred prisoners were on death row. In Gregg v. Georgia (1976), the Supreme Court upheld Georgia's newly passed death penalty and said that the death penalty was not always cruel and unusual punishment. Death row executions could again begin. Another form of execution was soon found. Oklahoma passed the first death by lethal injection law, based on economics as much as humanitarian reasons. The old electric chair that had not been used in eleven years would require expensive repairs. Estimates of over $200,000 were given to build a gas chamber, while lethal injection would cost no more than ten to fifteen dollars "per event."[20]

    The controversy over the death penalty continues today. There is a strong movement against lawlessness propelled by citizens' fears for their security. Politicians at the national and state levels are taking the floor of legislatures and calling for more frequent death penalties, the death penalty for more crimes, and longer prison sentences. Those opposing these moves counter by arguing that tougher sentences do not slow crime and that crime is little or no worse than in the past. In fact, FBI statistics show murder rates have changed little (for example, 9.3 persons per 100,000 population were murdered in 1973 and 9.4 per 100,000 in 1992). The battle lines are still drawn and the combat will probably always be fought.[21]

    A number of important capital punishment decisions have been made by the Supreme Court. The following is a list of the more important ones along with their legal citations:

    Wilkerson v. Utah 99 U.S. 130 (1878) -- The Court upheld execution by firing squad, but said that other types of torture such as "drawing and quartering, embowelling alive, beheading, public dissection, and burning alive and all others in the same line of...cruelty, are forbidden."

    Weems v. U.S. 217 U.S. 349 (1910) -- Court held that what constitutes cruel and unusual punishment had not been decided, but that it should not be confined to the "forms of evil" that framers of the Bill of Rights had experienced. Therefore, "cruel and unusual" definitions are subject to changing interpretations.

    Louisiana ex rel. Francis v. Resweber 329 U.S. 459 (1947) -- On May 3, 1946, convicted seventeen-year-old felon Willie Francis was placed in the electric chair and the switch was thrown. Due to faulty equipment, he survived (even though he was severely shocked), was removed from the chair and returned to his cell. A new death warrant was issued six days later. The Court ruled 5-4 that it was not "cruel and unusual" to finish carrying out the sentence since the state had acted in good faith in the first attempt. "The cruelty against which the Constitution protects a convicted man is cruelty inherent in the method of punishment," said the Court, "not the necessary suffering involved in any method employed to extinguish life humanely." He was then executed.

    Trop v. Dulles 356 U.S. 86 (1958) -- The Court ruled that a punishment would be considered "cruel and unusual" if it was one of "tormenting severity" or cruel in its excessiveness or unusualness, and that the Eighth Amendment "must draw its meaning from the evolving standards of decency that mark the progress of a maturing society."

    Furman v. Georgia 408 U.S. 238 (1972) -- The Court, looking at three cases, struck down the death penalty in many states and set up the standard that punishment would be considered "cruel and unusual" if any of the following were present: 1) it was too severe for the crime; 2) it was arbitrary (some get the punishment and others do not, without guidelines); 3) it offends society's sense of justice; 4) it was not more effective than a less severe penalty.

    Gregg v. Georgia 428 U.S. 153 (1976) -- [The] Court upheld Georgia's newly passed death penalty and said that the death penalty was not always cruel and unusual punishment.

    Tison v. Arizona 481 U.S. 137 (1987) -- [The] Court upheld Arizona's death penalty for major participation in a felony with "reckless indifference to human life."

    Thompson v. Oklahoma 108 S. Ct. 2687 (1988) -- The Court considered the question of the execution of minors under the age of 16 at the time of the murder. The victim was Thompson's brother-in-law, whom he accused of beating his sister. Thompson and three others beat the victim, shot him twice, cut his throat, chest, and abdomen, chained him to a concrete block, and threw the body into a river, where it remained for four weeks. Each of the four participants was tried separately and all were sentenced to death. In a 5-3 decision, four Justices ruled that Thompson's death sentence was cruel and unusual. The fifth, O'Connor, concurred but noted that a state must set a minimum age, and held out the possibility that if a state lowers, by statute, the minimum death penalty age below sixteen, she might support it. She stated, "Although I believe that a national consensus forbidding the execution of any person for a crime committed before the age of 16 very likely does exist, I am reluctant to adopt this conclusion as a matter of constitutional law without better evidence than we now possess." States with no minimum age have rushed to specify a statutory age.

    Penry v. Lynaugh 492 U.S. [sic] (1989) -- [The] Court held that persons considered retarded, but legally sane, could receive the death penalty. It was not cruel and unusual punishment under the Eighth Amendment if jurors were given the opportunity to consider mitigating circumstances. In this case, the defendant had the mental age of approximately a six-year-old child.
     
    At One Point in Time, up to 50% of US Dollar Bills Were Estimated to be Counterfeit

    The year was 1865, and just months earlier the US Civil War had ended with the surrender of the Confederacy. With the war over, the problem of counterfeiting had become rampant, with one third to one half of all American currency judged to be fake.

    The reasons for this were twofold: first, spending due to the war had spiraled out of control, and second, the country had just shifted to a uniform paper currency, replacing an earlier system in which more than a thousand government-sanctioned banks issued some 10,000 different types of currency. Unfamiliar with the new bills and assuming the uniform currency would end counterfeiting once and for all, Americans were too trusting of anything that looked like the new federal currency.

    To alleviate the problem of counterfeiting, the US Secret Service was established as a new division of the US Treasury. Interestingly enough, some of the first agents hired to track down counterfeiters were men who had been imprisoned for forgeries themselves.

    Ironically, during the war, the Union had encouraged counterfeiting of Confederate money in hopes that flooding the market with large amounts of it would decrease its value. Doubly ironically, the Union-created counterfeits were often of higher quality than the originals.


    Tug of War Used to be an Olympic Sport

    Although nowadays it’s better known as a game that kids play at summer camp, tug of war used to be an actual Olympic sport, with ten countries having competed for the coveted gold medal in the early 20th century.

    The rules were simple: drag the rope far enough in your team’s direction and you won. The number of team members varied slightly from Games to Games, with each team consisting of between five and eight members. Teams wore matching uniforms for the competition, which was considered part of the track and field events.

    The sport was contested at five Summer Olympics from 1900 to 1920, although only a handful of countries competed each time. Still, it kept being included (although today it would likely be dropped for lack of participation). Great Britain won the most cumulative medals, with five in all.

    Like any other game, tug of war involved a definite strategy. In 1908, the American team boycotted the competition, accusing their British counterparts of cheating by using boots with spikes on them to dig into the ground.


    Disobeying Direct Orders, British & German Soldiers Forged Their Own Christmas Truce

    Although they were officially still at war, many British and German soldiers disobeyed orders and had their own impromptu truce on Christmas 1914. The two groups of soldiers, who had for months been cooped up in their respective trenches in the freezing cold, climbed up on the battlefield without their weapons, and met each other.

    The Germans and British sang carols together, exchanged small gifts, drank, and even engaged in games of soccer (football).

    Soldiers from both sides also used the opportunity to collect their dead who lay strewn on the battlefield and bury them.

    The truce occurred after Pope Benedict XV had called for one earlier that year. But higher-ups on both sides disagreed with his suggestion and opposed the truce.

    British commander Gen. Sir Horace Smith-Dorrien had this to say of the truce: “To finish this war quickly, we must keep up the fighting spirit and do all we can to discourage friendly intercourse.” Similarly, a young German corporal named Adolf Hitler complained, “Such things should not happen in wartime. Have you Germans no sense of honor left at all?”

    Such official outrage didn’t stop the troops in later years, however, when smaller truces occurred, though commanders ordered artillery strikes carried out on Christmas Eve in hopes of keeping opposing soldiers from getting to know each other.


    Lysol Used to be Advertised as a Feminine Hygiene Product and Birth Control

    Yes, the disinfectant more commonly known today as a toilet bowl cleaner was once suggested for vaginal use. Talk about versatile!

    Although it was always intended for household cleaning, from the 1920s up until the ’60s, Lysol was largely marketed for personal bodily use, rather than disinfecting doorknobs or coffee tables like we see in today’s advertising for the product. Ads suggested that women use the cleaner as a douche fluid for everyday cleaning, and even as a form of birth control for use directly after sex (the disinfectant would kill sperm, the advertising suggested).

    According to Lysol ads of the day, this use of the product was endorsed by European doctors, who, it was later revealed by the American Medical Association, did not actually exist.

    According to the 2002 book Devices and Desires: A History of Contraceptives in America, “By 1940, the commercial douche had become the most popular birth control method in the country, favored by women of all classes. It would remain the leading female contraceptive until 1960, when a breakthrough technology– oral contraceptives– knocked it off its lofty pedestal… [T]he most popular brand, Lysol disinfectant, were soap solutions containing cresol… which, when used in too high a concentration, caused severe inflammation, burning, and even death.”

    Devices and Desires further states that “Lysol was a caustic poison and in more concentrated form was retailed with a prominent skull-and-crossbones icon. Ingested, it could kill; applied externally, it irritated and burned. Lehn & Fink sold it for feminine hygiene anyway, ignoring a recommendation made by the 1912 Council on Pharmacy and Chemistry of the AMA…”

    Several women reportedly died after using the product as directed. The worst part? Lysol didn’t even work as a contraceptive: a 1933 study found that 250 of the 507 women using the disinfectant got pregnant, roughly the rate that would be expected with no birth control at all.
     
  17. StrangerInAStrangeLand SubQuantum Mechanic Valued Senior Member

    What is a dude?

    How the strange history of the ‘dude’ helps throw a light on why the West still feels like the real America

    by Anne Helen Petersen

    The best part about a dude ranch takes place behind the scenes. Having been up since dawn collecting the horses for the day’s rides, the wranglers eat breakfast. Their plates are heaping, and the portions fit for a teenage boy, which several of them are. And they put Tabasco on everything – eggs, sure; hashbrowns, of course; but pancakes also get a liberal sprinkling.

    When the wranglers first walk into the mess hall, they’re rowdy, boisterous, the sounds of their boots like a steady downbeat. They push each other through the line, chiding anyone who takes too long dishing up, but when everyone sits down and begins to eat, a near-silence overtakes the room. You can smell the land coming off these dozen still-sleepy-eyed men and women, a mix of hay and soil and what I can only describe as the scent of early mountain morning.

    Watching the wranglers eat is what I remember most vividly from my time on a dude ranch in the Wyoming mountains, but I was always observing from across the room or down the table, where the rest of the ‘help’ sat working our way through fruit salad, cottage cheese and cereal. We might have been up since the same dawn as the wranglers, but our work created a much smaller hunger. We smelled like the breakfasts we had laid out; they smelled like the West.

    The ranch where I worked that summer was a ‘guest ranch’, one of hundreds that dot the mountains of the American West and Southwest. In the 19th and early 20th century, working ranches would take on guests from the East during the tourist season as a source of supplementary income; in those days they were still raising and slaughtering cattle or sheep as their primary business, with just a little hospitality on the side.

    Why would genteel Easterners have paid to forgo the comforts of home? For much the same reason that we go to the West today: a longing to escape modernity. Modernity at the turn of the 20th century looked more like trolleys and telegrams than the traffic jams and cell phones of today, but the impulse was the same: go west, and find escape – a return to a bygone way of life and, by extension, a return to the essential America.

    This early-onset nostalgia was an outcome of Frederick Jackson Turner’s now-famous ‘Frontier Thesis’, delivered in 1893 and popularised in the years to come, which made two influential arguments: that American democracy was predicated on the presence and civilising effect of the ‘frontier’, and that, as of the 1890 census, the frontier was ‘closed’. In other words, the very thing that had structured the development of the American ethos was disappearing: see it, preserve it, experience it while you can.

    The continued spread of the National Park system, the cultivation and success of the Teddy Roosevelt cowboy persona, the rise of the Western in films such as The Great Train Robbery (1903) and dime novels (think Zane Grey), and the massively successful ‘See America First’ tourism campaign can all be viewed as reactions to Turner’s thesis, as can the spread of the dude ranch industry, which formalised itself with the 1926 creation of the Dude Ranchers’ Association, or DRA.

    According to Columbus State geographer Amanda Rees, the DRA laboured to regulate the structure, image, and even the guest ‘quality’ of the dude ranches. Following sharp declines in the price of beef after the First World War, many traditional ranchers incorporated and accentuated the mythology of the West that had percolated around them for decades. As the Western writer Maxwell Struthers Burt explained in 1924, the dude ranch was ‘an ordinary ranch amplified’. These ranches, argues Rees, promoted isolation ‘as an asset rather than a liability’ while creating an exclusive and intimate ‘dude experience’ that included the sort of hands-on activity that would make a guest feel like he was part of the Old West – coupled with the service that wealthy Easterners would expect. Put differently, the wranglers and guides always acted the part of Westerner as friend, but a friend who was in service to the well-paying Dude.

    The DRA insisted that the term ‘dude’ was used with affection – a change from its original use, in the 1880s, when, according to the OED, it was slang for a ‘new kind of American man’ who ‘affects English dress and the English drawl’. A man of ‘exaggerated fastidiousness… Given to ridicule… A swell.’ Apply the concept to the West, and you have the man who purchases a full Western get-up from Abercrombie & Fitch (originally a Western outfitter) and fancies himself a cowboy. A dude, then, is a fake but a fake with means.

    Those means were crucial to the survival of the industry, and the ranches worked hard to cultivate a very specific sort of clientele, distinguishing ‘guests’ from the swarms of ‘tourists’ who clogged the trains and national parks. Many ranches required a letter of reference from a previous guest; it went unspoken that Jews and non-white races were unwelcome. The real ‘prizes’, vaunted in DRA newsletters, were royalty and other dignitaries from Europe, which further distinguished the dude ranch from the typical Western attraction.

    Over the course of the 20th century, the dude ranch became a fixture in the mythology it was helping to propagate. Today, the hundred-plus-member DRA lives on, trumpeting the four Hs of dude ranching (‘Horses, Hats, History, and Hospitality’) and advising potential dudes that ‘There is a little cowboy in all of us… come find yours today.’ That’s the call I heard when, as a senior in college, I decided I wanted my first experience out of school to be something that returned me to my Western roots.

    Which is to say that I wanted to return to a West that I hadn’t ever really known. Everything about the dude ranch seemed clean and palatable, whereas my Northern Idaho town was filled with double-wide trailers, rusted-out trucks sinking into the ground, and men whose Copenhagen snuff containers had worn white rings on the back pockets of their Wranglers.

    Even that West wasn’t really mine. My kindergarten class pictures feature my beloved pair of cowboy boots, but my parents were from the Midwest, really just playing dress-up with the accoutrements of Western culture. Even as I learned to refer to all vehicles as ‘rigs’ and how to count points on the racks of antlers that filled the walls of my friends’ homes, I was like a junior anthropologist, storing away observations for later use.

    When I went to college, I became the closest that my new city friends had to a rural informant. Those details came in handy as I could tell stories about the parking lot reserved for ‘hicks’ and their massive trucks; the guys who’d spit their tobacco juice directly onto the classroom floor; the week we got off school for the beginning of hunting season; the rodeo team. To some extent, we all turn our lives into a series of anecdotes, but I was telling stories of a life that had, in most ways, rejected me: I drove a 1990 Subaru and was known around school as a ‘ball-buster’. I wasn’t asked to Prom.

    What do we do to the things that don’t want us? We fetishise them, turning them into objects of desire, devoting ourselves, often masochistically, to the impossible task of obtaining that which rejects us. That’s exactly what I did to that idea of the West in the stories I told to my college friends, to my boyfriend, to my French host mother while studying abroad. I fetishised it so thoroughly that when it came time to think about what I wanted to do in the months following graduation, I decided to return to the source.

    I wanted the dude ranch: something rugged and remote, some place where the beer weight accumulated in the last months of college would slough off me as I woke before dawn to stoke the fire and wash my face in the mountain stream: the raw materials of which the most impressive anecdotes were made.

    I found a dozen of these ranches clustered around the Tetons, the jagged mountain range in the Northwest corner of Wyoming named, according to lore, by a French-Canadian trapper who saw the image of a woman’s ‘tetons’ in the landscape before him. Grand Teton National Park skirts the town of Jackson, the sort of Wild West town reverse-engineered for maximum tourist enjoyment. It has faux-Western store fronts made of new wood painted to look old, a Million Dollar Cowboy Bar, and a town square fenced with tangles of old antlers, where actors stage a shootout at high noon every Saturday. There were also a Thai restaurant and a spa, five-star hotels, and the home of the then vice president, Dick Cheney.

    There were also remnants of a different time and vibe: hotels from the 1970s, old pancake houses. These places stayed in business by catering to the middle-class families that arrived in the National Park by trailer or car, laden with massive coolers packed with cold-cuts and generic soda. These were the descendants of the tourists the original dude ranchers had worked so hard to avoid. They devoured the West from the windows of their vehicles, making an interminable cavalcade that inched through the national park like a constipated snake.

    As before, the tourist isn’t welcome at the dude ranch. Dudes don’t arrive by family car. Instead, they arrive in Jackson via the airport, where they are picked up by eager ranch employees who shuttle them back to the ranch in air-conditioned comfort, a chilled bottle of water by every seat.

    All mountain-based dude ranches have a few things in common: horses, of course; wranglers to wrangle them, certainly; but also a catchy name, almost always associated with the ‘brand’ of the ranch – literally, the unique symbol of the ranch in the form of a molded piece of iron that’s shaped, heated, and applied to the flesh of animals to indicate ownership.

    Brands were used to keep one man’s cattle and horses from another man’s, even when the two shared grazing grounds. But modern dude ranches don’t raise cattle, and the horses they keep are used for lackadaisical trail rides, not transport and cattle work. So the livestock brand morphs readily into a contemporary capitalist brand: the symbol embroidered on the denim shirts for sale in the giftshop, just $69.99 apiece. Circle R, T-Cross, Heart Six: the iterations are endless.

    My ranch, the Triangle C, was nested in a crook in the Wind River, an hour outside of Jackson on the way to the town of Dubois. A century earlier, it had been the Tie Hack camp, where hundreds of Scandinavian immigrants had felled the trees or ‘tied the hacks’ that would become the railroad that would, decades later, be replaced by the highway. Their old cabins – riddled with gunshot, supposedly from the long tedium of the Wyoming winter – still stood, with ramshackle retrofitted windows looking out onto the river below. Some even had bathrooms – spider-filled, mouldering bathrooms – and that’s where we staff stayed, bunked in, two to three.

    From afar, the staff cabins looked rustic and quaint. But there was a reason they were located half a mile from the guest cabins, which were luxurious in the unique manner that only multiple animal skins and heads on the wall can convey. The guest bedrooms were filled with ‘dude ranch vernacular’: huge log beds; expensive, scratchy Pendleton blankets; bad landscape paintings of cowboys riding into the sunset; and framed signs kindly requesting guests to ‘Take a Load Off, Pardner’. Shelves and coffee tables were filled with old licence plates and rusted-out farm gear; when I asked the head housekeeper where they came from, she guffawed something that sounded a lot like ‘eBay’.

    I know what the insides of the guest cabins looked like because it was my job to clean them. Also: do the laundry, prep the meals, work as a waitress, and be an all-around amiable young woman at home in the West. A dress code stipulated that we must, at all times, wear a branded Triangle C button-down shirt, all straight from 1995, and ‘Western Wear’, which usually took the form of tight jeans, a belt buckle, and a handkerchief tied around my hair.

    Along with the other four girls between the ages of 18 and 22 on staff, I woke at 5:45am, prepped and served breakfast, cleaned cabins while the guests were out on trail rides, prepped lunch, took a nap, swam in the river, prepped and served dinner, and often fell into bed by 9pm. The family that owned the ranch was Mormon, which meant no alcohol, save the expensive wine for guests. We were 20 minutes from Dubois, which offered squaredancing on Tuesday nights, but you had to open three horse gates just to get your car out of the pasture. The inertia towards the pillow was strong.

    The wranglers, however, had a far sexier day – at least it looked that way to me. Their breakfasting, still so vivid in my mind today, was like an outtake from one of hundreds of Western B-movies. And unlike us college student interlopers, these kids were the real deal. One was a rodeo champion from western Nebraska who arrived in a massive truck, paid for with roping winnings, towing her prize gelding. Another, raised outside of a small Wyoming town the very inverse of Jackson, had a face like Gary Cooper and a body made for chaps. He had no clue and perfect manners, which also meant that he always received the best tips. And then there were the owners’ daughters: four young women with perfect hair and teeth and skin. They looked great in cowboy hats and sang in harmony at the weekly cook-outs.

    Performance was in their blood. Their father ran the ranch, but their mother ran the Jackson Hole Playhouse, which operated as a sort of dinner-theatre-meets-community-musical: lots of Paint Your Wagon and Oklahoma! Each of the eight golden-haired children rotated between the playhouse and the Triangle C, with only slight variations on the sort of beautiful Westernness they performed on a given night. They were like Von Trapps, only with more twang and a religion they deftly elided around guests.

    I might have been the one changing the guests’ sheets and refilling their water glasses, but the wranglers were the real point of Western contact: pairing them with horses who had names like Steely and Bucket, complimenting their mediocre riding abilities and, on the day-long trail rides, regaling them with tales of 20-foot snow drifts, moose that knocked on cabin doors, and why rotten mayonnaise is a key ingredient in bear-bait.

    Anyone could pitch a tent on the Wind River, or with a view of the Tetons, but it was the wranglers who gave dudes an experience of the West. It certainly wasn’t me or my Dartmouth roommate. We’d been attracted to the summer job for the same reason the guests were: to come back with an experience, a set of stories to wield at cocktail hours. To distinguish ourselves from others and their vacations. To experience authenticity and, by extension, authenticate ourselves.

    The wranglers might have been amplifying their performance of Westernness, but they didn’t take these jobs for the experience. They took them for the chance to get paid to ride horses and, I’d imagine, for the amazing tips. This became clear to me every time one of them teased me for taking pictures at the weekly all-ranch rodeo or a particularly gorgeous sunset over the river. Like the Dudes, I was desperate for evidence of my time on the ranch: for proof that I had been there, that this life had been my own, even if only for three short months before returning to the urban sprawl. For them, this was just life, and people seldom take pictures of the things they feel no need to prove.

    My favourite usage of ‘dude’ is from Saskatchewan’s Prince Albert Times, published in July 1893: ‘The dude is one of those creatures which are perfectly harmless and are a necessary evil to civilization.’ If, circa 1926, the frontier had been closed for 30 years, then the dude – and his or her desire for an experience that no longer existed – animated what little remained of that former civilisation. As the ability of Westerners to make a living off the land itself, rather than the mythology of it, erodes further, the dude animates the West still.

    We go to the West to feel powerful – to feel masculine, whether we identify as male or female – but the only way we can do so is by making the West itself into a passive object: mystical, beautiful, and subject to our gaze. We spend a week rebuilding our flattened libidos – on the trail, on horseback, on the top of a mountain – then return to the East, the urban space, wherever that is not West, restored. Ready, once again, to be robust Americans.

    The problem, of course, is the ethos of domination, exploitation, and wilful blindness that accompanies that relationship. We see the West for the babbling brooks and mountains and trees, not the uncontrollable wildfires, the unchecked spread of beetle kill, the copper and silver mines transformed into Superfund sites. Americans see the West’s usefulness in maintaining some semblance of national identity and prefer not to see the wounds it has sustained for serving that role.

    The Dude Ranch is one of the only ways for many Westerners, especially those committed to staying on the ranch, to survive. But whether or not the dude – and others, like myself – are ‘perfectly harmless’ in our desire to fetishise that way of life... that’s a question best-suited to someone who actually knew how to wear a pair of chaps, not just write about those who wore them well.

    12 June 2014
     
  18. StrangerInAStrangeLand SubQuantum Mechanic Valued Senior Member

    Sunday, Dec 22, 2013 01:00 PM CST

    When beasts were people: The long, strange history of animals in court
    Modern law scoffs at the suggestion that animals have legal rights. That wasn't always the case
    Ben Schreckinger

    A funny thing happened in some New York courts this month. Lawyers filed writs of habeas corpus with three judges seeking to end the allegedly illegal detention of their clients. It would have been unremarkable if the plaintiffs — Tommy, Kiko, Hercules and Leo — hadn’t all been chimpanzees.

    Because they were, the writs made headlines around the world. The judges all dismissed the lawsuits, but the animal rights activists who brought them have vowed to wage a protracted legal campaign. To supporters, the move is a bold step forward for animal justice. To detractors, it’s worse than bad legal theory — it’s just plain bananas. Either way, the prospect of a chimp filing suit in court has been greeted as a total novelty. But, in fact, the idea of treating animals as people before the law isn’t new; it’s just long forgotten.

    The notion of animals taking their human captors to court is at least a millennium old, first set down in writing by a brotherhood of dissident Sunni scholars in 10th-century Iraq. More strikingly, the actual practice of trying animals as defendants in court dates back to at least ancient Athens, and it was common practice in Europe into the 18th century. The history of animals on and at trial, in both thought and deed, is more than just a curiosity. It serves as a reminder that human attitudes toward animals are anything but fixed, and that the current legal paradigm, in which animals are property, is as much a historical curiosity as older systems for treating them.

    For hundreds of years, European animals were regularly tried in court for their alleged misdeeds. Ecclesiastical courts went after rodents and other pests for damaging crops. The locusts, serpents, weevils, rats and flies tried before these courts weren’t just liable to damages and banishment, but also excommunication. Civil criminal courts, meanwhile, tried livestock for violence against human victims. Judging by surviving records, pigs were particularly fond of killing babies, and at least one such pig was dressed in human clothing for her 1386 hanging — raising the question of whether something can be “adorably macabre.” And of course there were the bestiality trials, in which both parties to the coupling faced prosecution.

    The authoritative source on the subject is E.P. Evans’ 1906 tome, “The Criminal Prosecution and Capital Punishment of Animals.” It’s filled with countless unlikely tales, like that of the 16th-century French jurist Barthélemy de Chasseneuz, who showed early signs of brilliance by his defense of a group of rats who were accused of eating a barley crop. When the rats failed to show up for their first court date, he argued that the summons had not been issued widely enough. After a new summons was issued from all the pulpits in the region, and the rats failed to show up for a second court date, de Chasseneuz justified their absence “on the ground of the length and difficulty of the journey and the serious perils which attended it, owing to the unwearied vigilance of their mortal enemies, the cats.”

    The trials were serious affairs, and the four-, six- and non-legged defendants were treated about as well as humans. They were afforded due process, given counsel at the public’s expense, and sometimes even vindicated. In 1750, a French she-ass was caught in flagrante with one Jacques Ferron. At trial, the prior of the local convent and other citizens attested that, having known the she-ass for four years, “they were willing to bear witness that she is in word and deed and in all her habits of life a most honest creature.” The donkey walked; Ferron hanged. When a French sow murdered 5-year-old Jehan Martin, her six piglets were charged as accomplices, having been found at the scene covered in blood. But, for lack of evidence, they escaped conviction.

    Sometimes, animals fared better than humans. In 13th-century Burgundy, horses and oxen who killed people were sold to new owners. Jews, meanwhile, were hung by their hind legs. It’s an alarming reminder that legal categories like white, male and Christian were once as fundamental as human is today; in Shakespeare’s “The Merchant of Venice,” Gratiano accuses Shylock of being infused with the soul of a wolf who had been hanged for murder.

    With the distance of centuries, stories of animal trials make for good entertainment, but they also offer a window into a different mindset. It seems that these medieval villagers and jurists were at once more attuned to animals and more confused about them than we are today. Living side by side with livestock, they understood that cows and pigs are capable of personality and individuality, a truth that’s lost on the modern world, where livestock are treated as commodities. But the trials also betray a naïve pre-Enlightenment anthropomorphism, imputing to animals a humanistic moral agency. It was always the animals that were considered at fault for their actions, not their owners, who sometimes even received compensation when their livestock were executed.

    The medieval legal treatment of animals reflects at least in part an intuitive desire to treat them fairly. But the fact that they enjoyed rights only as defendants reflects also a Christian belief in a spiritual distinction between humans and animals. Channeling Aristotle, Thomas Aquinas concluded that unlike man, animals have no immortal soul. Though very few medieval Europeans would have direct knowledge of Aquinas, it’s likely in part due to his influence that the idea of bringing charges against humans on behalf of animal plaintiffs did not occur to them.

    It did occur, though, to Ikhwan al-Safa’, or the Brethren of Purity. The Brethren were a secret society of freethinking Muslim philosophers who lived in Basra (modern-day Iraq) during the 10th century. They produced an encyclopedia with 52 treatises. One of those treatises, “The Animals’ Lawsuit Against Humanity,” contains some striking parallels to the lawsuits filed this month.

    In it, animals want freedom from the dominion of men, and they seek redress in the court of the king of the genies. The men contend that “[a]ll beast are our slaves, and we are their masters.” That is essentially the position of the American legal system, that animals are property.

    In the Brethren’s treatise, the animals detail the great injustices they’ve suffered at the hands of men. The men muster all sorts of arguments in defense of their behavior. They cite the Koran and the Bible to argue that animals are made to serve humans and appeal to the virtue of humans and the variety of their accomplishments. But the animals beat back their arguments, taking particular care to delineate the faults of all categories of humans, including astrologers, lawyers and merchants. (They reserve their harshest scorn for writers, “for in the whole world there are no men more villainous and rascally.”) The animals are so convincing that, at one point, the king of the genies makes preparations to compensate the men for the imminent loss of the animals.

    Only at the very end do the men come up with a convincing defense for themselves: that humans are superior to animals because they have the ability to dwell eternally in paradise if they are obedient to God, and that even “if we are sinners, and we do not obey him, still our salvation will be effected through the intercession of the prophets.”

    For the men, it’s a deus ex bestia, God from the beast. Bowing to this logic, the king of the genies rules, “Let all animals be submissive and obedient to man, and let none depart from their allegiance.” Even the plaintiffs were sold, for “[t]he animals consented, and being satisfied, they all returned to security and peace in their homes.”

    And there we have the state of thinking a thousand years ago on the question of whether animals should be slaves of humanity or be given the right to their freedom: Humans’ treatment of animals would be indefensible if it weren’t for the fact that God had granted humans a favored status. All of their other defenses fall short, and they are only saved by appeal to an immutable divine truth. It’s a defense that would have held up with Thomas Aquinas, had he considered the same question.

    But times have changed, and the assertion that God has simply granted Homo sapiens a special exalted status no longer makes for a great legal argument. The theory of evolution long ago undermined the certainty that humans are fundamentally distinct from other animals. Recent science points us to the conclusion that many of them are more similar to us than we’d thought — something our medieval forebears apparently intuited.

    The notion of granting animals legal personhood and formal rights may strike us as radical, but considering that the law ought to reflect science and a society’s philosophy, it’s actually long overdue. In 1792, the English philosopher Thomas Taylor wrote “A Vindication of the Rights of Brutes.” It was a satire, in response to Thomas Paine’s “Rights of Man” and Mary Wollstonecraft’s “A Vindication of the Rights of Woman,” those cornerstone expressions of humanistic enlightenment values. Taylor hoped to show that extending rights to all men and to women was comparable to extending rights to animals. As it turns out, he was right; just a few centuries ahead of — or perhaps behind — his time.


    Ben Schreckinger is a writer based in Boston. Follow him on Twitter @SchreckReports
     
  19. StrangerInAStrangeLand SubQuantum Mechanic Valued Senior Member

    Middle Ages
    1327: Edward II of England, after being deposed and imprisoned by his wife Isabella and her lover Roger Mortimer, was rumoured to have been murdered by having a horn pushed into his anus through which a red-hot iron was inserted, burning out his internal organs without marking his body.[23][24] However, there is no real academic consensus on the manner of Edward II’s death, and it has been plausibly argued that the story is propaganda.[25]

    Renaissance
    1567: Hans Steininger, the burgomaster of Braunau, Austria, died when he broke his neck by tripping over his own beard.[26] The beard, which was 4.5 feet (1.4 m) long at the time, was usually kept rolled up in a leather pouch.[27]
    1660: Thomas Urquhart, the Scottish aristocrat, polymath and first translator of François Rabelais's writings into English, is said to have died laughing upon hearing that Charles II had taken the throne.[28][29]
    1667: James Betts died from asphyxiation after being sealed in a cupboard by Elizabeth Spencer at Corpus Christi College, Cambridge, in an attempt to hide him from her father, John Spencer.

    18th century
    1771: Adolf Frederick, King of Sweden, died of digestion problems on 12 February 1771 after having consumed a meal of lobster, caviar, sauerkraut, smoked herring and champagne, topped off with 14 servings of his favourite dessert: semla served in a bowl of hot milk.[33] He is thus remembered by Swedish schoolchildren as "the king who ate himself to death."[34]

    19th century
    1834: David Douglas, Scottish botanist, fell into a pit trap where he was trampled by a wild bull.[35][36]
    1871: Clement Vallandigham, a lawyer and Ohio politician who was defending a man on a murder charge, accidentally shot himself while demonstrating how the victim might have shot himself while drawing a weapon from a kneeling position. Though the defendant, Thomas McGehan, was ultimately cleared, Vallandigham died from his wound.

    20th century

    1920s
    1923: George Herbert, 5th Earl of Carnarvon, died after a mosquito bite on his face, which he cut while shaving, became seriously infected with erysipelas, leading to blood poisoning and eventually pneumonia. Some have alleged that his death is attributable to the so-called curse of the pharaohs.
    1926: Phillip McClean, 16, from Queensland, Australia, became the only person documented to have been killed by a cassowary. After encountering the bird on their family property near Mossman in April,[41] McClean and his brother decided to kill it with clubs. When McClean struck the bird, it knocked him down, then kicked him in the neck, opening a 1.25 cm (0.5 in) long cut in one of his main blood vessels. Though the boy managed to get back on his feet and run away, he collapsed a short while later and died from the hemorrhage.[42]
    1926: Harry Houdini, the famous American escape artist, was punched in the stomach by an amateur boxer. Though this had been done with Houdini's permission, complications from this injury may have caused him to die days later, on 31 October 1926. It was later determined that Houdini died of a ruptured appendix,[43] though it is contested as to whether or not the punches actually caused the appendicitis.
    1927: Isadora Duncan, dancer, died of a broken neck when her long scarf caught on the wheel of a car in which she was a passenger.[46]

    1950s
    1958: Gareth Jones, actor, collapsed and died between scenes of a live television play, Underground, at the studios of Associated British Corporation in Manchester, England. Director Ted Kotcheff continued the play to its conclusion, improvising around Jones's absence. Coincidentally, Jones's character had been scripted to suffer a heart attack, which is what Jones himself died of.[47][48]

    1960s
    1961: U.S. Army Specialists John A. Byrnes and Richard Leroy McKinley and Navy Electrician's Mate Richard C. Legg were killed by a water hammer explosion during maintenance on the SL-1 nuclear reactor in Idaho.[49][50][51][52][53]
    1966: Skydiver Nick Piantanida died from the effects of uncontrolled decompression four months after an attempt to break the world record for the highest parachute jump. During his third attempt, his face mask came loose (or he possibly opened it by mistake), causing loss of air pressure and irreversible brain damage.[54][55]

    1970s
    1971: Georgy Dobrovolsky, Vladislav Volkov and Viktor Patsayev, Soviet cosmonauts, died when their Soyuz-11 spacecraft depressurized during preparations for reentry. These are the only known human deaths outside the Earth's atmosphere.[56]
    1974: Basil Brown, a 48-year-old health food advocate from Croydon, England, drank himself to death by consuming 10 gallons of carrot juice in ten days, causing him to overdose on vitamin A and suffer severe liver damage.[57][58]
    1977: Tom Pryce, a Welsh Formula One driver, was killed when his car struck a track marshal who was running across the Kyalami circuit toward a burning car; the fire extinguisher the marshal was carrying hit Pryce in the head.
    1978: Kurt Gödel, the Austrian/American logician and mathematician, died of starvation when his wife was hospitalized. Gödel suffered from extreme paranoia and refused to eat food prepared by anyone else.[63]
    1979: Robert Williams, a worker at a Ford Motor Co. plant, was the first known human to be killed by a robot,[64] after the arm of a one-ton factory robot hit him in the head.[65]
    1979: John Bowen, a 20-year-old from Nashua, New Hampshire, U.S., was attending a New York Jets football game at Shea Stadium on 9 December. During a half-time show event featuring custom-made remote control flying machines, a 40-pound model plane shaped like a lawnmower accidentally dove into the stands, striking Bowen and another spectator, causing severe head injuries. Bowen died in the hospital four days later.[66][67]

    1980s
    1980: Monica Meyers, the 70-year-old mayor of Betterton, Maryland, U.S., died while checking the town's sewage tanks; she slipped on a catwalk, fell into the 25-foot tank and drowned.
    1981: David Allen Kirwan, a 24-year-old, died from third-degree burns after attempting to rescue a friend's dog from the 200 °F (93 °C) water in Celestine Pool, a hot spring at Yellowstone National Park on 20 July 1981.
    1981: Boris Sagal, a film director, died while shooting the TV miniseries World War III when he walked into the tail rotor blade of a helicopter and was nearly decapitated.[72][73]
    1982: David Grundman was killed near Lake Pleasant, Arizona, U.S., while shooting at cacti with his shotgun. After he fired several shots at a 26 ft (8 m) tall saguaro cactus from extremely close range, a 4 ft (1.2 m) limb of the cactus detached and fell on him, crushing him.[73][74][75]

    1990s
    1993: Garry Hoy, a 38-year-old lawyer in Toronto, Canada, fell to his death on 9 July 1993, after he threw himself against a window on the 24th floor of the Toronto-Dominion Centre in an attempt to prove to a group of visitors that the glass was "unbreakable," a demonstration he had done many times before. The glass did not break, but popped out of the window frame.[76][77]
    1997: Karen Wetterhahn, a professor of chemistry at Dartmouth College, died of mercury poisoning ten months after a few drops of dimethylmercury landed on her protective gloves. Although Wetterhahn had been following the required procedures for handling the chemical, it still permeated her gloves and skin within seconds. As a result of her death, regulations were altered.[78][79]
    1999: Jon Desborough, a physical education teacher at Liverpool College, died when he slipped and fell onto the blunt end of a javelin he was retrieving. The javelin passed through his eye socket and into his brain, causing severe brain damage and putting him into a coma. He died a month later.[80][81]

    21st century

    2000s
    2006: An unidentified airline mechanic was sucked into the engine of a Boeing 737-500 at El Paso International Airport while performing routine maintenance on the tarmac.[82][83]
    2007: Jennifer Strange, a 28-year-old woman from Sacramento, California, U.S., died of water intoxication while trying to win a Nintendo Wii console in a KDND 107.9 "The End" radio station's "Hold Your Wee for a Wii" contest, which involved drinking increasingly large quantities of water without urinating.[84][85]
    2007: Humberto Hernandez, a 24-year-old Oakland, California, U.S., resident, was killed after being struck in the face by an airborne fire hydrant while walking. A passing car had struck the fire hydrant and the water pressure shot the hydrant at Hernandez with enough force to kill him.[86][87][88]
    2008: David Phyall, 50, the last resident in a block of flats due to be demolished in Bishopstoke, near Southampton, Hampshire, England, decapitated himself with a chainsaw to highlight the injustice of being forced to move out.[89][90]
    2009: Taylor Mitchell, a Canadian folk singer, was attacked and killed by three coyotes, the only recorded adult person to have been killed by this species.[91][92]

    2010s
    2010: Mike Edwards, British founding member and cellist for the band ELO, died when a large round bale of hay rolled down a hill and smashed his van while he was out driving.[45][93][94]
    2011: Jose Luis Ochoa, 35, died after being stabbed in the leg at an illegal cockfight in Tulare County, California, U.S., by one of the birds, which had a knife attached to its leg.[95][96]
    2012: Edward Archbold, 32, a man of West Palm Beach, Florida, U.S., died after winning a cockroach eating contest. The cause of death was determined to be accidental choking due to "arthropod body parts."[97][98]
    2013: Takuya Nagaya, 23, from Japan, began to slither on the floor and talk about becoming a snake. His mother took this to mean that he had been possessed by a snake demon and called her husband, 53-year-old Katsumi Nagaya, who spent the next two days physically beating his son in an attempt to exorcise the demon, killing him.[99]
    2013: An unnamed Belarusian fisherman, 60, was killed by a beaver while attempting to grab the animal to have his picture taken with it. The beaver bit him, severing a large artery in his leg.
    2013: João Maria de Souza, 45, of Caratinga, Brazil, was killed in his sleep by a cow that fell through the roof of his house onto his bed.
    2013: Kendrick Johnson, 17, was discovered trapped upside down in a rolled up gym mat in his high school gymnasium. Police had originally ruled that the cause of Johnson's death was accidental positional asphyxiation after he climbed in to retrieve a shoe and became trapped. The case has since been reopened and investigated as a possible homicide.
     
  20. StrangerInAStrangeLand SubQuantum Mechanic Valued Senior Member

    10 Things You May Not Know About the Vikings
    By Jennie Cohen




    1. Vikings didn’t wear horned helmets.
    Forget almost every Viking warrior costume you’ve ever seen. Sure, the pugnacious Norsemen probably sported headgear, but that whole horn-festooned helmet look? Depictions dating from the Viking age don’t show it, and the only authentic Viking helmet ever discovered is decidedly horn-free. Painters seem to have fabricated the trend during the 19th century, perhaps inspired by descriptions of northern Europeans by ancient Greek and Roman chroniclers. Long before the Vikings’ time, Norse and Germanic priests did indeed wear horned helmets for ceremonial purposes.

    2. Vikings were known for their excellent hygiene.
    Between rowing boats and decapitating enemies, Viking men must have stunk to high Valhalla, right? Quite the opposite. Excavations of Viking sites have turned up tweezers, razors, combs and ear cleaners made from animal bones and antlers. Vikings also bathed at least once a week—much more frequently than other Europeans of their day—and enjoyed dips in natural hot springs.

    3. Vikings used a unique liquid to start fires.
    Clean freaks though they were, the Vikings had no qualms about harnessing the power of one human waste product. They would collect a fungus called touchwood from tree bark and boil it for several days in urine before pounding it into something akin to felt. The sodium nitrate found in urine would allow the material to smolder rather than burn, so Vikings could take fire with them on the go.

    4. Vikings buried their dead in boats.
    There’s no denying Vikings loved their boats—so much that it was a great honor to be interred in one. In the Norse religion, valiant warriors entered festive and glorious realms after death, and it was thought that the vessels that served them well in life would help them reach their final destinations. Distinguished raiders and prominent women were often laid to rest in ships, surrounded by weapons, valuable goods and sometimes even sacrificed slaves.

    5. Vikings were active in the slave trade.
    Many Vikings got rich off human trafficking. They would capture and enslave women and young men while pillaging Anglo-Saxon, Celtic and Slavic settlements. These “thralls,” as they were known, were then sold in giant slave markets across Europe and the Middle East.

    6. Viking women enjoyed some basic rights.
    Viking girls got hitched as young as 12 and had to mind the household while their husbands sailed off on adventures. Still, they had more freedom than other women of their era. As long as they weren’t thralls, Viking women could inherit property, request a divorce and reclaim their dowries if their marriages ended.

    7. Viking men spent most of their time farming.
    This may come as a disappointment, but most Viking men brandished scythes, not swords. True, some were callous pirates who only stepped off their boats to burn villages, but the vast majority peacefully sowed barley, rye and oats—at least for part of the year. They also raised cattle, goats, pigs and sheep on their small farms, which typically yielded just enough food to support a family.

    8. Vikings skied for fun.
    Scandinavians developed primitive skis at least 6,000 years ago, though ancient Russians may have invented them even earlier. By the Viking Age, Norsemen regarded skiing as an efficient way to get around and a popular form of recreation. They even worshipped a god of skiing, Ullr.

    9. Viking gentlemen preferred being blond.
    To conform to their culture’s beauty ideals, brunette Vikings—usually men—would use a strong soap with a high lye content to bleach their hair. In some regions, beards were lightened as well. It’s likely these treatments also helped Vikings with a problem far more prickly and rampant than mousy manes: head lice.

    10. Vikings were never part of a unified group.
    Vikings didn’t recognize fellow Vikings. In fact, they probably didn’t even call themselves Vikings: The term simply referred to all Scandinavians who took part in overseas expeditions. During the Viking Age, the land that now makes up Denmark, Norway and Sweden was a patchwork of chieftain-led tribes that often fought against each other—when they weren’t busy wreaking havoc on foreign shores, that is.
     
  21. StrangerInAStrangeLand SubQuantum Mechanic Valued Senior Member

    6 Mysterious Disappearances in U.S. History
    By Jennie Cohen


    1. Jimmy Hoffa




    On July 30, 1975, James Riddle Hoffa, one of the most influential American labor leaders of the 20th century, disappeared in Detroit, Michigan, never to be heard from again. Born in 1913 to a poor coal miner in Indiana, the charismatic Hoffa proved a natural leader from a very young age. While working for a Detroit grocery chain he organized a labor strike that got him noticed by the powerful Teamsters union. Hoffa rose through the organization’s ranks over the next few decades and in 1957 took over its presidency. A savvy political playmaker and tireless advocate for the downtrodden, he became wildly popular within the Teamsters and beyond.

    And yet, for all the battles he fought and won on behalf of American workers, Hoffa also had a dark side. During Hoffa’s tenure, Teamster leaders partnered with the Mafia in racketeering, extortion and embezzlement. Hoffa himself had relationships with high-ranking mobsters and was the target of several government investigations throughout the 1960s. Convicted first of obstruction of justice and later of attempted bribery, Hoffa began a 13-year prison sentence in March 1967. President Richard Nixon commuted the sentence in 1971, and Hoffa quickly began making a comeback within the Teamster leadership and penning his autobiography. These plans screeched to a halt, however, on July 30, 1975, when Hoffa was last seen in the parking lot of a Detroit restaurant, not far from where he got his start as a labor organizer. Though many have speculated that he was the victim of a Mafia hit, conclusive evidence was never found, and Hoffa’s fate remains shrouded in mystery to this day. He was declared legally dead in 1982.


    2. Amelia Earhart




    Amelia Earhart’s daring round-the-world flight was cut short when her Lockheed Electra disappeared over the Pacific Ocean on July 2, 1937. Within hours, rescue workers began scouring the area for signs of the famed aviator and her navigator, Fred Noonan. A living legend had vanished into thin air. In an official report, the U.S. government concluded that the two seasoned flyers, unable to locate their destination of Howland Island, ran out of fuel, crashed into the water and sank. Earhart was declared legally dead on January 5, 1939.

    The question of why and where her plane went down, however, has never been put to rest. Indeed, in the seven decades since the Electra’s disappearance, a number of hypotheses have emerged. Some theorists, for instance, believe Earhart was actually a secret agent working for the U.S. government. They suggest that the plane crashed after its pilots intentionally deviated from their course to spy on Japanese-occupied islands in the Pacific, or that Earhart and Noonan landed on one of them and were taken prisoner. Yet another theory holds that Earhart returned safely to the United States, changed her name and lived a long life in obscurity. Another widely held belief is that Earhart and Noonan touched down on a remote South Pacific island called Nikumaroro and died there some time later.


    3. The Mary Celeste




    On a wintry November morning in 1872, Captain Benjamin Briggs, his wife Sarah, their 2-year-old daughter Sophia and a crew of seven set sail from New York Harbor on the Canadian-built brigantine Mary Celeste, bound for Genoa, Italy. Their journey quickly turned into one of history’s most chilling maritime mysteries. On December 4, some 600 miles west of Portugal, the helmsman of the merchant ship Dei Gratia spotted an odd sight through his spyglass: a vessel with slightly torn sails that seemed to be careening out of control. The Dei Gratia’s captain, David Reed Morehouse, immediately identified the ship as the Mary Celeste; in a strange twist, he and Benjamin Briggs were old friends, and had dined together shortly before their respective departures from New York.

    When a crew from the Dei Gratia boarded the Mary Celeste, almost everything was present and accounted for, from the cargo in the hold to the sewing machine in the captain’s cabin. Missing, however, were the ship’s only lifeboat—and all of its passengers. What happened to the Briggs family and the Mary Celeste’s crew members? Some have suggested that pirates kidnapped them, while others have speculated that a sudden waterspout washed them away. Over the years, the search for a true answer to the Mary Celeste puzzle has come to center on the ship’s cargo: barrels of industrial alcohol intended for fortifying Italian wines. Industrial alcohol can emit highly potent fumes, which might have led the crew to fear an explosion and temporarily evacuate into the lifeboat. At that point, a gale could have swept the ship away, leaving its former passengers stranded and cementing the Mary Celeste’s reputation as the archetypal ghost ship.


    4. The Lost Colony




    In July 1587, roughly 115 English men, women and children landed on Roanoke Island, located off the coast of North Carolina in what is now Dare County. Less than a month later, the settlers welcomed the birth of Virginia Dare, the first English baby born in the Americas. As tensions mounted between the colonists and local tribes, the fledgling town’s governor, John White, who was also Virginia’s grandfather, set sail for England to seek out help and supplies. When he returned three years later, the settlement was completely deserted and all of its inhabitants had vanished. The only clue they had left behind was a single word carved into a wooden post: “Croatan,” the name of a local—and friendly—Native American tribe.

    This cryptic message has led some scholars to believe that the Croatans killed or kidnapped the colonists. Others have suggested that the settlers assimilated and intermarried with the Croatans or other Native Americans and moved farther inland. Another theory holds that Spanish troops wiped out the settlement, as they had done to the French colony of Fort Caroline earlier in the century. Until more concrete evidence emerges, historians will be left to speculate on the fate of Virginia Dare and the other members of America’s “Lost Colony.”


    5. D.B. Cooper




    On November 24, 1971, a man wearing a black raincoat, a dark suit and wraparound sunglasses took his seat on Northwest Orient Flight 305, scheduled to take off in Portland, Oregon, and arrive in Seattle, Washington. After takeoff, he handed a note to a flight attendant, who assumed he was hitting on her and placed it in her purse. He then told her he had a bomb in his briefcase and demanded $200,000, four parachutes and “no funny stuff.” The passenger identified himself as Dan Cooper, but thanks to a reporting error as the story was breaking he was forever immortalized as “D.B.” Cooper.

    The plane landed at Seattle-Tacoma International Airport, where authorities handed over the items and evacuated most of the passengers. Cooper then instructed the pilot to fly toward Mexico City at a low altitude and ordered the remaining crew into the cockpit. A short time later, he jumped out of the plane and into a raging thunderstorm. He was never seen or heard from again. Since his disappearance, the FBI has investigated and subsequently ruled out more than a thousand suspects; the agency now believes it is likely Cooper died in the fall. While his body has never been recovered, in 1980 an 8-year-old boy found $5,880 of the ransom money in the sand along the north bank of the Columbia River, five miles from Vancouver, Washington.


    6. Joseph Force Crater




    The disappearance of New York Supreme Court judge Joseph Force Crater captured so much media attention that the phrase “pulling a Crater” briefly entered the public vernacular as a synonym for going AWOL. On August 6, 1930, the dapper 41-year-old left his office and dined with an acquaintance at a Manhattan chophouse. He was last seen walking down the street outside the restaurant. The massive investigation into his disappearance captivated the nation, earning Crater the title of “the missingest man in New York.” Crater was infamous for his shady dealings with the corrupt Tammany Hall political machine and frequent dalliances with showgirls. In the days leading up to his disappearance, he had reportedly received a mysterious phone call and cashed two large personal checks. These details spawned rampant speculation that the judge had been a victim of foul play. He was declared legally dead in 1939.

    In 2005, New York police revealed that new evidence had emerged in the case of the city’s missingest man. A woman who had died earlier that year had left a handwritten note in which she claimed that her husband and several other men, including a police officer, had murdered Crater and buried his body beneath a section of the Coney Island boardwalk. That site had been excavated during the construction of the New York Aquarium in the 1950s, long before technology existed to detect and identify human remains. As a result, the question of whether or not Judge Crater sleeps with the fishes remains a mystery.
     
  22. StrangerInAStrangeLand SubQuantum Mechanic Valued Senior Member

    Before Rebranding, The US Dept. of Defense Was Called The “Department of War”

    Some would say that the old title was more descriptive, or at least more honest. The US Department of Defense, commonly known as the DoD for short, used to bear the blunter title “Department of War.”

    The name change occurred in the late 1940s. With World War II over, the United Nations was taking steps towards what it hoped would be a lasting peace. In its Charter, the UN outlawed wars of aggression (wars which aren’t fought in defense), and as a result, top US military brass felt the American bureau needed a new name, if only for PR reasons.

    Above: The official Dept. of War seal.

    So, from 1947 through 1949, Congress adopted a series of laws renaming (and reorganizing) the American national military establishment under a more politically correct naming scheme. Accordingly, the Secretary of War was renamed the Secretary of Defense. Perhaps only one vestige of the old naming scheme remains: the US Army War College in Pennsylvania.

    Following suit, several other countries also renamed their war departments around the same time. Great Britain, for example, used to have a War Office, which was absorbed into the Ministry of Defence in 1964.


    The United States Used to Have More Laws Concerning Margarine than Hate Crimes

    The margarine and dairy industries have been at war since the late 1800s when the lower-cost butter substitute was first invented.

    Early on, butter manufacturers believed that if margarine ever became popular it might cut into their sales significantly. And they were right. To fight the new product, the dairy industry set out to restrict its availability and undercut its appeal both in the courtroom and on Capitol Hill.

    Dairy industry lobbyists worked both in individual states and at the federal level to fight the emerging margarine industry. Congress passed the Margarine Act of 1886, which put a two-cent tax on the product and restricted its sale and manufacture.




    In the following years, the pro-butter side managed to pass laws in 32 states either prohibiting margarine manufacturers from coloring their product yellow like butter or actually requiring them to color it pink, which legislators hoped would make it less appealing to consumers. In response, margarine manufacturers began selling do-it-yourself yellow coloring kits that consumers could buy and use at home to make the butter substitute more palatable. Meanwhile, additional federal laws were also passed, the last of which was repealed in 1996.

    By contrast, both federal and statewide hate crimes laws are a relatively new invention, with the first federal hate crimes legislation being enacted in the 1990s. To this day, the National Association of Margarine Manufacturers website still refers to the dairy industry as “dairy militants.”


    Before Becoming President, Ronald Reagan Was a Paid Cigarette Model

    Long before Ronald Reagan was the governor of California or the 40th president of the United States, he made money posing in cigarette advertisements.

    Although his modeling relationship with the tobacco industry dates to at least the 1930s, when he was a radio sportscaster, business picked up after he became a well-known Hollywood actor, and he continued to model for tobacco companies.

    The amount of money he earned by making tobacco “cool” is unknown.




    What is known is that Reagan mostly posed for the Liggett & Myers Tobacco Company, which a series of lawsuits later revealed to have covered up the true health effects of smoking through its deceptive advertisements.

    Ironically, though he was paid to make smoking cool, Reagan did not smoke himself.





    Movie Trailers Weren’t Always Shown Before Films

    Movie trailers have been around since at least 1912, but they didn’t always run before the movies they’re attached to.

    In fact, movie trailers, true to their name, used to trail behind films in theaters rather than run before them. Nowadays, showing advertisements for upcoming films after a movie instead of before it would make little sense: with the main attraction over, why would the audience stick around to watch commercials?

    The answer is that, in the early days of cinema, the main attraction often wasn't over when one movie finished. Films were commonly shown as double features: one movie right after another. In the 1920s, 1930s, 1940s and beyond, trailers would often play after the first movie and before the second (which was usually the blockbuster people had actually come to see; the first was often a B-movie).

    Starting in the 1920s and 1930s, theatrical trailers were often supplemented with newsreels, public service announcements, and short animated films, in a precursor to the diversity of modern-day television programming.

    Early on, trailers were cobbled together by individual theaters hoping to promote upcoming films, but soon the studios got into the act, eventually sub-contracting out the task of creating trailers to outside companies (a practice which is still largely followed today).
     
  23. StrangerInAStrangeLand SubQuantum Mechanic Valued Senior Member





    If you are neutral in situations of injustice, you have chosen the side of the oppressor. If an elephant has its foot on the tail of a mouse and you say that you are neutral, the mouse will not appreciate your neutrality.

    Desmond Tutu


    History is the version of past events that people have decided to agree upon.

    Napoleon Bonaparte


    Few will have the greatness to bend history itself; but each of us can work to change a small portion of events, and in the total of all those acts will be written the history of this generation.

    Robert Kennedy


    The world we see that seems so insane is the result of a belief system that is not working. To perceive the world differently, we must be willing to change our belief system, let the past slip away, expand our sense of now, and dissolve the fear in our minds.

    William James


    The very ink with which history is written is merely fluid prejudice.

    Mark Twain


    That men do not learn very much from the lessons of history is the most important of all the lessons of history.

    Aldous Huxley


    The test of our progress is not whether we add more to the abundance of those who have much; it is whether we provide enough for those who have little.

    Franklin D. Roosevelt


    Posterity! You will never know how much it cost the present generation to preserve your freedom! I hope you will make a good use of it.

    John Adams


    If one morning I walked on top of the water across the Potomac River, the headline that afternoon would read: 'President Can't Swim.'

    Lyndon B. Johnson


    Most of the things worth doing in the world had been declared impossible before they were done.

    Louis D. Brandeis


    History is the sum total of things that could have been avoided.

    Konrad Adenauer


    The object of my relationship with Vietnam has been to heal the wounds that exist, particularly among our veterans, and to move forward with a positive relationship,... Apparently some in the Vietnamese government don't want to do that and that's their decision.

    Ho Chi Minh


    God cannot alter the past, though historians can.

    Samuel Butler


    Human history becomes more and more a race between education and catastrophe.

    H. G. Wells


    The past actually happened but history is only what someone wrote down.

    A. Whitney Brown


    Legend: A lie that has attained the dignity of age.

    H. L. Mencken


    History is a pack of lies about events that never happened told by people who weren't there.

    George Santayana


    History is a vast early warning system.

    Norman Cousins



     
