Last spring my flower garden was sad. I planted a ton of daffodils, crocuses, tulips, and irises, but, thanks to squirrel depredations, I ended up with one mangled tulip of indefinite color (which was ripped apart by a squirrel the day after it bloomed). The squirrels in my part of Brooklyn are angry hungry monsters. Rap music and powerful Jamaican curries have desensitized them to noises and smells which would scare off lesser squirrels. No one traps or shoots them–so they do not fear the fell hand of man.
This year I have been desperately trying to keep my bulbs alive long enough to bloom properly. Every evening since mid-March you can see me out back throwing hot pepper and garlic powder on the garden like some maddened chef. I have spritzed an ocean of animal repellent on the little green buds. I have studded the garden with glittering mylar pinwheels and festooned it with scary helium balloons. Yet every day another bud is taken. The crocuses were all ripped up. In the end, I wonder if anything will actually blossom, or if it will all once again be in vain.
However, there is one exception to this story of attrition and doom! Yesterday the first flower bloomed in my back yard…and it was not at all what I was expecting. The primroses (family Primulaceae) are native to Europe from Norway south to Portugal, and from the Atlantic coast east all the way to Asia Minor. Perhaps I should not be surprised that the primrose is first to bloom considering it lives wild in Norway, the land of polar bears, glaciers, and marauders. Most garden primroses have been heavily hybridized, but last year I bought a specimen which looked most like the common European primrose, Primula vulgaris, and it survived a whole year to bloom again! The flower has five beautiful butter-yellow petals arranged around a bright yellow eye.
I was hoping to provide some exciting primrose lore, but the humble flower does not seem to feature in many myths and legends. According to Wikipedia, it was Benjamin Disraeli’s favorite flower, so crafty parliamentarians should at least be drawn to this article. Anyway, spring is finally here so prepare for everything to get better.
The vernal equinox will be here in a few days. This welcome news is hard to believe because the temperatures in Brooklyn are still dipping into the twenties at night. However, the first bulbs are beginning to crop up in the garden (although the insatiable squirrels nip them down as quickly as they appear). A few bulbs have already flowered: the Galanthus, or snowdrop, one of the earliest of spring flowers, has one of the most fragile and delicate appearances of any garden plant. The translucent white hanging flowers resemble dainty tropical moths and grow from tender green shoots.
There are 20 species of snowdrops—all of which are hardy perennial herbaceous plants. The pendulous white & green flower of a snowdrop has no petals but consists of 6 large tepals (3 of which are larger than the others). Snowdrops naturalize well in Northern deciduous forests. Because they bloom so early they have the entire woodland to themselves and they form magnificent white drifts almost reminiscent of famous bluebell woods.
Numerous poets, writers, and artists have alluded to the snowdrop as a symbol of hope and a metaphor for the passions of spring. For example Hans Christian Andersen wrote an uplifting story for children about a snowdrop desperately aspiring to the light, then blooming, only to be picked and pressed in a book of poetry. [Ed. As an aside, does anyone remember why Hans Christian Andersen was such a beloved children’s author?]
Snowdrops are not just a lovely harbinger of spring, they also have a tiny place in one of the great unfolding fights about bioengineering. Snowdrops contain various active compounds useful for medicine or with insecticidal properties. In 1998 a Hungarian scientist, Arpad Pusztai, publicly spoke about rodent studies conducted on potatoes which had been transgenically altered to express snowdrop lectins (for insecticidal purposes). Dr. Pusztai asserted that the modified potatoes were causing damage to the intestinal epithelial cells of the rats (and imputed broader health dangers to the modified tubers). The subsequent scandal impacted science, media, politics, business, and culture. The scientific community came to the conclusion that Pusztai’s research was flawed (while the anti-GMO community flocked to his support and rallied around his work as an example of how GMOs could potentially be dangerous).
The first animal to be domesticated was the wolf (modern humans call domesticated wolves “dogs”). This happened thousands (or tens of thousands) of years before any other plants or animals were domesticated. In fact some social scientists have speculated that the dogs actually domesticated humans. Whatever the case, our dual partnership changed both species immensely. It was the first and most important of many changes which swept humanity away from a hunter-gatherer lifestyle and into the agricultural world.
Today’s post isn’t really about the actual prehistory behind the agricultural revolution though. Instead we are looking at an ancient Chinese myth about how humans changed from hunters into farmers. Appropriately, even in the myth it was dogs who brought about the change. There are two versions of the story. In the version told by the Miao people of southern China, the dog once had nine tails. Seeing the famine which regularly afflicted people (because of seasonal hunting fluctuations) a loyal dog ran into heaven to solve the problem. The celestial guardians shot off eight of the dog’s tails, but the brave mutt managed to roll in the granaries of heaven and return with precious rice and wheat seeds caught in his fur. Ever since, in memory of their heroism, dogs have one bushy tail (like a ripe head of wheat) and they are fed first when people are done eating.
A second version of the tale is less heroic, but also revolves around actual canine behavior. In the golden age, after Nüwa created humans, grain was so plentiful that people wasted it shamefully and squandered the bounty of the Earth. In anger, the Jade Emperor came down to Earth to repossess all grains and crops. After the chief heavenly god had gathered all of the world’s cereals, the dog ran up to him and clung piteously to his leg whining and begging. The creature’s crying moved the god to leave a few grains of each plant stuck to the animal’s fur. These grains became the basis of all subsequent agriculture.
Even in folklore, we owe our agrarian civilization to the dogs, our first and best friends.
This blog has pursued all things gothic, as the open-ended concept has wound its way through history, the arts, literature, and other forms of culture. There is, however, a major creative genre which we have entirely overlooked—that of cinema. The melodramatic spookiness of the 19th century Gothic revival movement was born in architecture and literature, but it was the medium of film which cemented the whole concept of horror as a distinct genre. In the modern world, gothic horror (with all of its familiar trappings) is virtually synonymous with film. This characteristic milieu of ruffled clothing, vampires, ghosts, sconces, and eerie castles goes all the way back to the first horror film—which was made very early indeed, in France in 1896.
Le Manoir du Diable (“The Manor of the Devil”) was meant as a pantomime farce, but most of the familiar elements of gothic cinema appear in the three minute production. It was released on Christmas Eve of 1896 at the Théâtre Robert-Houdin (which was on the Boulevard des Italiens in Paris). Since the piece is well over a century old, any copyright has long expired and it is part of the public domain. So, without further ado, here it is:
Using the most sophisticated special effects of the day, the filmmakers present a sorcerous devil popping in and out of reality. The fiend creates goblins, bats, and specters out of thin air and thereby bedevils a pair of foppish noblemen who have wandered (or been summoned?) into the haunted castle. Fortunately, one of the noblemen has the presence of mind to seize a handy crucifix and banish the fiend.
Although the film’s staging (and overarching moral lesson) owe something to opera, the rapid protean transfigurations were a completely novel feature. Admittedly the special effects have not aged well, but I think you will enjoy Le Manoir du Diable, the first gothic film.
This week’s posts [concerning translucent sea slugs, wasps named for a crazy pop star, an elusive Indochinese cousin of the cow, and whole sunless ecosystems] have all been about finding new life-forms. There is, of course, only one place such a topic can ultimately wind up—far beyond the living jungles, azure seas, and swirling clouds of our beautiful home planet, out in the immensity of space where the greatest question of all waits like a magic golden apple spinning in darkness.
Is there life elsewhere?
Unfortunately the current answer is incomplete: all known life–in all of its ineffable variety–is Earth-based…yet the universe is vast beyond comprehension. So I’m going to mark this down as “probably.”
Many ancient societies reckoned that other worlds existed. The Norse had their nine worlds joined together by the great ash tree Yggdrasil. The Chinese had myths about Chang’e and the Jade rabbit on the moon. Even the stolid Christians believe in heaven & hell, which are places filled with intelligent beings that are not on earth (ergo, alien realms somewhere out there in the multiverse). William Herschel, great astronomer of the Enlightenment, believed that life was everywhere—particularly everywhere in the solar system.
When humankind entered the space age, we used our burgeoning technology to examine the solar system for signs of Sir William’s spacefolk. Although we did not find the Venusian space hotties we were looking for (dammit), we did discover that among our neighboring planets, there are several other possible homes for earthlike living things. The cloud tops of Venus are inviting and could host bacteria-like life (although I hope not, since I want us to build a second home there). For centuries, scientists and fabulists speculated about life on Mars. We now know that the Martian magnetosphere died and the planet’s atmosphere was swept away, but perhaps there are some hardy extremophile bacteria living in the Martian rocks somewhere. It’s a sad scenario to imagine them on their dying world—like little kids left in a bathtub going cold. Certain moons of Jupiter and Saturn seem to be the best bet for life in the solar system. The Jovian moons Europa, Ganymede, and Callisto are all believed to have extensive liquid oceans beneath their crusts. Likewise the Saturnian moons Titan and Enceladus are believed to have subsurface water. The discovery of life on Earth which does not directly require photosynthesis (like the cold seeps from yesterday’s post) has given scientists hope that bacterial mats—or maybe something even more advanced—exist on one of these moons.
So maybe there are some bacteria analogs or conodont-like creatures squiggling around in some cranny of the solar system. Perhaps life takes on an unknown form and we already flew over a clever, good-hearted ammonia-based life form on Enceladus (which NASA analysts then promptly dismissed as a snowbank), but I doubt it. The true answers to the questions about life lie out there among the stars. Exoplanets are being discovered at a tremendous rate and everyone hopes that some of the more earthlike examples harbor life. Unfortunately our technology is nowhere close to being able to spot the planets themselves and gauge whether life is there by means of spectroscopy. We are stuck waiting for peers who are either broadcasting radio signals or screwing around with the fundamental nature of existence in such a way that would bring them to our attention. Indeed, as humankind’s technological savvy grows, scientists are looking for more sophisticated signs of advanced life, such as black holes of less than 3.5 solar masses or exotic particle radiation which could only be created (or detected) by civilizations of immense technological prowess. All we can say right now is that, after a hundred years of looking, we have not found a lot of radio chatter in our neck of the galaxy—which is an answer of sorts itself.
Perhaps we are among the first sentient beings in this area of space (or anywhere, for that matter). The first generation of stars had to live and die before there were any raw materials for chemically based life. It took billions of years to get where we are, and, despite a few perilous missteps and accidents, life on Earth has been lucky. In my opinion some of those planets we are discovering are almost certainly covered with microbial life, but not many have little green scientists in many-armed lab coats firing up their radio telescopes (or forging little suits of chain mail a few hundred years behind us).
In writing about the Curiosity rover, I humorously mentioned how much it looked like the aliens from golden age science fiction. It seems we are also broadcasting retro style messages to the stars. Above is the print-out version of the Arecibo message—one of the loudest broadcasts we have sent. It’s like a macramé knotted by Dr. Zoidberg’s great aunt or a valentine from Atari’s space invaders! Imagine if you pointed your radio telescope at the heavens and received a message like that! Maybe the aliens are scared of us or maybe they don’t want to talk to a species with such homespun tastes!
So, after the whole post we are no closer to knowing if there is life in the cosmos, but what did you expect? Did you think I would tell you some secret here before you saw it blaring out of every news station on the planet? [If you did think that, then thank you so much!] I believe that extraterrestrial life is out there. I even believe that intelligent extraterrestrials are out there, but the universe really is ridiculously, ridiculously vast. It’s going to take a while to find our fellow living beings. In the meantime, have faith (which is not advice I thought I would be giving) and keep looking up at the cold distant heavens.
Ferrebeekeeper has looked at true space pioneers such as Konstantin Tsiolkovsky, the father of theoretical astronautics, and Yuri Gagarin, the first person to actually visit space. However there are apocryphal tales concerning earlier space-explorers and aerospace pioneers. One of the shortest and silliest legends concerns Wan Hu, a (most-likely-fictional) petty government official in Medieval China who was reputedly the first astronaut. Wan Hu’s problematic story is set in 15th century China during the 19th year of the reign of the Chenghua Emperor (the 8th emperor of the Ming Dynasty who ruled from 1464 to 1487).
Wan Hu was obsessed with the heavens and he decided to travel to outer space by means of Ming dynasty technology (or possibly he was trying to catch a cartoon roadrunner). He assembled a “spacecraft/flying machine” from 47 powerful fireworks rockets, two large kites, and an armchair. The rockets were tiered into two stages to give the chair an added burst of power. When he gave the word, numerous attendants darted forward with torches and simultaneously lit the fireworks. Wan’s chair leapt into the sky and then exploded in a giant ball of flame. When the smoke cleared Wan and his flying machine were entirely gone. Even if he did not make it to space, he certainly succeeded in exiting this mortal plane!
Not only does the story contain implausible elements (!), but the first known version of Wan Hu’s heroic but doomed flight comes from the October 2nd, 1909 issue of Scientific American (although the hero of that story was named “Wang Tu”). Subsequent retellings of the story (first in English and then in Chinese) changed the name to Wan Hoo and then finally to Wan Hu. Despite Wan’s nonexistent nature, the Soviets named a lunar crater after him in 1966. In 1970 the International Astronomical Union accepted the name—so an ancient crater which measures 5km deep and 52km wide on the far side of the moon permanently bears the name of the imaginary adventurer. Surprisingly the Chinese have embraced Wan (choosing hopefully to admire his courage and foresight rather than his safety protocols) and there is a statue of him at Xichang Satellite Launch Centre.
Living during the communications revolution, it sometimes seems impossible to imagine how quickly the world has changed. Today is the 50th anniversary of an important step towards the instantly connected world of today: on July 10, 1962, a Thor-Delta rocket (launched from Cape Canaveral) carried the communication satellite Telstar 1 into orbit. The satellite was built through a collaboration among AT&T, Bell Labs, NASA, the British General Post Office, and the French National Post, Telegraph, and Telecom Office. It was the first satellite ever to relay television, telephone, and high-speed data communications. It was the first time that humans could beam such complicated information across an entire ocean via electromagnetic transmission.
Telstar was tiny and crude by today’s standards. The entire spacecraft weighed only 77 kg (170 lbs). The power generated by its solar panels was a mighty 14 watts (which is about what is necessary to operate a dim fluorescent nightlight). Since Telstar 1 was in non-geosynchronous orbit, its ability to transmit transatlantic signals was limited to a 20-minute window during each 2.5-hour orbit (and because satellite broadcasting stations only existed in England, France, and on the East Coast, the rest of the world didn’t matter). Most contemporary telecommunications satellites are in geosynchronous orbit (and stay in place despite the solar wind thanks to thruster burns), but Telstar came in an era before all of that. The satellite’s first broadcast (on July 23rd) consisted of President John F. Kennedy talking about the dollar’s rapidly appreciating value. The initial broadcast also showed a baseball game, the American flag, Mount Rushmore, and, of course, French singer Yves Montand.
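That 2.5-hour orbit falls straight out of Kepler’s third law. Here is a minimal sketch of the arithmetic; the perigee and apogee altitudes used below are the commonly cited figures for Telstar 1 (roughly 950 km and 5,900 km), not numbers from this post:

```python
import math

MU_EARTH = 398_600.4418  # Earth's gravitational parameter, km^3 / s^2
R_EARTH = 6_371.0        # mean Earth radius, km

def orbital_period_hours(perigee_alt_km: float, apogee_alt_km: float) -> float:
    """Period of an elliptical Earth orbit, given perigee and apogee
    altitudes above the surface (Kepler's third law)."""
    # Semi-major axis: average of the two center-to-center orbital radii.
    a = R_EARTH + (perigee_alt_km + apogee_alt_km) / 2
    # T = 2 * pi * sqrt(a^3 / mu), in seconds; convert to hours.
    return 2 * math.pi * math.sqrt(a**3 / MU_EARTH) / 3600

# Commonly cited altitudes for Telstar 1 (an assumption, not from this post):
print(f"Telstar 1 period: {orbital_period_hours(952, 5933):.2f} hours")
```

The result comes out around two and a half to two and three-quarters hours, which is why ground stations only saw the satellite for a short window each pass.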
Telstar 1 had a brief and memorable life broadcasting one grainy channel of black and white television and relaying perhaps a few hundred phone lines, but it has not been broadcasting since 1963. High altitude nuclear tests carried out during 1962 supercharged the Van Allen belt and overwhelmed the fragile electronics on the craft. As of May 2012 Telstar was still in orbit around Earth—presumably it is still up there, circling our planet, simultaneously a communications milestone and a cold war victim.
In prehistoric times there was no sugar. Sweetness was only to be found in fruits and berries–with one gleaming exception. Pre-agricultural humans were obsessed with hunting honey (in fact there are rock paintings from 15,000 years ago showing humans robbing honey from wild bees). The golden food made by bees from the nectar of flowers was not merely delectable: honey is antiseptic and was used as a medicine or preservative. The wax was also valued for numerous artistic, magical, medicinal, sealing, and manufacturing purposes.
But wild bees were hard to find and capable of protecting themselves with their fearsome stinging abilities. One of the most useful early forms of agriculture was therefore beekeeping. The first records we have of domesticated bees come from ancient Egypt. An illustration on the walls of the sun temple of Nyuserre Ini (from the 5th Dynasty, circa 2422 BC) shows beekeepers blowing smoke into hives in order to remove the honeycomb. The first written record of beekeeping—an official list of apiarists—is nearly as old and dates back to 2400 BC. Cylinders filled with honey were found among the grave goods discovered in royal tombs.
Honey was treasured in the (sugar-free) world of ancient Egypt. It was given as a fancy gift and used as an ointment for wounds. Although honey was too expensive for the lowest orders of society to afford, ancient texts have come down to us concerning thieving servants “seduced by sweetness.” Wax was also precious. Wax tablets were used for writing. Wax was an ingredient in cosmetics, an adhesive, a medicine, and a waterproofing agent. Wigs were shaped with wax. It served as the binding agent for paints. Mummification required wax for all sorts of unpleasant mortuary functions. Perhaps most seriously (to the ancient Egyptian mind at least) wax was necessary for casting magic spells. By crafting a replica of a person, place, or thing, Egyptians believed they could affect the real-world version.
According to Egyptian mythology, bees were created when the golden tears of Ra, the sun god, fell to earth. Bees are even a part of the foundation of the Egyptian state—one of the pharaoh’s titles was “king bee” (although Egyptians might have grasped rudimentary beekeeping skills, they missed many of the important nuances of hive life and they thought the queen was a king). The symbol of fertile Lower Egypt was the honey bee, and the Deshret, the Red Crown of Lower Egypt, is believed to be a stylized representation of a bee’s sting and proboscis.
In ancient Egypt the sky was a gleaming blue, the sacred lotuses had blue petals, the pharaoh’s battle crown was blue, beautiful women wore chokers made of blue stone, and, above all, the life-giving Nile was blue. The ancient Egyptians needed azure pigment to portray these essential elements of life within their sacred art, but the only natural blue pigments were from turquoise and lapis lazuli—semi-precious stones which were rare and expensive. To provide a sufficient supply of blue pigment for painting, jewelry, and sculpture, the Egyptians therefore invented the first synthetic pigment which today is appropriately known as “Egyptian blue” (well, it is also appropriately known as calcium copper silicate, CaCuSi4O10 or CaO·CuO·4SiO2, but I’m going to keep calling it Egyptian blue).
Egyptian blue was synthesized in the 4th Dynasty (c.2575-2467 BC) when the newly created pigment was first used to color limestone sculptures, beads, and cylinder seals. Its use became more prevalent in the Middle Kingdom, and then increased again during the New Kingdom when blue was used for the production of numerous everyday objects. Throughout the Hellenic and Roman age, Egyptian blue was a mainstay of the nascent chemical industry, and it found its way into all sorts of art, jewelry, crafts, and artisan wares. Then, in the fourth century the secret of its manufacture was lost. Only in the beginning of the nineteenth century did interest revive as the English and French pioneers of the chemical trade rushed to synthesize useful compounds. As one might surmise from the fact that the manufacturing process was lost for a millennium and a half, the method to make Egyptian blue is surprisingly involved. Citing a British Museum publication, Wikipedia describes it thus:
Several experiments have been carried out by scientists and archaeologists interested in analyzing the composition of Egyptian blue and the techniques used to manufacture it. It is now generally regarded as a multi-phase material that was produced by heating together quartz sand, a copper compound, calcium carbonate, and a small amount of an alkali (plant ash or natron) at temperatures ranging between 800–1000 °C (depending on the amount of alkali used) for several hours. The result is cuprorivaite or Egyptian blue, carbon dioxide and water vapor…
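That recipe can be summed up in a single balanced equation. As an illustration, if we assume the copper compound was malachite (one plausible copper source among several; Egyptian workshops also used things like bronze filings), the reaction would run:

```latex
% Sketch of one possible Egyptian blue (cuprorivaite) synthesis,
% assuming malachite, Cu2CO3(OH)2, as the copper source:
\mathrm{Cu_2CO_3(OH)_2 + 2\,CaCO_3 + 8\,SiO_2}
  \;\xrightarrow{\;800\text{--}1000\,^{\circ}\mathrm{C}\;}\;
\mathrm{2\,CaCuSi_4O_{10} + 3\,CO_2\uparrow + H_2O\uparrow}
```

This accounts, term by term, for the ingredients and byproducts in the quoted passage: quartz sand supplies the silica, calcium carbonate the calcium, and the carbon dioxide and water vapor escape as gases, leaving the blue cuprorivaite behind.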
The Egyptians were clearly people who took their pigments seriously, and thankfully so–the blue tints they crafted have lasted for thousands of years (and helped us find our way to synthesized pigments). It is strange to think of the subtle ways that the Nile still flows through our lives.
Spring has come early this year and the beautiful tulip-like petals of New York City’s magnolia trees are already beginning to fall into great drifts of white and pink. If you stop and pick up one of the pretty petals from such a pile you will be surprised by its leathery resilience. The durability of magnolia petals is not coincidental—the flowers are different from those of other common flowering trees because Magnoliidae trees were among the first flowering trees to evolve. The earliest known fossils of such flowers date from the early Cretaceous period, around 130 million years ago. Magnoliidae petals are tough because they were originally meant to attract the attention of beetles rather than bees (which do not appear in the fossil record until 100 million years ago). Since there were no insects specially adapted to live as pollinators when magnolia-like trees first appeared, the petals and reproductive structures of these first flowering trees had to be robust to survive attention from the hungry, clumsy beetles (a toughness which has been passed on to the modern ornamental trees).
Paleobotanists have not yet unraveled the entire history of the evolution of flowering plants (indeed, Charles Darwin called the abrupt appearance of flowers in the fossil record “the abominable mystery”); however, magnolia-like trees appeared long before the great radiation of angiosperms which occurred approximately 100 million years ago. The first magnoliid trees must have seemed tremendously strange–explosions of color and shape surrounded by great uniformly green forests of gymnosperm trees (like the familiar conifers). Magnolia blossoms betray evidence of their ancient lineage through several “primitive” features: the petals are nearly indistinguishable from the sepals; each flower has many stamens which are arranged in spiral rows; there are multiple pistils; and all of the stamens and pistils are supported by a “fingerlike receptacle.”
By attracting the attention of animals (either through the colorful appearance and appealing scent of flowers, or by the edible nectar and fruit) flowering plants were better able to reproduce themselves. Magnolias spread around the temperate world and began the complicated interdependent relationship which all sorts of animals (including humans) have with flowering plants.