« June 2008 | Main | August 2008 »

July 31, 2008

Solar Cycle 24

The Earth's surface is heated primarily by the sun. Since we're sitting on a hot ball of iron and nickel, some heating does come from the interior of the Earth, but any variation in insolation will change the average temperature at the surface of the Earth. We still don't know whether the sun's energy output is constant, although we do have a number known as the solar constant. The solar constant, which is the integrated amount of solar energy incident on the Earth at all wavelengths, is about 1366 watts per square meter. Because the Earth's orbit is not circular, this varies by about 6.7% over the course of the year. When the Earth is nearest the sun, the solar constant is about 1412 W/m²; when it's farthest, it's about 1321 W/m².
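Because sunlight spreads over an ever-larger sphere, insolation falls off as the inverse square of the Earth-sun distance. A quick check (assuming perihelion and aphelion distances of 0.9833 AU and 1.0167 AU) reproduces the seasonal extremes quoted above:

```python
S_MEAN = 1366.0       # solar constant at 1 AU (W/m^2)
R_PERIHELION = 0.9833 # assumed Earth-sun distance at closest approach (AU)
R_APHELION = 1.0167   # assumed Earth-sun distance at farthest point (AU)

def insolation(r_au):
    """Inverse-square scaling of the solar constant with distance in AU."""
    return S_MEAN / r_au**2

s_max = insolation(R_PERIHELION)
s_min = insolation(R_APHELION)
print(f"Perihelion: {s_max:.0f} W/m^2")   # about 1413 W/m^2
print(f"Aphelion:   {s_min:.0f} W/m^2")   # about 1321 W/m^2
print(f"Annual swing: {100 * (s_max - s_min) / S_MEAN:.1f}%")  # about 6.7%
```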

The sun is not a featureless ball of light. Sunspots, cooler regions on the sun's surface that arise when convection is inhibited by magnetic activity, were observed telescopically by several astronomers in 1610 and explained by Galileo shortly thereafter. The interesting thing about sunspots is that their occurrence follows a cyclic pattern with a period of about eleven years. Every eleven years there is a period when no sunspots appear, followed about five or six years later by a solar surface marked with about 100-200 sunspots. We are just leaving a sunspot minimum, at the start of what's called Solar Cycle 24, which began with the observation of a sunspot of reversed magnetic polarity on January 4, 2008 [1-3]. Solar Cycle 24 is the twenty-fourth cycle since systematic record keeping began with the first cycle, which ran from March, 1755, through June, 1766.

One curious thing about sunspots is their near absence in the period 1645 - 1715. During this period, known as the Maunder Minimum after its discoverer, Edward Walter Maunder, only about fifty sunspots appeared in toto, as compared with the tens of thousands typically observed over a comparable span. In fact, no sunspots were observed at all during 1670. The Maunder Minimum coincided with the Little Ice Age, a period of extremely cold winters. This coincidence is only anecdotal evidence for an intimate connection between solar activity and Earth's climate, but if the effect is real, our climate is affected by more than man-made global warming.

Solar Cycle 23, which peaked around 2001, was one of the more violent cycles, with many solar storms that affected power transmission lines, Earth satellites, GPS navigation, and other radio communications. Solar storms can also interfere with cell phone reception. Predictions have indicated that the present solar cycle, which should peak near 2011, will be intense. David Hathaway and Robert Wilson of the Marshall Space Flight Center presented a paper at a 2006 meeting of the American Geophysical Union in which they predicted a sunspot number of 160 ± 25 at maximum [4]. The sun, however, is sending mixed messages. Solar Cycle 24 was expected to start a year earlier than it actually did, and cycles with such a late start are usually less intense. It's uncertainty like this that keeps scientists employed. To track the progress of Solar Cycle 24, see the references [5-6].

1. Tony Phillips, "Solar Cycle 24 Begins" (NASA Science Page, January 10, 2008).
2. Maggie McKee, "Maverick sunspot heralds new solar cycle" (New Scientist Online, January 7, 2008).
3. Tony Phillips, "What's Wrong with the Sun?" (NASA Science Page, July 11, 2008).
4. "Scientists Predict Big Solar Cycle" (PhysicsOrg, December 22, 2006).
5. Solar Cycle 24 Prediction Web Site (NOAA Space Weather Prediction Center)
6. Homepage of the Solar Influences Data Analysis Center (Royal Observatory of Belgium)

July 30, 2008

Early Computer Music

As computers have evolved, so has the art of computer music. Since music is an analog signal, a fundamental problem is the conversion of digital data to analog voltages. Nowadays, this is easy, since high resolution digital-to-analog converter chips can be purchased for just a few dollars, but it was a major problem in the early days of computer music. In a previous article (Musical Computers, July 06, 2007), I wrote about how the radio interference generated by the digital switching circuitry in computers was used to generate tones from computer programs. This technique was pioneered by Johann Gunnarsson, an Icelandic engineer who worked with an IBM 1401 computer in the early 1960s [1-2]. The 1401 was IBM's first transistorized computer.

Of course, well-heeled organizations could afford actual digital-to-analog converters and produce more musical compositions, some of which included synthesized singing voices. Many early computer music compositions were produced at Bell Labs, which had an extensive collaboration with composers from the Columbia-Princeton Electronic Music Center. This activity was an adjunct to Bell's research program in speech synthesis. Bell Labs produced computer music on an IBM mainframe computer as early as 1957, but its most famous composition is Daisy Bell, commonly known as "Daisy, Daisy." This song, synthesized in 1962 by physicist John Larry Kelly, Jr. with electronic accompaniment by Max Mathews, was later used in the film 2001: A Space Odyssey.

Recently, the BBC has released what is believed to be the first recording of computer music [3-4]. This was in celebration of the sixtieth anniversary of Baby, one of the world's first computers and the first stored-program computer. Baby had just 128 bytes of memory, which was held in an unusual device called a Williams Tube, a modified cathode ray tube that stored each bit as a persistent spot of charge on its screen. A more advanced commercial version of Baby, the Ferranti Mark 1 computer, was operating at the University of Manchester in 1951 when a BBC crew recorded the musical demonstration. The computer played Baa Baa Black Sheep, God Save the King, and part of In the Mood. The music was programmed by Christopher Strachey, a Manchester mathematician who had worked with Alan Turing.

Although this is the earliest extant recording of computer music, another computer is reported to have given a musical demonstration a few months earlier, but no recording of this is known to exist. This computer was the Australian CSIRAC (Council for Scientific and Industrial Research Automatic Computer), and the music was the Colonel Bogey March [3].

1. Art Inspired by IBM 1401 (Wikipedia).
2. Robert Andrews, "IBM 1401 Mainframe, the Musical" (Wired Online, June 29, 2007).
3. Jonathan Fildes, "Oldest computer music unveiled" (BBC Online, June 17, 2008).
4. Jonathan Fildes, "One tonne Baby marks its birth" (BBC Online, June 20, 2008).

July 29, 2008

Not Even Wrong

My son bought me several books for Father's Day, one of which was Lee Smolin's The Trouble with Physics [1]. I've just finished reading this book, and I can see why the physics message boards have been buzzing for the past year. This book is a serious indictment of String Theory, the fashionable activity of nearly every theoretical physicist and the latest attempt at a Theory of Everything. Smolin's book joins the ranks of Not Even Wrong by Peter Woit [2], whose title comes from Wolfgang Pauli's famous assessment of a proposed physics theory. So, what's the problem with String Theory? After all, it was deemed worthy of a special series on public television [3].

The problem is that String Theory has been around for more than thirty years, and it's presently the labor of hundreds of theoretical physicists. In all that time, it has yet to make one prediction that can be verified by experiment. Not only that, it doesn't adequately explain anything about the world as already discovered by experimental physicists. It's just an idea and a tremendous amount of mathematics; and this is not ordinary mathematics, but mathematics in eleven dimensions. This is not the style of physics that Smolin and Woit grew up with. Physics has always been theory verified by experiment. Smolin argues from an intimate knowledge of the subject, since he authored eighteen papers on String Theory, but he has abandoned the field to work on Loop Quantum Gravity instead. Woit was a physicist, but he now teaches mathematics at Columbia University. It's not just the physics that bothers them; it's the present culture in theoretical physics, which rewards students who work in String Theory and punishes those who would work on competing theories. It's nearly impossible for a non-string theorist to find a university appointment.

Another problem is the cult of the superman. Ed Witten of the Institute for Advanced Study, Princeton, New Jersey, is by any measure a first-rate physicist. Witten earned a Ph.D. from Princeton, had a post-doctoral appointment at Harvard, became a full professor at the age of twenty-nine, was awarded a MacArthur Fellowship (a.k.a. "Genius Grant") in 1982, at age thirty-one, and became the first physicist to be awarded a Fields Medal in 1990. The Fields Medal is considered to be the mathematics equivalent of a Nobel Prize (Alfred Nobel didn't think mathematics qualified as a topic conferring a "benefit on mankind"). Witten is held in such esteem that when he checks a book out of the Princeton University library, it becomes an instant best-seller. It appears that the string theory community works only on specific problems, usually those highlighted as important by Witten.

String theorists are now laboring under a unique problem. Their theory predicts not just our one universe, but 10^500 possible universes! A currently popular explanation is that all these universes exist, and we find ourselves in this particular universe because it's one that supports intelligent life. This explanation, called the Anthropic Principle, is criticized by most physicists as being no explanation at all. Above all, the predictive role of theory is abandoned when the Anthropic Principle is invoked. The most serious indictment of String Theory can be inferred from an incident called the Bogdanov Affair. Twin brothers Grichka and Igor Bogdanov published a series of String Theory papers in five peer-reviewed physics journals. These papers are believed to be carefully-crafted nonsense designed as a hoax to underscore problems with peer review in mathematical physics, although the authors have made no such admission. One assessment is that "The... papers consist of buzzwords from various fields of mathematical physics, string theory and quantum gravity, strung together into syntactically correct, but semantically meaningless prose. [7]"

1. Lee Smolin, "The Trouble With Physics: The Rise of String Theory, The Fall of a Science, and What Comes Next" (Houghton Mifflin, September, 2006).
2. Peter Woit, "Not Even Wrong: The Failure of String Theory & the Continuing Challenge to Unify the Laws of Physics" (Jonathan Cape, April 25, 2006).
3. Nova: The Elegant Universe.
4. Jim Holt, "Unstrung" (New Yorker Online, September 9, 2006).
5. Adam Rogers, "Physics Wars" (Wired Online, September 2006).
6. Sean Carroll, "Guest Blogger: Joe Polchinski on the String Debates" (Cosmic Variance, December 7, 2006).
7. Jacques Distler, "Bogdanorama" (University of Texas, June 5, 2004)

July 28, 2008

Images from NASA

How did the Apollo astronauts photograph the moon? Still images on the moon were taken on 70 mm photographic film in specially manufactured Hasselblad cameras. The cameras were specified for use over a -65 °C to 120 °C range. The specially-made thin-base film was manufactured by Kodak, and it was produced in 160-shot color and 200-shot black-and-white cassettes. The Apollo cameras produced images in a square aspect ratio, unlike typical cameras that shoot in a 3:2 ratio of width to height. The useful image area was a 2.25-inch square. Although holographic films resolve several thousand lines per millimeter, the resolution of photographic film is about 4,000 dots per inch. A simple calculation shows that the Apollo images have an equivalent resolution of about 80 megapixels at that resolution, or about 20 megapixels at a more conservative 2,000 dots per inch. Since the dynamic range of photographic film (the range from darkest to lightest) is nearly two orders of magnitude greater than that of the electronic imagers in commercial cameras, the quality of the Apollo lunar images is stunning.
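The megapixel equivalence is just the image area times the square of the film's linear resolution, and the answer is sensitive to the resolution one assumes:

```python
IMAGE_SIDE_IN = 2.25  # useful image side of the square Hasselblad frame (inches)

def megapixels(dpi):
    """Pixel count of a square frame, (side * dpi)^2, in millions."""
    return (IMAGE_SIDE_IN * dpi) ** 2 / 1e6

print(f"At 4,000 dpi: {megapixels(4000):.0f} MP")  # about 81
print(f"At 2,000 dpi: {megapixels(2000):.0f} MP")  # about 20
```

Film's effective resolving power under real shooting conditions is often estimated well below the nominal figure, which is how a ~20 megapixel equivalence can also be defended.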

NASA and the non-profit Internet Archive have just launched a web site, http://nasaimages.org/, that archives many historical space images, as well as video [1]. This archive merges many disparate NASA image collections into a single, searchable web site. There are images of the Apollo program and Hubble Space Telescope images. Aviation fans are not excluded, since the site will also carry photographs of NASA experimental aircraft. A few images are available presently, and the goal is to have millions of images and many hours of video posted.

Here's a preview of some images (Note that the web site is still under development, and all images may not be available at all times):

Close-up view of astronaut's footprint in lunar soil.

Buzz Aldrin on the Moon.

Satellites for Sale.

The Mercury Seven.

John Glenn.

First Class of Female Astronauts.

Female Astronaut Judith A. Resnick.

Hubble Detailed Image of the Crab Nebula.

Voyager 1 Jupiter Great Red Spot.

Mars Spirit Rover Landscape.

Earthrise - Apollo 8.

Phobos from Mars Orbiter Camera.

1. Clement James, "Nasa opens up space image library" (PCAuthority.com, July 26, 2008).

July 24, 2008

The Quantity of Matter

Does matter really matter? Of all the physical quantities in the universe, the one known with the least precision is the quantity of matter, also known as mass. We measure mass only by comparison with a crude physical object, the standard kilogram, maintained at the Bureau International des Poids et Mesures (International Bureau of Weights and Measures) outside Paris. As I wrote in a previous article (Standard Kilogram, June 28, 2007), the platinum-iridium alloy cylinder known as the kilogram prototype may have lost or gained about 100 micrograms over the century of its existence. A hundred micrograms out of a kilogram may not sound like much, but this 100 parts per billion error is just not acceptable in a world in which most physical constants are known to ten or more decimal places. In an interesting example of self-reference, although this object has changed mass, it still weighs a kilogram, since it is the kilogram.

The growth of large, perfect silicon crystals has become routine because of the utility of silicon in electronic circuitry. Silicon atoms in such crystals are perfectly arrayed, and it's possible to measure the distance between them to great precision using X-ray diffraction. The idea naturally arose that a specific number of silicon atoms, delimited by an accurately fashioned cube or sphere of silicon, could be a better mass standard. Of course, the silicon would need to be isotopically pure and contain few non-silicon impurities. Natural silicon contains 92.23% silicon-28 (²⁸Si), and an Avogadro number of ²⁸Si atoms would weigh 28 grams. This idea for replacement of the present mass standard is being pursued by an international collaboration called the Avogadro Project.

Since silicon is a light element, there are efficient ways to separate its isotopes, so large quantities of isotopically pure ²⁸Si can be produced. One economical route is the isotope-separating gas centrifuges once used to separate uranium isotopes in the former Soviet Union. Last year, a group of German scientists at the Institute for Crystal Growth (Berlin, Germany) prepared a very pure crystal of 99.994% silicon-28 [1]. This year, this crystal has been fashioned into two kilogram-sized spheres by a team in Australia [2]. The diameter of these spheres is about 93.6 mm, and their weight matches that of the Australian copy of the standard kilogram.
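The size of such a sphere follows directly from the mass and the density of silicon. A quick sketch of the calculation (the density and molar-mass values below are rounded textbook figures, not the metrology-grade numbers the project itself uses):

```python
import math

MASS_G = 1000.0        # one kilogram, in grams
DENSITY = 2.329        # approximate density of silicon (g/cm^3)
MOLAR_MASS = 27.977    # molar mass of silicon-28 (g/mol)
AVOGADRO = 6.02214e23  # atoms per mole

volume_cm3 = MASS_G / DENSITY
radius_cm = (3 * volume_cm3 / (4 * math.pi)) ** (1 / 3)  # sphere of that volume
atoms = MASS_G / MOLAR_MASS * AVOGADRO                   # atoms to be counted

print(f"Diameter: {2 * radius_cm * 10:.1f} mm")  # about 93.7 mm
print(f"Atoms in the sphere: {atoms:.3e}")       # about 2.15e25
```

The computed diameter of roughly 93.7 mm agrees with the 93.6 mm quoted for the Australian spheres; counting those twenty-some septillion atoms is the hard part.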

These spheres are the most perfect spheres in the world. Since the goal is a 0.01 ppm error in volume, the diameter must be known to within 0.6 nm, about the size of an atomic spacing in the crystal. The spheres are smooth to within 0.3 nm, and their curvature deviates from that of an ideal sphere by only 60-70 nm. If these spheres were the size of the Earth, they would be smooth to within 15 mm and would deviate from roundness by only 5 meters. Research groups in Belgium, Italy and Japan will now measure the volume of the spheres using interferometry and their numbers of atoms using X-ray diffraction.

All this is a lot of work for just two kilogram spheres, and the project has been criticized for merely replacing one standard object with another. Another approach to a mass standard, championed by the US National Institute of Standards and Technology, does not involve a physical object. This is the Watt balance [3-4], first proposed in 1975 and developed into a mass standard over the course of thirty years. It uses the magnetic force between current-carrying coils, and it is presently capable of an error in the range of 10 ppb. I would choose the Watt balance over the silicon spheres because of its simplicity and low cost. The International Committee for Weights and Measures will make its choice in 2011.

1. Nicola Jones, "Silicon crystal cooked to perfection," Nature Online (29 May 2007).
2. Devin Powell, "Roundest objects in the world created" (New Scientist Online, July 1, 2008).
3. Laura Ost, "NIST Improves Accuracy of 'Watt Balance' Method for Defining the Kilogram" (NIST Press Release, September 13, 2005).
4. "Replace Kilogram Artifact Now With Definition Based on Nature, Experts Say" (NIST Press Announcement, February 14, 2005).

July 23, 2008

Not So Empty Space

Surprisingly, a region of space devoid of all atoms, a "perfect" vacuum, is not really empty. Empty space is only empty when averaged over a period of time. At any instant, it's a sea of virtual particles coming into and out of existence. According to quantum mechanics, we can divide space into cells of a very small size, called the Planck length (1.6 × 10⁻³⁵ meter). These small regions of space have all the properties of a particle, including mass and spin. Because of quantum uncertainty, these particles are like the Cheshire Cat in Alice's Adventures in Wonderland, appearing and disappearing, both there and not there. This sea of virtual particles is not merely a theoretical construct. It's experimentally detectable as the Casimir Effect [1].

The Casimir Effect is named after the Dutch theoretical physicist Hendrik Casimir (1909 - 2000), who predicted that conductors spaced less than a micrometer apart will attract each other in a perfect vacuum. The simple explanation is that the region between the conductors forms an electromagnetic resonator that supports only a subset of the modes of the vacuum state, so the vacuum outside this region, which supports all modes, exerts a pressure on the conductors. Two parallel plate conductors of one square centimeter area, when separated by a micrometer, will have a Casimir attraction of about 10⁻⁷ newton. When the separation is reduced to 10 nanometers, the attractive force is about 10 newtons, or roughly one kilogram of force!
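For ideal parallel plates, the attraction follows Casimir's formula, F = π²ħcA/(240d⁴). A minimal check of the two figures quoted above:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant (J*s)
C = 2.99792458e8        # speed of light (m/s)

def casimir_force(area_m2, gap_m):
    """Ideal-plate Casimir attraction: F = pi^2 * hbar * c * A / (240 * d^4)."""
    return math.pi**2 * HBAR * C * area_m2 / (240 * gap_m**4)

AREA = 1e-4  # one square centimeter, in m^2
print(f"1 um gap:  {casimir_force(AREA, 1e-6):.1e} N")  # about 1.3e-07 N
print(f"10 nm gap: {casimir_force(AREA, 1e-8):.0f} N")  # about 13 N
```

The d⁻⁴ dependence is why shrinking the gap by a factor of 100 multiplies the force by a hundred million.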

One experimental problem is that real surfaces are not very smooth; most methods of depositing a conducting metal on a surface give a roughness of about 50 nm, and surface roughness of the same order as the separation reduces the Casimir force by about a factor of two. These problems notwithstanding, the Casimir Effect was crudely demonstrated using parallel plates in 1958 by one of Casimir's colleagues at the Philips Research Laboratories.

In 1997, the Casimir Effect was demonstrated to 95% accuracy using a conducting flat plate and sphere [2]. This experimental arrangement eliminated the problems of fabricating and aligning two perfectly flat parallel plates. Although the distance between the sphere and plate was small, the objects themselves were quite large: the sphere was four centimeters in diameter, and the circular plate was an inch in diameter. Modern instrumentation has led to new ways to demonstrate the Casimir Effect. Atomic force microscopy has shown agreement with theory to within 1%. The Casimir Effect has also been shown to change the resonance frequency of a MEMS oscillator [3], and it may set a minimum size for MEMS devices [4].

Study of the Casimir Effect may lead to revelations about the structure of space-time. String Theory has predicted other dimensions beyond the classical four dimensions (three of extension and one of time). These extra dimensions are supposedly tightly curled onto themselves, so they are not seen in the "real" world. The Casimir Effect operates in the regime of these small dimensions, and accurate experiments may offer clues to the validity of String Theory [5-6].

1. Astrid Lambrecht, "The Casimir effect: a force from nothing," Physics Web (September 2002).
2. S. K. Lamoreaux, "Demonstration of the Casimir Force in the 0.6 to 6 μm Range," Phys. Rev. Lett. 78, pp. 5 - 8 (January 1997)
3. H. B. Chan, V. A. Aksyuk, R. N. Kleiman, D. J. Bishop, and Federico Capasso, "Nonlinear Micromechanical Casimir Oscillator," Phys. Rev. Lett. 87, 211801 (October 2001)
4. Saswato Das, "How a quantum effect is gumming up nanomachines," New Scientist, vol. 198, no. 2662 (June 28, 2008), pp. 28f.
5. M. Bordag, U. Mohideen, V.M. Mostepanenko, "New Developments in the Casimir Effect," Phys.Rept. 353 (2001) pp. 1-205 (arXiv.org) Note! 275 page PDF file!
6. The February 2007 issue of Physics Today has a more technical overview of the Casimir Effect (Steve K. Lamoreaux, "Casimir Forces: Still surprising after 60 years," Physics Today, vol. 60, no. 2 (February 2007), pp. 40-45).

July 22, 2008

The Fairer Scientist

Athena was an Olympian goddess of Greek mythology. She is the Olympian identified with science and engineering, since she was known as the goddess of wisdom and crafts. Her importance is underscored by the tradition that she was the daughter of Zeus, the king of the gods, and the goddess Metis. Athena had an unusual birth, having sprung from Zeus' forehead (What a headache!) after Zeus had incorporated the body of Metis. The process for this incorporation is not quite clear, so the story is that Zeus "swallowed" Metis whole. Metis was a Titan and a goddess of crafty thought and wisdom; so, like mother, like daughter. It's interesting that science and engineering were identified with women at the time of these myths, more than 2,700 years ago.

It's very apparent to those of us working in science and engineering that there are very few women scientists and women engineers. I wrote on this topic in a previous article (Women in Science and Engineering, November 10, 2006). The number of women scientists and engineers in my generation is very small, but the situation in the US has been improving [1]. In fact, women claim 41% of entry-level jobs in industrial research, but many decide to quit in their mid-thirties [2]. The percentage of women among Ph.D. scientists working at universities has increased from 9% in 1973 to 33% in 2006. Physics, however, is a less attractive field for women. Each year, about 1200 Physics Ph.D. degrees are awarded, but only about 185 of these (15%) are awarded to women. There does not appear to be any actual prejudice against women entering science and engineering fields. Women are just not interested, and the decision against a science and technology career comes very early, usually in high school.

So, what does the government do when it sees inequality, independently of its cause? It forces people to equalize. In the US there is a 1972 law, called not-so-descriptively, "Title IX," that forbids sexual discrimination in education. The impetus for passage of that law was sports, which is unfortunately more important to most students and their parents than education. However, the law is general enough to include all activities of any educational institution receiving federal support:

"No person in the United States shall, on the basis of sex, be excluded from participation in, be denied the benefits of, or be subjected to discrimination under any education program or activity receiving Federal financial assistance."

Now, some members of the US Congress and some women's groups believe that women face discrimination in certain sciences, and they want Title IX applied to the practice of science [3]. Three of the largest federal funders of research, NASA, the US Department of Energy and the National Science Foundation, have audited physics and engineering departments at MIT and other major universities looking for such discrimination. Amber Miller, who leads the Experimental Cosmology group at Columbia University, has called her audit interview "a complete waste of time [4]." What is certain is this: many man-hours of productive research time of the best US scientists and engineers are being consumed by additional paperwork.

Critics of the Title IX approach fear that a quota system could damage scientific research and actually harm the reputation of women in science. One approach taken by universities to achieve Title IX compliance in sports was not to add equivalent women's teams, but rather to eliminate men's teams. What if this approach were applied to scientific disciplines? There's ample evidence that women in science are quite successful. In a previous article (Ms. Carbon, March 12, 2007), I reviewed the career of Mildred Dresselhaus, who is a professor of Physics and Electrical Engineering at the Massachusetts Institute of Technology (MIT). There's also a web site that presents eighty-three women physicists who made important and original contributions in the first three-quarters of the twentieth century.

All this reminds me of an interesting anecdote (OK, everything reminds me of an interesting anecdote). Many years ago at Allied-Signal (now Honeywell), I attended a project review at which one of the managers asked the speaker if he would "dumb it down" to make his talk more understandable. He said, "Explain it like you would explain it to your mother." The speaker laughed and said, "My mother has a Ph.D. in Electrical Engineering."

1. This Blog, "US Science Scorecard 2008," (January 25, 2008).
2. Kelli Whitlock Burton, "Newsmakers: Datapoint, An Exodus of Women," Science, vol. 321, no. 5885 (July 4, 2008), p. 21.
3. John Tierney, "A New Frontier for Title IX: Science" (New York Times, July 15, 2008).
4. Yudhijit Bhattacharjee, "U.S. Agencies Quiz Universities on the Status of Women in Science," Science, vol. 315, no. 5820 (March 30, 2007), p. 1776.
5. Recognition of the Achievements of Women In Science, Medicine, and Engineering Web Site.

July 21, 2008

Efficient Photovoltaics

Many years ago I worked on a project to make a compact, high intensity, high resolution cathode ray tube (CRT) for projection displays. The application was the replacement of the huge cathode ray tubes used in shipboard radar systems. What made our CRT unique was its phosphor screen, which was made from a single crystal rather than the conventional powder phosphor dispersed on glass. The use of a single crystal eliminated a host of difficulties, such as the phosphor burn and the limited resolution of powder phosphor screens. Of course, technology is never that easy, so there was a problem with the single crystal approach not found in powder screens; namely, waveguiding. Light generated in the phosphor crystal beyond a critical angle would waveguide itself to the edges of the screen. The critical angle for any material is easily calculated from Snell's Law. To reduce the loss of efficiency from waveguiding, we etched a microscopic pattern into the phosphor to focus the waveguided light to forward angles [1].
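Snell's Law gives the critical angle as θc = arcsin(n₂/n₁); light striking the surface beyond that angle is totally internally reflected and guided to the edges. A sketch of the numbers, assuming a hypothetical phosphor crystal with a refractive index of 1.82 (roughly that of YAG; the actual crystal isn't specified here):

```python
import math

N_CRYSTAL = 1.82  # assumed refractive index of the phosphor crystal
N_AIR = 1.0

# Critical angle for total internal reflection at the crystal-air interface
theta_c = math.degrees(math.asin(N_AIR / N_CRYSTAL))

# Fraction of isotropically emitted light inside the escape cone of one face
escape_fraction = (1 - math.cos(math.radians(theta_c))) / 2

print(f"Critical angle: {theta_c:.1f} degrees")     # about 33 degrees
print(f"Escaping one face: {escape_fraction:.1%}")  # about 8%
```

With an index near 1.8, only roughly eight percent of the light escapes through the front face, which is why recovering the waveguided light mattered so much.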

Electrical engineers at MIT have turned this waveguiding effect to advantage in a novel photovoltaic solar cell architecture [2-5]. Their solar cells are stacks of organic-dye-coated glass that waveguide light to their edges. Conventional silicon photovoltaics placed at the edges convert the waveguided light to useful electrical power. Since the quantity of silicon collector photovoltaics needed in this architecture is small, high quality single-crystal silicon can be used. Single crystal silicon photovoltaics have a much higher efficiency than the polycrystalline silicon photovoltaics used in large area solar collectors. A similar waveguiding approach was tried in the 1970s, but the idea at that time was to use dyes volume-dispersed in plastic sheets, a technique common for the scintillation detectors used in medical imaging, but not very efficient for optical waveguiding. The MIT team decided instead to place the dyes as thin films on the surface of the glass, thereby enhancing the waveguide efficiency.

The reported quantum efficiency of this novel architecture is impressive, approaching 50%. The power conversion efficiency is 6.8%. This may seem low, but it's greater than the industry standard. The important feature of this new solar cell is its low cost compared to conventional photovoltaics. Furthermore, these new devices can be stacked atop existing solar cell arrays, allowing the existing arrays to continue producing power in the infrared spectrum, where they're most efficient, while the waveguide cells harvest the shorter wavelength energy. The MIT team estimates that such a conversion would almost double the efficiency of existing photovoltaic installations at a small additional cost. Of course, one remaining problem is the lifetime of the organic dyes. Solar energy collectors need a 20-30 year lifetime, which is a stretch for an organic dye.

This research was funded by the National Science Foundation as part of its interdisciplinary program to bring photovoltaic efficiency up to that of nature's standard process, photosynthesis. The principals of this research have started a company, Covalent Solar, to commercialize these solar cells. Covalent Solar won $30,000 in prizes at a recent MIT $100K entrepreneurship competition. A report of this research appears in a recent issue of Science [5].

1. D.M. Gualtieri, H. van de Vaart, T. St. John, and R.C. Hebb, "High Luminance Monochrome Faceplates for Compact Projection Displays," Journal of the Society for Information Display, vol. 1, no. 2 (1993), pp. 123-127.
2. Alexis Madrigal, "See-Through Solar Hack Could Double Panel Efficiency" (Wired Online, July 10, 2008).
3. Joshua A. Chamot, "A Colorful Approach to Solar Energy" (NSF Press Release No. 08-118, July 10,2008).
4. Elizabeth Thomson, "MIT opens new 'window' on solar energy" (MIT Press Release, July 10, 2008).
5. Michael J. Currie, Jonathan K. Mapel, Timothy D. Heidel, Shalom Goffri and Marc A. Baldo, "High-Efficiency Organic Solar Concentrators for Photovoltaics," Science, vol. 321, no. 5886 (July 11, 2008), pp. 226-228.

July 17, 2008


Scientists have had great success with creating nanoscale matter over the past decade, so now they've turned to a complementary activity: the creation of the absence of matter on a nanoscale; namely, nano-bubbles. At this point, the bubbles themselves are not nanoscale, but their surface features are. A group led by Howard A. Stone at Harvard's School of Engineering and Applied Sciences has produced long-lived micrometer-sized bubbles in solutions of glucose, sucrose stearate, and water [1]. These bubbles form with polygonal surface features of about 50 nanometer dimension, and they're stable for up to a year. This is an unusual result, since micrometer-sized bubbles are normally unstable. In a typical bubbly liquid, larger bubbles grow at the expense of smaller bubbles, since the surface energy of one large bubble is smaller than the combined surface energy of the small bubbles it absorbs.
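The driving force for this ripening is the Laplace pressure, ΔP = 2γ/r: the smaller the bubble, the higher its internal pressure, so gas diffuses from small bubbles into large ones. A rough illustration, assuming the surface tension of pure water (real formulations with surfactants have lower γ):

```python
GAMMA = 0.072  # surface tension of water (N/m), an assumed value

def laplace_pressure(radius_m):
    """Excess pressure inside a gas bubble in liquid: dP = 2*gamma/r."""
    return 2 * GAMMA / radius_m

print(f"1 mm bubble: {laplace_pressure(1e-3):.0f} Pa")        # 144 Pa
print(f"1 um bubble: {laplace_pressure(1e-6) / 1e3:.0f} kPa") # 144 kPa
```

A micrometer-sized bubble carries an excess pressure of more than an atmosphere, which is why such bubbles normally dissolve quickly unless a shell stabilizes them.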

To make these bubbles with nanoscale surface features, the Harvard team utilized a few fluid mechanical tricks. The first was adding amphiphilic molecules to the mix. As the name implies, amphiphiles are both hydrophilic and hydrophobic, each molecule having a water-attracting part and a water-repelling part. Common examples of amphiphiles are soaps and detergents, and these offer short-term stabilization of bubbles. The next trick was the use of a kitchen mixer. The mixing action produced a volume array of micrometer-sized bubbles stabilized by the nanoscale polygons at their surfaces. The surfactants in the mixture self-assemble to cover the bubble surfaces and form a mechanically stable shell that protects the bubble volume.

The Harvard research was sponsored by Unilever, many of whose products, such as aerated personal-care products, involve foams. Unilever scientists collaborated in this work, a report of which appears as a paper in Science [2].

While on the topic of bubbles, I'll mention another bubble process that interests physicists, sonoluminescence. Sonoluminescence is the light emitted when a bubble in a liquid collapses (implodes) as a result of high intensity sound excitation. This phenomenon was discovered in 1934 during research on sonar systems, and the equipment to produce this effect is so simple, it could be done as a high school science experiment. In 1989, research in this area was accelerated by a technique for producing sonoluminescence in an isolated bubble, which allowed more precise analysis of the implosion process. The bubbles are about a micrometer in size, and the light emission occurs in only a few tenths of a nanosecond. If the dissolved gas is a noble gas, such as helium, argon or xenon, the light intensity is increased, and the optical power can be as high as ten milliwatts. This doesn't sound like much power, but since it's generated in such a small volume, the gas temperature inside an imploding bubble can reach 20,000 K. Such high temperatures could possibly generate thermonuclear fusion. Although some papers have been published that indicate that such a fusion reaction does happen, this finding is still controversial [3].

1. Michael Patrick Rutter, "Engineers whip up the first long-lived nanoscale bubbles" (Harvard University Press Release, May 29, 2008).
2. Emilie Dressaire, Rodney Bee, David C. Bell, Alex Lips and Howard A. Stone, "Interfacial Polygonal Nanopatterning of Stable Microbubbles," Science, vol. 320, no. 5880 (May 30, 2008), pp. 1198-1201.
3. Erico Guizzo, "Bubble Fusion Research Under Scrutiny" (IEEE Spectrum Online, May 2006).

July 16, 2008

Tequila Diamonds

My thesis advisor, Peter Ficalora, worked as a post-doctoral associate of John L. Margrave (1924-2003), who was a Professor of Chemistry at Rice University and a member of the US National Academy of Sciences [1]. Margrave's expertise was in chemistry at high temperature and high pressure, and he was founding editor of the journal High Temperature Science; but he is best known for his work on fluorine chemistry. Polytetrafluoroethylene (PTFE, also known as Teflon), discovered accidentally in 1938, was the first important fluorocarbon compound, and it's still important today. Margrave extended fluorination to many other materials, including silicon, carbon nanotubes, and the high temperature lubricant CFx, produced by fluorinating carbon. Margrave, along with his associate, Richard Lagow, developed and commercialized the LaMar process [2] (named for the inventors), which produced a fluorinated coating, FluoroKote, on plastic and paper articles, surgical gloves and rubber washers. Ficalora collaborated on a process to make fluorinated diamond [3], and he recalled that in the early days they would try to fluorinate almost anything. In one, probably unpublished, experiment, they fluorinated peanut butter.

Peanut butter is not the only strange starting material for a materials science experiment. A group of physicists at the Universidad Autónoma de Nuevo León and the Universidad Nacional Autónoma de México decided that their usual ethanol-water mixture wasn't that exciting for making diamond films on various substrates using pulsed liquid injection chemical vapor deposition, so they tried tequila instead [4-5]; specifically, Orendain brand tequila. This white tequila, made from the juice of the blue agave plant (Agave tequilana), is an 80 proof mixture with a carbon:hydrogen:oxygen ratio of 0.37:0.84:0.29. Small quantities of impurities in this precursor are said to improve nucleation of the diamond film.

In the tequila process, the tequila was vaporized at 280 °C as it was introduced in four millisecond pulses of about five microliters volume at a two hertz rate into an argon carrier gas stream. This resulted in a chamber pressure of about five torr. Reaction on silicon wafers or type 304 stainless steel occurred at about 850 °C. The presence of diamond was verified by observation of the characteristic 1332 cm-1 Raman band of diamond. Diamond crystallites were spherical, with diameters of about 100-400 nm.

This isn't the first time such a strange precursor has been used in diamond synthesis. Constance Holden reports in Science [6] that when diamond CVD technology was first established, Japanese scientists used sake, Russians used vodka, Americans used whiskey, and the grease from a lamb kebob was used once as a carbon source. Of course, there's the company that will turn the remains of your deceased loved ones into diamonds.

Since we're talking about diamonds, I'll mention one of the compact discs in my music collection. This is Diamond Music, a 1995 composition by Karl Jenkins. In my estimation, Jenkins' compositional style is much like that of Nino Rota, another composer represented in my music collection.

1. Rice mourns Chemistry's Margrave (Rice University Press Release, January 15, 2004).
2. John L. Margrave and Richard J. Lagow, "Process for the Production of Hydrolytically Resistant Fluorocarbons," US Patent No. 3,758,450 (September 11, 1973).
3. John L. Margrave, Renato G. Bautista and Peter J. Ficalora, "Chemical Methods for Producing Diamonds and Fluorinated Diamonds," US Patent No. 3,711,595 (January 1, 1973).
4. Tequila is surprise raw material for diamond films (New Scientist Online, June 20, 2008).
5. Javier Morales, Miguel Apatiga, Victor M. Castano, "Growth of Diamond Films from Tequila" (arXiv Preprint, June 9, 2008).
6. Constance Holden, "Paracelsus, Eat Your Heart Out," Science, vol. 320, no. 5884 (June 27, 2008), p. 1701.

July 15, 2008

Einstein Right Again

Pulsars are dense, compact neutron stars detected by their periodic radio emission. They were discovered forty years ago [1]. Now, an international collaboration of astronomers from Canada, the United Kingdom, France, Italy and the United States has provided another confirmation of Einstein's Theory of General Relativity by measuring the orbits of a double pulsar named PSR J0737-3039A/B [2-4]. This double pulsar is composed of two neutron stars that orbit each other. A chance alignment of their orbital plane produces a thirty-second eclipse of the "A" pulsar as it moves behind the "B" pulsar every orbit, and this eclipse was essential for obtaining the data required for the relativity test. Since these neutron stars are dense and closely spaced, General Relativity predicts a high curvature of space at their orbits that should cause a precession (wobble) of the stars' spin axes. A precession of about 5 degrees per year was found, which is within 13% of the prediction of General Relativity and well within experimental error. These stars are expected to spiral into each other in about 85 million years.

The official discovery of pulsars [1] was forty years ago, on August 6, 1967, but they were observed even earlier. The scientific discovery of pulsars was made by Jocelyn Bell, who was a graduate student of Antony Hewish, but pulsars had been seen by a US Air Force sergeant, Charles Schisler, a few months before. Schisler, who operated a radar in the Ballistic Missile Early Warning System (BMEWS) at Clear Air Force Station, Alaska, noticed a blip that advanced in position four minutes each day, a sure sign of an extraterrestrial source. His observations were at 420 MHz, and his radar system was fast enough to show that the source varied in intensity with a pulse period of 1.3 seconds.
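That four-minute daily advance is the signature of a source fixed on the celestial sphere: the Earth rotates once relative to the stars in a sidereal day, which is shorter than the solar day. A quick check with approximate constants:

```python
SOLAR_DAY = 86400.0       # mean solar day, seconds
SIDEREAL_DAY = 86164.1    # Earth's rotation relative to the stars, seconds

drift = SOLAR_DAY - SIDEREAL_DAY    # how much earlier a star rises each day
print(drift / 60)                   # ~3.93 minutes: "about four minutes"
```

A radar blip that keeps solar time is man-made or local; one that keeps sidereal time, like Schisler's, must be celestial.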

Schisler kept a log book of his observations, and a visit to the University of Alaska Fairbanks showed him that the coordinates of his source matched those of the Crab Nebula, a known radio source not yet identified as a pulsar. Schisler found other celestial radio sources as well, including some not yet catalogued by radio astronomers. After the announcement of the Hewish-Bell discovery, Schisler realized what he had observed. For security reasons, he wasn't able to publish his "pre-discovery" until 2007, when he revealed the details at the 40 Years of Pulsars Conference in Montreal, Canada, introduced by Jocelyn Bell herself [7]. Perhaps the conference venue, outside the US, made the announcement easier. Schisler reported that other BMEWS radars had detected pulsating extraterrestrial sources as early as 1964, but no record was made of their detection.

1. A. Hewish, S. J. Bell, J. D. H. Pilkington, P. F. Scott and R. A. Collins, "Observation of a Rapidly Pulsating Radio Source," Nature, vol. 217 (February 24, 1968), pp. 709-713.
2. Rachel Courtland, "Pulsar's wobble provides new Einstein test" (New Scientist Online, July 3, 2008).
3. In Unique Stellar Laboratory, Einstein's Theory Passes Strict, New Test (NSF Press Release, July 3, 2008).
4. Rene P. Breton, Victoria M. Kaspi, Michael Kramer, Maura A. McLaughlin, Maxim Lyutikov, Scott M. Ransom, Ingrid H. Stairs, Robert D. Ferdman, Fernando Camilo and Andrea Possenti, "Relativistic Spin Precession in the Double Pulsar," Science, vol. 321, no. 5885 (July 4, 2008), pp. 104-107.
5. Geoff Brumfiel, "Air Force had Early Warning of Pulsars," Nature, vol. 448, no. 7157 (30 August 2007), pp. 974-975.
6. Air force had early warning of pulsars (Physics Today News Pick, August 31, 2007).
7. Steinn Sigurðsson, "How the US Air Force failed to win the Nobel prize..." (Science Blogs, August 18, 2007).
8. Aerial view of Clear Air Force Station, Alaska (Google Maps, 64°17'19"N, 149°11'22"W).

July 14, 2008

The Large Hadron Collider

When Robert Oppenheimer witnessed the first nuclear explosion, he remembered a phrase spoken by Vishnu in the Bhagavad Gita, "Now I am become Death, the destroyer of worlds." The Trinity Test, the first man-made nuclear explosion, may have seemed like the end of the world to many of the physicists who witnessed it, but some present generation physicists are worried that their newest particle accelerator, the Large Hadron Collider (LHC), might actually destroy the Earth, and perhaps the universe as well. This is a recurrent worry of the high energy physics community, but they've always calculated the potential risks before undertaking any grand new venture.

There was concern before the Trinity Test that the nuclear reaction of the test device might trigger a runaway fusion reaction in the nitrogen of Earth's atmosphere, thereby setting the world on fire. A calculation by Hans Bethe showed this would not happen. Similar fears emerged when the Relativistic Heavy Ion Collider (RHIC) came online nearly a decade ago [2, 3]. That accelerator was designed to collide particles at tremendous energies to produce new subatomic species for study, but there was a worry that some of these particles might be too exotic. A team of physicists examined the possibility that particle collisions in the RHIC might trigger catastrophic events [4]:

• Formation of a microscopic black hole that would devour the planet. The team concluded that the energies were at least a factor of 10^22 too small for this to happen.

• Collapse of the vacuum. The vacuum is not empty space. It's seething with virtual particles that emerge into reality and then quickly disappear. The vacuum may be in a stable state, but not the most stable state, so a high energy event might precipitate a phase transition that would affect the entire universe. They calculated that about 10^47 cosmic ray collisions at RHIC energies have already occurred in the universe, compared with the 10^11 collisions expected over the lifetime of the RHIC. Since cosmic rays haven't collapsed the vacuum, the RHIC isn't expected to, either.

• Strangelets. The team spent most of its time studying the possibility that the RHIC might produce strangelets, hypothetical stable lumps of "strange" quark matter that might convert ordinary matter on contact. They argued that the existence of the moon, proven to be normal matter by actual human contact even after billions of years of exposure to cosmic rays, made the strangelet scenario unlikely.

A subsequent study looked closely at cosmic ray and particle accelerator energies. Its authors concluded, at the 99.9% confidence level, that no catastrophic event of this sort, whether natural or man-made, is likely to occur in the next billion years [5]. Not mentioned in the RHIC report, but in the subsequent report, was the possibility of creating magnetic monopoles that might cause protons to decay. Once again, cosmic rays set a limit on such a possibility.

Some physicists were not as confident about the LHC, so they filed a lawsuit on March 21, 2008, in a U.S. court to compel the US Department of Energy, Fermilab, the National Science Foundation and CERN to stop work on the LHC until a safety assessment can be done [6]. Initial arguments were filed by both the plaintiffs and the government on June 24, 2008, and a hearing will be held on September 2, 2008, to determine whether the case should proceed [7]. CERN has launched a public website to present arguments against a doomsday scenario [8].

1. J. Robert Oppenheimer "Now I am become death..." (Interview Clip, The Atomic Archive).
2. Mark Buchanan, "The final frontier?," Nature Physics, vol. 4, no. 6 (June 2008), p. 431.
3. Robert Matthews, "A black hole ate my planet" (New Scientist Online, August 28, 1999).
4. R. L. Jaffe, W. Busza, J. Sandweiss and F. Wilczek, "Review of Speculative 'Disaster Scenarios' at RHIC" (arXiv, October 13, 1999; revised July 14, 2000).
5. Max Tegmark and Nick Bostrom, "How unlikely is a doomsday catastrophe?" (arXiv, December 8, 2005; revised December 21, 2005).
6. Alan Boyle, "Doomsday fears spark lawsuit" (MSNBC, March 27, 2008).
7. Alan Boyle, "Doomsday lawsuit dissed" (MSNBC, June 24, 2008).
8. The safety of the LHC (CERN).

July 10, 2008

Television Destroyed My Planet

One of the most memorable statements of the 1960s was made by Newton Minow, former Chairman of the US Federal Communications Commission, in what's today known as the Wasteland Speech (May 9, 1961).

I invite you to sit down in front of your television set when your station goes on the air and stay there for a day without a book, magazine, newspaper, profit-and-loss sheet or rating book to distract you, and keep your eyes glued to that set until the station signs off. I can assure you that you will observe a vast wasteland.

According to recent news coverage, television may make the Earth into a vast wasteland through global warming. The culprit is a gas used in the manufacture of flat-screen televisions [1-3].

Nitrogen trifluoride (NF3) is a gas used for plasma etching of materials, since it decomposes into nitrogen and fluorine in the plasma. Fluorine, of course, is an excellent etchant, since it reacts readily with just about everything, and many fluorides are volatile at room temperature. Interestingly, fluorine will not react with nitrogen to create NF3, but NF3 can be formed by the reaction of ammonia with fluorine. In 1992, less than 100 tons of NF3 were produced worldwide, but it's so useful in electronics manufacturing that 4,000 tons were produced in 2007. Production is expected to double each year because of its use in the manufacture of flat-screen display panels for television receivers.

The problem is that nitrogen trifluoride is a potent greenhouse gas. Its greenhouse warming potential is between 16,800 and 17,200 times that of CO2, since it has a lifetime in the atmosphere of between 550 and 740 years. Although NF3 is broken down into its constituent elements during use, unless the reaction yield is 100%, or chemical scrubbers are perfectly efficient, some will be released into the atmosphere. Nitrogen trifluoride is not regulated by the Kyoto Protocol. Air Products and Chemicals, a major producer of the gas, says that little nitrogen trifluoride is released into the atmosphere. Things were once worse: nitrogen trifluoride replaced hexafluoroethane and sulfur hexafluoride as plasma etching agents, and those are worse greenhouse gases still.
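As a rough upper bound, suppose (contrary to the industry's claims) that the entire 2007 production escaped; the warming impact can then be expressed in CO2-equivalent terms:

```python
nf3_tons = 4000   # 2007 world production, metric tons (from the text)
gwp = 17000       # warming potential relative to CO2, midpoint of the range

co2e_megatons = nf3_tons * gwp / 1e6
print(co2e_megatons)   # 68.0 Mt of CO2-equivalent in the worst case
```

That would be comparable to the annual CO2 output of a sizable industrialized country's power sector, which is why even small leakage rates of such a potent gas attract attention.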

The nitrogen trifluoride problem was highlighted by Michael Prather, director of the Environment Institute at the University of California, Irvine, and his colleague, Juno Hsu, in a recent issue of Geophysical Research Letters [4].

1. Ian Sample, "Environment: Climate risk from flat-screen TVs" (The Guardian, July 3, 2008).
2. Elsa Wenzel, "LCD making worse for environment than coal?" (CNET.com, July 3, 2008).
3. Flat-Screen TV Gas 'a Climate Time Bomb' (Fox News, July 04, 2008).
4. Michael J. Prather and Juno Hsu, "NF3, the greenhouse gas missing from Kyoto," Geophysical Research Letters, vol. 35 (June 26, 2008), L12810, doi:10.1029/2008GL034542.

July 09, 2008

Mathematical Tables at NIST

NIST is the US National Institute of Standards and Technology. It was known as the National Bureau of Standards until 1988. The name was changed to reflect its major role in promoting US technical innovation, although it's still true that you can buy standard bottles of peanut butter (SRM 2387, Peanut Butter) for the bargain price of just six hundred dollars for half a kilogram. A major activity of NIST is the publication of reference books, one of which I mentioned in a previous article (For Your Reading Pleasure, May 8, 2008). This is the Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables by Milton Abramowitz and Irene Stegun [1], a thousand page book first published in 1964 just as I was completing high school. That book, and the mathematics sections of the CRC Handbook, were invaluable references during my undergraduate and graduate years. Since there is no copyright on the Abramowitz and Stegun publication, it's freely available in many places on the Internet [2].

Sales of this handbook by the US Government Printing Office exceed 150,000 copies, and it's estimated that six times as many reprint versions have been sold by commercial publishers, notably Dover Publications [3-6]. A hundred thousand copies were printed in its first few years, from 1964 to 1968 [5], and they were sold by the US Government Printing Office for about ten dollars each, about a penny per page. The handbook has been reprinted by such varied international publishers as Nauka and Verlag Harri Deutsch, and it has received more than 20,000 citations [6].

The NBS began its Mathematical Tables Project in 1938. The project employed not just mathematicians, but also a large staff of "computers." Computers in those days were people operating hand calculators. The Mathematical Tables Project produced thirty-seven volumes of the NBS Math Tables Series between 1938 and 1946. These included tables for the trigonometric functions, the exponential function and natural logarithms. In 1954, the NBS and NSF organized a two-day conference on tables at MIT, and this served as the planning meeting for the Handbook. A planning committee, which included Abramowitz and such mathematical luminaries as Richard W. Hamming and John W. Tukey, obtained NBS and NSF support for the Handbook project, which began officially in December, 1956. Handbook chapters were written by NBS staff and paid academic consultants. Twelve chapters were completed by July, 1958, when Abramowitz suffered a fatal heart attack. Management of the handbook project fell to Irene Stegun, who was Assistant Chief of the NBS Computation Laboratory. The Handbook was completed in 1964.
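The contrast with the modern era is striking: a table that once occupied a roomful of human computers can be regenerated in a few lines of code. A sketch in Python, printing a few rows of an exponential and natural logarithm table of the kind in the NBS Math Tables Series:

```python
import math

# A few rows of a ten-figure table of exp(x) and ln(x),
# the sort of thing the NBS "computers" produced by hand.
print(f"{'x':>4}  {'exp(x)':>15}  {'ln(x)':>15}")
for i in range(1, 6):
    x = 0.5 * i
    print(f"{x:4.1f}  {math.exp(x):15.10f}  {math.log(x):15.10f}")
```

Of course, the hard part in 1938 was not listing the values but computing them reliably, digit by digit, with difference checks to catch errors.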

The handbook is more than forty years old, but the nice thing about mathematical knowledge is its permanence. However, a thousand page print book is somewhat of an anachronism in the Internet age, so NIST is working on a successor to the Handbook called the Digital Library of Mathematical Functions (DLMF) [7]. The DLMF, which will be freely available online, is based on a new survey of the mathematical literature and will include interactive graphics. The DLMF editors, who have been working on this project for more than ten years, are Frank W. J. Olver, Daniel W. Lozier, Ronald F. Boisvert and Charles W. Clark. A five-chapter preview is available at http://dlmf.nist.gov/. The DLMF will eventually have thirty-six chapters.

One innovation of the DLMF will be its use of the Virtual Reality Modeling Language (VRML) for its interactive graphics. The DLMF web site should be completed in early 2009, and it's expected to contain twice the content of the Handbook. Its 9,000 equations and 500 figures will also be available in another thousand page printed handbook. I mentioned "Engineer's Elbow" in a previous post (Scientist Programmers, June 09, 2008). That was a malady, common during my undergraduate years, brought about by carrying heavy boxes of computer punch cards. We may have a new era of "Scientist's Shoulder" if this massive tome becomes popular.

1. M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions With Formulas, Graphs, and Mathematical Tables, National Bureau of Standards, Applied Mathematics Series 55 (Superintendent of Documents, U.S. Government Printing Office, Washington, D.C. 20402).
2. You can find the handbook at these web sites, http://www.math.sfu.ca/~cbm/aands/, http://mintaka.sdsu.edu/faculty/wfw/ABRAMOWITZ-STEGUN/index.htm, http://www.nrbook.com/abramowitz_and_stegun/, and many other places. A massive PDF file (60 MB, not recommended for the faint-hearted) can be found at http://www.math.sfu.ca/~cbm/aands/dl/Abramowitz&Stegun.pdf
3. Milton Abramowitz and Irene A. Stegun, Eds., Handbook of Mathematical Functions: with Formulas, Graphs, and Mathematical Tables (Paperback, 1046 pages, Dover Publications, June 1, 1965, via Amazon.com).
4. NIST at 100: Foundations for Progress - 1964: Mathematics Handbook Becomes Best Seller (NIST).
5. Lewis M. Branscomb, Preface to the Ninth Printing (November, 1970).
6. Handbook of Mathematical Functions.
7. Ben Stein, "NIST releases preview of much-anticipated online mathematics reference" (NIST Press Release, June 26, 2008).

July 08, 2008

Roll Your Own

All of us have noticed that when we roll a piece of paper, plastic, or other stiff, but pliant, material, the inner coil doesn't contact the outer roll completely. The inner coil separates from the outer roll about half a turn through, leaving a gap. This happens not because of our imperfect coiling technique. It always happens. When physicists hear the phrase, "always happens," they start to think that there's a universal law waiting to be discovered. In fact, that's the case with coiling. As summarized in a news article in a recent issue of Nature [1], physicists at the University of Santiago, Chile, have discovered that the contact angle the inner edge makes to the outer coil is always 24.1°, regardless of the sheet thickness or the coil diameter [2-3]. As long as the sheet can be coiled without plastic deformation, this universal behavior is independent of the material, also.

The Chilean physicists performed experiments on a variety of materials, including thin sheets of mica and metal, over a wide range of coiling diameters. They found that the contact angle agreed to within a degree with the universal angle they calculated from first-principles mechanics. You would think that such a calculation would have been done already, considering the long history of mechanics, but the Chilean team was the first. Enrique Cerda, a member of the research team, thinks the problem would have been accessible to an eighteenth century mathematician, though today's computers help with some of the calculations. The essential mechanism at work here is the geometrical constraint caused by the coiling. The universality occurs simply because the problem reduces to a geometry problem in which the material is a casual bystander.

1. Philip Ball, "Universal law of coiling: Physicists reveal why paper curls the way it does," Nature, vol. 453, no. 7198 (June 19, 2008), p. 966.
2. V. Romero, T. A. Witten and E. Cerda, "Multiple coiling of an elastic sheet in a tube," Proc. R. Soc. A, DOI: 10.1098/rspa.2007.0372 (June 10, 2008).
3. Victor Romero, Enrique Cerda, T. A. Witten, Tao Liang, "Force focusing in confined fibers and sheets" (arXiv Preprint, April 22, 2008); available as a PDF file.

July 07, 2008


My children were often astounded (well, perhaps "amused" would be the better word) how I would calculate probable values for just about anything with a number attached. When they were looking for colleges to attend, I was able to calculate the minimum SAT score needed for admission from some seemingly irrelevant facts in the college's promotional literature (e.g., "Half of our students have SAT scores greater than X, and one in five have a score greater than Y"). I'm also the guy who has the exact change ready for the clerk, sales tax included, before she rings up the sale; and I estimate the value of our grocery order within a few percent by visually scanning the items in our shopping cart. I don't consider any of this to be extraordinary, since I'm sure all physical scientists could do similar things, but their passions reside elsewhere.
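The college-admissions trick works because two published percentiles are enough to pin down a normal distribution of scores. A minimal sketch in Python, with both brochure numbers (1100 and 1300) invented purely for illustration:

```python
from statistics import NormalDist

# Hypothetical brochure facts: "Half of our students have SAT scores
# greater than 1100, and one in five have a score greater than 1300."
median, top_fifth = 1100, 1300

z80 = NormalDist().inv_cdf(0.80)        # ~0.84 standard deviations
sigma = (top_fifth - median) / z80      # implied spread of admitted scores
scores = NormalDist(mu=median, sigma=sigma)

# The bottom tenth of admitted students, a proxy for the minimum
# score with a realistic chance of admission:
print(round(scores.inv_cdf(0.10)))      # about 795
```

The median gives the mean directly, the 80th-percentile figure gives the standard deviation, and from there any other quantile of the class falls out of the inverse CDF.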

I thought about estimating while listening to the radio last week. In anticipation of the Independence Day holiday, a local "Oldies" station (WCBS-FM, New York) was doing a holiday special, "CBS-FM's Seven Day Weekend," in which they played music in alphabetical order, by title, from A to Z. This radio event started at 9:00 AM on Monday, June 30, 2008 [1]. Researching their playlist, I found that they had played 88 songs with a title starting with A. How many would they play in all, and how long would it take to go through one cycle (they promised to restart at A when they completed Z)?

In a previous article (Morse Code, July 3, 2008), I wrote how Samuel F. B. Morse devised his telegraph code so that more common letters are easier to transmit than the less common letters. As we all know, certain letters are used more often than others. If all letters were used equally, each would occur about four percent of the time. In reality, e and t are used a combined 21.8% of the time, and the letter a is used a respectable 8.167% of the time. Although it may not be right to assume that the starting letters of words have the same frequency of use as letters in general, or that song titles are representative of the language (I suspect "love" would be over-represented), let's calculate. (Note that in their playlist the starting articles A, An and The are not considered to be part of the alphabetical arrangement.)

If 88 songs are 8.167% of the total, there will be about 1077 songs. Allowing for commercial breaks and the like, about fifteen songs are played per hour, so our estimate is that a cycle will take 71.8 hours, or almost exactly three days. It appears that there will be enough time for two complete cycles (six days), and I wouldn't be surprised if they arranged the number of songs to give exactly two cycles. At 10:00 AM on Friday, July 4, 2008, as I write this article, they were already at the start of M on the second cycle. My estimate appears to be closer than that of the station's program director, who estimated 3,000 songs [1]. By the way, they played 46 songs with titles starting with "Love."
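For what it's worth, the arithmetic behind the estimate fits in a few lines:

```python
songs_with_a = 88      # titles starting with "a" in the playlist
freq_a = 0.08167       # frequency of the letter "a" in English text
songs_per_hour = 15    # allowing for commercial breaks

total_songs = int(songs_with_a / freq_a)
cycle_hours = total_songs / songs_per_hour
print(total_songs, round(cycle_hours, 1), round(cycle_hours / 24, 2))
# 1077 songs, about 71.8 hours, just under three days
```

Everything after the letter-frequency assumption is bookkeeping; the estimate stands or falls on whether song titles start with "a" as often as English words contain it.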

1. David Hinckley, "Countin' down oldies on 101.1 WCBS-FM's 'A to Z Countdown'" (New York Daily News, June 30, 2008).

July 03, 2008

Morse Code

I mentioned Samuel F. B. Morse in a previous article about telegraphy (Transatlantic Communications, August 15, 2007). His telegraph code, the eponymous Morse code, is probably known to many of you, but not in any detail. It was originally a code in which a succession of short and long electrical currents allowed transmission of alphabetic and numeric characters from one location to a remote location. Over the years it has evolved into a system in which not just currents, but long and short tones, light flashes, etc., are used to convey information. A long duration tone is called a "dash," and a short duration tone is called a "dot." They're represented by the — and • symbols.

Since certain letters are used more often than others, Morse created his code to make the more common letters easier to transmit. Thus "e" (frequency of occurrence 12.7%) is a single dot, and "t" (9.1%) is a single dash. The modern usage of Morse's code is slightly different from his original code [1], patented in June, 1840, but it follows the same principle that less common characters are represented by longer sequences. Thus, "z" (0.074%) is dash-dash-dot-dot, and "q" (0.095%) is dash-dash-dot-dash. Although two symbols taken four at a time give only 2^4 = 16 characters, Morse code letter sequences are of variable length. This multiplies the code space, since different letters can be coded as sequences of one, two, three, or four symbols. It's easy to calculate how many characters can be encoded in sequences of four or fewer symbols:

One symbol = 2
Two symbols = 4
Three symbols = 8
Four symbols = 16
Total number = 2 + 4 + 8 + 16 = 30 = 2^(n+1) - 2
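This count generalizes to any maximum sequence length; a short function (a sketch, not anything from Morse's patent) confirms the totals:

```python
def code_space(n, symbols=2):
    """Number of distinct dot/dash sequences of length 1 through n."""
    return sum(symbols**k for k in range(1, n + 1))

print(code_space(4))   # 30: enough for the 26 letters of the alphabet
print(code_space(5))   # 62: room for the ten five-symbol numerals, too
```

Allowing a fifth symbol roughly doubles the code space, which is exactly the headroom the digits occupy.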

This is enough to code the English alphabet of 26 characters, but not enough to code both the alphabet and the ten numerals. Numerals use five symbols each, in a logical sequence. Morse code was used for wire communications by Western Union starting in 1861, after the Morse patent expired. Western Union abandoned its telegram service on January 27, 2006 [2], the same year the Federal Communications Commission abandoned its Morse code requirement for amateur radio operator licensing [3-4]. The United States Coast Guard stopped monitoring Morse code distress calls in 1993.

Samuel F. B. Morse had a connection to my home town of Utica, New York. The first commercial transmission via the Morse Magnetic Telegraph is thought to have been between Utica and Springfield, New York [5]. Morse's son was married in Utica in June, 1848, and Morse married Sarah Elizabeth Griswold, who was his daughter-in-law's cousin, in Utica on August 10, 1848. Samuel and Sarah had four children.

1. Tony Long, "June 20, 1840: A Simple Matter of Dots and Dashes" (Wired News, June 20, 2008).
2. Tony Long, "End of an Era" (Wired News, January 26, 2007).
3. Paul Saffo, "Morse Code - Dead Language, Bright Future" (Paul Saffo Journal, December 16, 2006).
4. FCC Drops Morse Code Requirement (Slashdot, December 16, 2006).
5. Malio Cardarelli, "Telegraph inventor found success, love in Utica" (Utica Observer-Dispatch, January 15, 2007).

July 02, 2008

Environmental Clean-Up Kit

One trouble with using a resource extracted half a world away from where it's consumed is the inevitable spillage during transport. In the case of oil, this spillage generally happens at sea. You would think that a valuable, and environmentally harmful, resource like oil would be carefully shipped, but 200,000 tons of oil have been spilled at sea since the year 2000. One technique for reducing such spillage is the double-hulled oil tanker. By international convention, single-hulled oil tankers will be phased out by 2026, but that's quite a way into the future. The only near-term option is remediation. It would be nice if oceanic oil spills could be remediated like kitchen oil spills, with a paper towel.

Francesco Stellacci, an associate professor in the Department of Materials Science and Engineering at MIT, and his research team have developed what amounts to a paper towel for absorbing oil spills. Their nanowire mesh, which is made from a potassium manganese oxide [2], absorbs about twenty times its weight in oil, and it's made by the same technique used to make paper. A suspension of nanowires is captured on a plate to produce a sheet of hydrophobic material that will entrap organics such as oil by capillary action between the nanowires. The material is so resistant to water that these scientists call it "superhydrophobic." Since the potassium manganese oxide is stable at high temperature, the sheet can be heated to volatilize and recover the oil while keeping the sheet intact for reuse.
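The twenty-to-one capacity gives a feel for the scale of remediation. Taking the 200,000 tons of oil spilled since 2000 as a notional target (a back-of-envelope figure, not a deployment plan):

```python
oil_spilled_tons = 200_000   # spilled at sea since 2000 (figure from the text)
capacity = 20                # mesh absorbs about 20 times its weight in oil

mesh_needed_tons = oil_spilled_tons / capacity
print(mesh_needed_tons)      # 10000.0 tons of nanowire mesh to soak it all up
```

Ten thousand tons of mesh for eight years' worth of spills sounds like a lot, but reusability changes the picture: since the oil can be volatilized off and the sheet reused, a much smaller stockpile could handle spills one at a time.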

The potassium manganese oxide material would also make a good water filter, and the starting materials for its manufacture are inexpensive. This research is described in a paper published in Nature Nanotechnology [3].

1. Elizabeth A. Thomson, "MIT develops a 'paper towel' for oil spills - Nanowire mesh can absorb up to 20 times its weight in oil" (MIT Press Release, May 30, 2008).
2. Probably not potassium permanganate; I don't have access to their full paper.
3. Jikang Yuan, Xiaogang Liu, Ozge Akbulut, Junqing Hu, Steven L. Suib, Jing Kong and Francesco Stellacci, "Superwetting nanowire membranes for selective absorption," Nature Nanotechnology, vol. 3 (May 30, 2008), pp. 332-336.

July 01, 2008

Polyethylene-Eating Bacteria

Science is interesting, but pure science rarely pays the bills. Fundamental science, as practiced (less so, nowadays) in the universities, is important to scientific progress, but its pay-off is too far in the future to attract corporate funding. Funding of the most fundamental studies is relegated to the "N" organizations, such as the NIH and NSF, and the occasional "D" (DARPA and DOE).

Biotechnology is an important emerging industry that makes use of the genetic code as elucidated over the course of fifty years of fundamental research. The knowledge of how DNA functions, and the ability to manipulate DNA to our advantage, has fueled tremendous advances in biotechnology. The principle of natural selection, as espoused in Darwin's works, is a basic scientific idea you wouldn't think would ever be profitable. When it's used in an experiment by a secondary school student with a few hundred dollars in funding to produce a useful piece of technology, it humbles scientists everywhere.

I mentioned the evolution of bacteria in a previous article (Contingency, June 4, 2008). Bacteria are useful chemical factories, and they are a convenient resource for the study of genetic mutation, since there are roughly two thousand generations of a typical bacterium in a year. Microorganisms will adapt to changes in their environment, as demonstrated in a classic paper by Brown, et al. [1], who were able to adapt baker's yeast to life in a glucose-limited environment in just 450 generations. A similar technique had been applied earlier to the creation of bacterial strains to desulfurize coal [2], and other strains have been produced to remediate hydrocarbon-contaminated soil. Since bacteria will evolve to eat such vile stuff in just a few generations, could bacteria be trained to "eat" plastic?
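The two-thousand-generations figure translates easily into laboratory timescales. A small sketch (simple arithmetic of mine, not data from the cited papers):

```python
# Implied generation time for a bacterium with ~2000 generations per year,
# and the real time corresponding to the 450 generations in Brown, et al.
HOURS_PER_YEAR = 365 * 24          # 8760
generations_per_year = 2000

generation_time_h = HOURS_PER_YEAR / generations_per_year  # ~4.4 hours
days_for_450 = 450 * generation_time_h / 24                # ~82 days
```

An experiment spanning hundreds of microbial generations thus fits comfortably into a few months of bench time, which is what makes these adaptation studies practical.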

This was the question that Daniel Burd, a sixteen-year-old high school student from Waterloo, Ontario, Canada, asked [3]. He was specifically interested in reducing the litter from the polyethylene grocery bags that have replaced biodegradable paper bags over the last few years. It's estimated that 500 billion such bags are manufactured each year. Burd designed a simple experiment in which he added powdered polyethylene bags to solutions of yeast and water. The solutions were shaken and maintained at a slightly elevated temperature to promote growth. He increased the concentration of powdered polyethylene in steps, and his final bacterial solution was able to reduce the weight of polyethylene by 17% in just six weeks.

Burd then isolated four major bacterial strains from his solution and found that only one of the four, a Sphingomonas bacterium, was the plastic eater. He also found that the addition of one of the other strains, a Pseudomonas bacterium, increased the polyethylene-eating activity slightly for an unknown reason. By optimizing the temperature and nutrient environment, Burd's bacterial solution could degrade 43% of the polyethylene in six weeks. Not surprisingly, Burd's science project won first prize at the Canada Wide Science Fair in Ottawa.
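If the 43%-in-six-weeks result is treated as first-order decay (a modeling assumption on my part; Burd reported only start and end weights), the implied rate constant and timescales work out as follows:

```python
import math

degraded = 0.43   # fraction of polyethylene consumed
weeks = 6.0

# Assume first-order kinetics: remaining fraction = exp(-k * t)
k = -math.log(1.0 - degraded) / weeks   # ~0.094 per week
half_life = math.log(2.0) / k           # ~7.4 weeks
t_99 = -math.log(0.01) / k              # ~49 weeks to 99% degradation
```

Under that assumption, near-complete degradation of a bag would take about a year, still far faster than untreated polyethylene breaks down in a landfill.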

Of course, every scientist should beware of the Law of Unintended Consequences. Some Internet message boards have cautioned that such a plastic-eating bacterium, if it escapes into the environment, could eat all our precious iPods and mobile telephones. There's also the problem that the product of all this plastic-eating is more carbon dioxide to accelerate global warming.

1. C. J. Brown, K. M. Todd, and R. F. Rosenzweig, "Multiple duplications of yeast hexose transport genes in response to selection in a glucose-limited environment," Molecular Biology and Evolution, vol. 15, no. 8 (August 15, 1998), pp. 931-42.
2. Yosry A. Attia and Mohamed A. Elzeky, "Coal desulfurization using bacteria adaptation and bacterial modification of pyrite surfaces," U.S. Patent No. 4,775,627, October 4, 1988.
3. Karen Kawawada, "WCI student isolates microbe that lunches on plastic bags" (The Record, May 22, 2008).