« October 2007 | Main | December 2007 »

November 30, 2007


The mineral pyrite (iron disulfide, FeS2) is also known as "fool's gold," since flecks of pyrite resemble gold flecks in a miner's pan. Only upon further study, perhaps heating the pyrite to release its sulfur smell, does one discover that appearances can be deceiving. The same was once true in the classification of life on Earth. At one time, there were thought to be just two "domains" of life; that is, life forms so distinct from each other that they deserved different names. These were the "super-kingdoms" of "prokaryotes" and "eukaryotes." The distinction between these domains is quite clear - eukaryote cells (animals, plants, fungi) have a nucleus, and prokaryote cells (bacteria) do not. This distinction is reflected also in the names, since karyon (καρυον) is the Greek word for nut or kernel. This classification changed thirty years ago with Carl Woese's introduction of a third domain, the Archaea [1].

As with many scientific discoveries, the discovery of the Archaea was almost an accident; and, like most unexpected scientific discoveries, this new paradigm met with resistance from the established academics. Ralph Wolfe, a colleague of Woese, was interested in the methanogens, microorganisms that differ from the usual kind of bacteria in that their metabolism produces methane instead of carbon dioxide. The methanogens were not well studied in the 1970s, since they are poisoned by oxygen and are difficult to culture. Wolfe was able to grow enough of one of these, M. bryantii, for Woese to sequence its ribosomal RNA (rRNA). Woese had sequenced other bacteria, and he found that this methanogen was quite different. Sequencing other methanogens confirmed that they were so unlike other bacteria as to constitute a distinct domain. Wolfe, himself, was initially skeptical, since the methanogens looked like other bacteria. Of course, "all that glitters is not gold," to return to the fool's gold analogy. This discovery was published in the Proceedings of the National Academy of Sciences in two articles in 1977 [2-3].

Woese continued work along similar lines after this initial foray into the primitive nature of organisms. One of his further conjectures is that speciation was not a factor in the early development of life on Earth. He proposed that lateral gene transfer between organisms was common during the early evolution of life. In 2003, Woese won the Crafoord Prize in Biosciences of the Royal Swedish Academy of Sciences for his discovery, and collected a $500,000 cash award. And you thought science doesn't pay well!

1. Diana Yates, "Symposium marks 30th anniversary of discovery of third domain of life" (University of Illinois Press Release, October 16, 2007).
2. George E. Fox, Linda J. Magrum, William E. Balch, Ralph S. Wolfe, and Carl R. Woese, "Classification of Methanogenic Bacteria by 16S Ribosomal RNA Characterization," Proc. Natl. Acad. Sci., vol. 74, no. 10 (October 1, 1977), pp.4537-4541.
3. Carl R. Woese and George E. Fox, "Phylogenetic Structure of the Prokaryotic Domain: The Primary Kingdoms," Proc. Natl. Acad. Sci. vol. 74, no. 11 (November 1, 1977), pp. 5088-5090.
4. Hidden Before Our Eyes Symposium Web Site.
5. Carl R. Woese Home Page.

November 29, 2007

Science Quiz for US Presidential Candidates

Nature recently published a multiple-choice science quiz on topic areas with which a future US president should be familiar. These questions are hard! I knew the answers to the questions relating to the physical sciences, but I didn't score well on the others. One problem was that most of the questions required a quantity as an answer. The choices were spaced over orders of magnitude, but I'm not well enough read in the non-physical areas of science to choose among them. It might be a stretch to expect an educated layman to know the answers to most of these.

Here are the questions relating to the physical sciences. An online quiz can be found here, and all the answers can be found here. In my estimation, if you get at least half of these right, you should at least qualify for Vice President.

Note - The numbering is from the original quiz, and the answers to these selected questions appear below the references.

1) Electricity from the wall plug costs about 10 cents per kilowatt-hour (kWh). If you were to get the same electricity by buying AAA alkaline batteries at the local store, the cost of that electricity would be:
(a) 15 cents per kWh
(b) 94 cents per kWh
(c) $2.50 per kWh
(d) $1,000 per kWh

3) The highest achieved efficiency (solar energy converted to electrical energy) of solar cells is approximately:
(a) 4%
(b) 15%
(c) 28%
(d) 41%

4) A typical high-resolution spy satellite has how long to photograph a location?
(a) 10 seconds
(b) 1 minute
(c) 12 minutes
(d) 90 minutes

6) Compared with a gallon of gasoline, the energy supplied by a gallon of liquid hydrogen is approximately:
(a) One-third (that is, it has less energy per gallon)
(b) The same energy per gallon
(c) Three times more energy per gallon
(d) 12 times more energy per gallon

7) Compared with the energy released when a pound of gasoline is burnt, the energy released when a pound of TNT is exploded is about:
(a) Two times greater
(b) 13 times greater
(c) the same, within 40%
(d) less by a factor of 15

9) A critical mass of plutonium has a volume of:
(a) 3 tablespoons
(b) 1 soft-drink can
(c) 1 gallon
(d) 3 gallons

10) In one computer cycle (a billionth of a second for a slow laptop), light travels about:
(a) 1 foot (30 centimetres)
(b) 300 metres
(c) 3 kilometres
(d) 300 kilometres

14) The ozone layer in the atmosphere is created by:
(a) Carbon dioxide
(b) Sunlight
(c) Sulphur from fossil fuels
(d) Chlorofluorocarbon compounds (such as Freon)

15) Light in a fibre carries more information per second than electricity in a wire because:
(a) It has a higher frequency
(b) It travels faster than electricity
(c) It makes use of quantum effects
(d) It doesn't. Wires transmit higher bit rates. (That's why they are used in computers.)

16) The power in a square kilometre of sunlight is:
(a) 1 kilowatt
(b) 1 megawatt
(c) 10 megawatts
(d) 1 gigawatt

17) To be legal for consumption in the United States, the radioactivity of one litre of ethanol (drinking alcohol) must be:
(a) Less than 12 decays per minute
(b) Below the threshold of standard Geiger counters
(c) Not measurable by accelerator mass spectrometry (the most sensitive detection method)
(d) More than 4,000 decays per minute

I thought that the last question was the most interesting, and not just for the alternative spelling of liter. The answers can be found at the foot of the references.
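For the curious, a couple of these answers can be checked with back-of-envelope arithmetic. The battery figures below (cell capacity and retail price) are my own assumptions, not from the quiz, so treat this as a sketch rather than the quiz authors' calculation:

```python
# Question 1: cost of AAA-battery electricity per kWh.
# Assumed figures: a AAA alkaline cell stores about 1.5 V x 1.2 Ah = 1.8 Wh
# and costs about $1.50 at retail.
battery_wh = 1.5 * 1.2            # watt-hours per AAA cell (assumed)
battery_cost = 1.50               # dollars per cell (assumed)
cost_per_kwh = battery_cost / battery_wh * 1000
print(f"AAA electricity: ${cost_per_kwh:.0f} per kWh")   # on the order of $1,000 (answer d)

# Question 10: distance light travels in one nanosecond.
c = 299_792_458                   # speed of light, m/s
distance_m = c * 1e-9
print(f"Light in 1 ns: {distance_m * 100:.0f} cm")       # about 30 cm, or 1 foot (answer a)
```

Both results land squarely on the published answers, which is reassuring for a back-of-envelope method.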

1. Richard A. Muller, "Climate politics: What every president should know," Nature, vol. 450, no. 7168 (November 15, 2007), p. 345.
2. Richard Muller teaches the class, "Physics for future presidents," at the University of California, Berkeley. Details can be found here.

Answers: 1-d, 3-d, 4-b, 6-a, 7-d, 9-b, 10-a, 14-b, 15-a, 16-d, 17-d

November 28, 2007

Natural Materials

Early in my career, I worked with a low temperature physics group doing studies on the specific heat and superconductivity of materials. An important commodity was the liquid helium required to cool and maintain experiments just a few degrees above absolute zero. Helium was first liquefied by Kamerlingh Onnes in 1908. Armed with this novel tool for attaining extremely low temperatures, Onnes went on to discover superconductivity. One of our typical experiments would consume nearly a thousand dollars' worth of liquid helium, which was priced at about five dollars a liter in those days. We used so much of the stuff that we eventually purchased a helium refrigerator to liquefy the evaporated gas so it could be recycled. One interesting part of the refrigerator was its gaskets - they were made from natural leather. In the high-tech world of the late twentieth century, the only suitable gasket material for this helium refrigerator was a natural substance.

Earlier this year, I needed to have an older root canal retreated. My dentist, who knows that I'm a materials scientist, told me about the materials of his craft, which include flexible, high-strength, nickel-titanium alloy drill bits. My root canal was sealed by gutta-percha. Gutta-percha, a polymer of isoprene (trans-1,4-polyisoprene), is refined from the sap of a tree of the same name that grows in southeast Asia. The polymer latex is bio-inert and resilient, properties that make it ideal for endodontic applications.

Jose d'Almeida, a Portuguese engineer, described the material, used by indigenous people for many applications, to the Royal Asiatic Society in London in 1843. Shortly after d'Almeida's report, William Montgomerie, a surgeon for the East India Company, sent samples back to England, starting a gutta-percha craze. The material was used to fabricate many commodity articles, including golf balls, and its popularity made the fortunes of the Gutta Percha Company, the main importer.

Gutta-percha has a high dielectric strength, so it is a good electrical insulator. Telegraph wires were insulated with gutta-percha from the mid-1800s, and it was used as the insulation of the first trans-Atlantic telegraph cable. Its bio-inert property prevented attack by marine organisms in this application. Three hundred tons of gutta-percha went into that first trans-Atlantic cable.

1. Gutta-percha (PBS).

November 27, 2007

Plasma Antenna

A radio antenna is a transducer for conversion of an electromagnetic field into a current, and vice versa. Guglielmo Marconi, a radio pioneer who shared the 1909 Nobel Prize in Physics for his radio work, was the first to use the word antenna for this radio component. Marconi wasn't the first to use antennas (or, antennae, as Latin syntax requires), since the concept of a dipole was well known in physics at the time. In the early days of radio experimentation, these transducers were called aerials, since they were hung in the air; or terminals, since they didn't connect to any other component. Marconi's choice of antenna for some of his first experiments in 1895 was a wire strung alongside a tent pole, and a tent pole is called l'antenna centrale in Italian. This was shortened to l'antenna, and then to the universal antenna.

Just a few decades ago, antennas were used almost exclusively for reception of broadcast radio and television signals. A typical US home would likely have fewer than five antennas in all: one on the automobile parked in the garage, one on the rooftop for television reception, and perhaps a few others on portable radios. The only antenna outside of those for broadcast reception might be one for a radio-controlled garage door opener. Today, antennas are ubiquitous, including those contained in RFID tags in purchased goods, automobile remote entry fobs, wireless telephones, cellphones, direct-broadcast satellite television systems, satellite radio systems, and wireless LAN networks. A driving force behind today's wireless world is the trend towards higher frequencies, where the antenna dimensions, which scale with the wavelength, can be small. An antenna at the 5.8 GHz wireless LAN frequency is only a few centimeters long. Such small dimensions have led to innovations in antenna architecture, one of which is the use of a plasma as an antenna [1]. One advantage here is that the antenna can change shape, or even vanish, through application or removal of electrical current.
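The "few centimeters" figure is easy to check. A half-wave dipole is half the free-space wavelength long, c/(2f); the sketch below uses the ideal length (real antennas are trimmed a few percent shorter):

```python
# Length of an ideal half-wave dipole: half the free-space wavelength, c / (2f).
c = 299_792_458          # speed of light, m/s

def half_wave_dipole_m(freq_hz):
    """Ideal half-wave dipole length in meters for a given frequency in Hz."""
    return c / (2 * freq_hz)

print(f"{half_wave_dipole_m(5.8e9) * 100:.1f} cm")   # about 2.6 cm at 5.8 GHz
print(f"{half_wave_dipole_m(100e6):.2f} m")          # about 1.5 m at 100 MHz (FM broadcast)
```

The same formula explains the long rooftop and whip antennas of the broadcast era: at 100 MHz the half-wavelength is about a meter and a half.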

Of course, antennas are made from electrical conductors, and a plasma is a conductor, although plasma conductivity in actual application is just barely suitable for an antenna. The impedance of free space is about 377 ohms, so for good efficiency an antenna should present a resistance far below this value. As a back-of-the-envelope figure, a forty-watt fluorescent lamp contains a plasma with a resistance of about 340 ohms. That's quite a lot of power, and we only get a 340 ohm plasma. This hasn't deterred some companies from developing the concept [2-4], which may have military applications, and a few patents have been generated [5-8].
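The 340 ohm figure can be reproduced from R = V²/P, under the simplifying assumption that the tube drops the full 117 V line voltage. A real fluorescent lamp runs behind a ballast, so this is only an order-of-magnitude check:

```python
# Effective resistance of a 40 W fluorescent-lamp plasma, from R = V^2 / P.
# Assumption: the tube drops roughly the full 117 V line voltage; a real lamp
# sits behind a ballast, so take this as an order-of-magnitude estimate only.
voltage = 117.0   # volts (assumed)
power = 40.0      # watts

resistance = voltage ** 2 / power
print(f"{resistance:.0f} ohms")   # about 342 ohms, near the 340-ohm figure quoted
```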

1. T. R. Anderson and I. Alexeff, "Stealthy, Versatile, and Jam Resistant Antennas made of Gas" (American Physical Society Press Release, November 12, 2007).
2. Stealth Antenna Made of Gas Impervious to Jamming (scientificblogging.com).
3. Plasmaantennas.com.
4. Markland Technology.
5. Elwood G. Norris, et al., "Gas tube RF antenna," US Patent No. 5,594,456 (January 14, 1997).
6. Theodore R. Anderson, et al., "Multiple tube plasma antenna," US Patent No. 5,963,169 (October 5, 1999).
7. Jeffrey Hunter Harris, et al., "Plasma antenna," US Patent No. 6,492,951 (December 10, 2002).
8. Theodore Anderson, et al., "Antenna having reconfigurable length," US Patent No. 6,710,746 (March 23, 2004).

November 26, 2007

Worth (Twice Its) Weight in Gold

A recent article in Nature [1] highlights the utility of the elements platinum, palladium and rhodium as catalysts, especially for use in automotive catalytic converters. Many transition elements, such as nickel and gold, are catalysts for some reactions at lower temperatures, but only platinum, palladium and rhodium function at the higher temperatures (about 900°C) required in the automotive application. The essential problem in catalysis, of course, is to have a catalyst participate in a chemical reaction without being changed itself. Another problem with platinum is its success: the automobile is becoming popular in the developing world, everyone wants platinum, or one of its close cousins, and the price is going through the roof. The price of platinum has been running at about $1,450 per troy ounce, while gold hovers at around $800 per troy ounce [2].

The annual global supply of platinum is nearly eight million troy ounces, two thirds of which is used for automotive catalysts. One moderating factor is that platinum can be recovered from automotive catalysts, and the resale price of your used car has a bit of platinum price built into it. In applications requiring rhodium, there's no such thing as an easy substitution, and the price of rhodium has peaked over the years at nearly ten times the price of gold. My laboratory once had two crystal-growing furnaces wound with platinum-rhodium wire at a cost of about $20,000 per furnace.

The automotive catalytic converter is a mature technology with no peer. I remember sitting in on a conference presentation of one of the first designs for an automotive catalytic converter when I was a graduate student, which certainly qualifies it as a mature technology. The automotive catalytic converter is a three-way converter that catalyzes three reactions at the same time:

• Reduction of nitrogen oxides 2NOx -> xO2 + N2
• Oxidation of carbon monoxide 2CO + O2 -> 2CO2
• Oxidation of unburned hydrocarbons 2CxHy + (2x+y/2)O2 -> 2xCO2 + yH2O
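These three equations can be checked by tallying atoms on each side. The sketch below picks octane (x = 8, y = 18) and nitrogen dioxide (x = 2) as concrete cases; the atom-counting helper is my own illustration, not anything from the article:

```python
# Verify that the three converter reactions balance by tallying atoms on
# each side. Each side is a list of (coefficient, {element: count}) pairs.

def tally(side):
    """Total atoms of each element on one side of a reaction."""
    totals = {}
    for coeff, formula in side:
        for element, n in formula.items():
            totals[element] = totals.get(element, 0) + coeff * n
    return totals

def balanced(lhs, rhs):
    return tally(lhs) == tally(rhs)

x, y = 8, 18  # octane, C8H18, as the unburned hydrocarbon
reactions = {
    # 2 NOx -> x O2 + N2, with x = 2 (nitrogen dioxide)
    "NOx reduction": ([(2, {"N": 1, "O": 2})],
                      [(2, {"O": 2}), (1, {"N": 2})]),
    # 2 CO + O2 -> 2 CO2
    "CO oxidation": ([(2, {"C": 1, "O": 1}), (1, {"O": 2})],
                     [(2, {"C": 1, "O": 2})]),
    # 2 CxHy + (2x + y/2) O2 -> 2x CO2 + y H2O
    "HC oxidation": ([(2, {"C": x, "H": y}), (2 * x + y // 2, {"O": 2})],
                     [(2 * x, {"C": 1, "O": 2}), (y, {"H": 2, "O": 1})]),
}
for name, (lhs, rhs) in reactions.items():
    print(name, "balanced:", balanced(lhs, rhs))   # all three print True
```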

Since we're stuck with this technology, the only way around the high cost of the platinum group metals is to use less, or somehow keep their particular properties while diluting them with cheaper materials. Nanotechnology has reportedly been used by Nissan and Mazda to reduce catalyst content considerably. Fortunately, palladium is less expensive than platinum at present, so automakers have been using more palladium. This has moderated the price of platinum to some extent. Palladium was used in early catalytic converters, but there was a move to platinum when palladium became more expensive. This market oscillation in price ratio between platinum and palladium, which had a period of about a decade in the past, may settle into a period of just a few years in the future [3].

1. Jeff Tollefson, "Worth its weight in platinum," Nature, vol. 450, no. 7168 (November 15, 2007), pp. 334-335.
2. Platinum Market Price (Johnson-Matthey).
3. For those readers not familiar with the unit, a troy ounce is exactly 31.1034768 g, which is larger than the "regular" (avoirdupois) ounce (28.349523125 g).

November 19, 2007

Thanksgiving Holiday and Vacation

The Morristown site is closed on Thursday (11/22) and Friday (11/23) in observance of the Thanksgiving holiday. We're in good company, since more than three-quarters of the US working population has the Friday after Thanksgiving as a paid holiday. [1] The Friday after Thanksgiving has its own name, Black Friday. This name alludes to the heavy automobile traffic and the stress of having too many family members in the same room at the same time. Many employees choose to take three vacation days in that week, and thereby have the entire week off from work. My experience is that household chores during this holiday entail more work than spending those days in my laboratory or office. However, nobody takes my advice, not even me, so I'll be away the entire week.

Thanksgiving has been observed in the US since the Pilgrims celebrated their first harvest in 1621. The fourth Thursday of November was designated as a day of national thanksgiving by US President Franklin D. Roosevelt in 1939. The US Congress thought it was a good idea, and established it by act of law in 1941.

1. Thanksgiving on Wikipedia.

November 16, 2007

Big Blue's Blue Cloud

The adjective "blue" is associated with IBM. IBM is known in the computer trade as "Big Blue," and there's no wonder about "big" - IBM has more than 350,000 employees worldwide. The association with blue likely comes from the color of its logo, which was also the color of its early mainframe computers. Many things produced by IBM are coded "blue," one example of which is Blue Gene, a petaFLOPS-scale supercomputer, not to be confused with "Blue Jean," a song by David Bowie. IBM has now launched "Blue Cloud," its model of internet computing, which will allow desktop and mobile computers to access supercomputing resources remotely, including those derived from cluster computers. [1-4] This networked vision of computing, called "cloud computing," is thought to be the next big thing in computing. IBM has thought enough of this idea to assign a 200-person team to its Blue Cloud initiative. [4]

Like many of IBM's recent software offerings, Blue Cloud will be open source software; that is, software for which the source code is freely available for users to modify for their own purposes. IBM will make its money on hardware by supplying servers compatible with its cloud computing model. This strategy is much like IBM's growing support of Linux, a free and open source operating system. I've been using various flavors of Linux (Red Hat, Gentoo, and now Ubuntu) at home and in the laboratory since the turn of the new millennium. Linux is becoming a viable alternative to Windows, and one Linux feature is that there is no Blue Screen of Death. Large corporations are drowning in data, and they now face essentially the same problems as large internet companies, such as Google. Furthermore, corporations are adopting many internet applications as tools, among which are search, blogs and wikis. One advantage of the "cloud" concept is the ability to reallocate tasks when a server fails. IBM will introduce its first Blue Cloud products in the second quarter of 2008.

1. IBM unveils new data center technology (Associated Press, via Business Week, November 15, 2007).
2. Martin LaMonica, "IBM floats Blue Cloud computing plan" (News.com, November 15, 2007).
3. James Niccolai, "IBM Turning Data Centers Into 'Computing Cloud'" (IDG News Service, via PCWorld.com, November 15, 2007).
4. Steve Lohr, "I.B.M. to Push 'Cloud Computing,' Using Data From Afar" (New York Times, November 15, 2007).

November 15, 2007

Lévy Flight

When I was in fifth grade (I was about ten years old), I was given an IQ test. Of course, this was long before testing of this sort was done by computer, so it was a pen-and-paper test. It was multiple-choice, but there were several things a student needed to write. One was his name, and another was his date of birth. Now, a young child will always know his age, and he will also remember the month and date of his birthday, but he might have trouble with the year. At least, I did. I was born in 1947, but I wrote 1949. At that age, seven and nine had a certain similarity to me, so it's surprising that I ever did well in math. If the testers didn't check the date, this would have elevated my IQ by about 25 points. By accident, I was testing the intelligence of the IQ testers.

Over the years, I did a lot of reading on intelligence testing. One of the big debates is the Nature-Nurture debate; that is, how much of intelligence is genetic, and how much is a function of upbringing and environment. This is important to intelligence testing, since you first need to ask yourself what you actually want to test and how the questions should be worded to test whatever it is that you want to test. One example of how nurture can cloud the results is illustrated by the following story. A country boy was given an intelligence test in which there was a simple problem. There was a blank square on a sheet of paper, and this square represented a field with a lost ball. The student was asked to trace the path he would use to search for the ball in that field. What the testers wanted to see was a systematic back and forth search pattern covering the blank square. Well, the country boy couldn't understand why you can't see a ball on a perfectly blank field, so he reasoned it must be behind something. So he drew some tree stumps, rocks, etc., and he traced a path in which he looked behind each of these objects.

Is a systematic search, the one the IQ testers were looking for, really the best type of search? It may be, when you search for just a single object, but what about foraging for food? There may be many possible edibles on the landscape, and you want to locate just one, but in the shortest time possible. One possible search technique is a Lévy flight, named after the French mathematician, Paul Pierre Lévy. Mathematicians and electrical engineers will recognize his prestigious academic pedigree when I mention that he was a student of Jacques Hadamard. A Lévy flight is a random walk with a non-uniform distribution of path lengths. A Lévy search works like this: there is a long movement to a random area, and then smaller search movements in that area. After a number of shorter search paths, there's a subsequent large movement to another area, and so on. In a heavy-tailed distribution of this sort, the probability of a movement by an amount x takes the form

P(x) ~ |x|^-(1+α)

where α can range between zero and two; for such values, the variance of the step length is infinite.
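Heavy-tailed steps of this kind are easy to generate by inverse-transform sampling from a Pareto law. The sketch below assembles a two-dimensional Lévy flight from such steps; the parameter values (α = 1.5, minimum step of 1, 1,000 steps) are arbitrary choices for illustration:

```python
import math
import random

# Sample step lengths from the heavy-tailed law P(l) ~ l^-(1 + alpha) for
# l >= l_min (a Pareto distribution), using the inverse transform
# l = l_min * u^(-1/alpha), with u uniform on (0, 1].

def levy_step(alpha, l_min=1.0, rng=random):
    u = rng.random() or 1e-12   # guard against u == 0
    return l_min * u ** (-1.0 / alpha)

def levy_flight(n_steps, alpha=1.5, rng=None):
    """A 2-D Lévy flight: heavy-tailed step lengths, uniformly random directions."""
    rng = rng or random.Random(0)
    x = y = 0.0
    path = [(x, y)]
    for _ in range(n_steps):
        length = levy_step(alpha, rng=rng)
        theta = rng.uniform(0, 2 * math.pi)
        x += length * math.cos(theta)
        y += length * math.sin(theta)
        path.append((x, y))
    return path

path = levy_flight(1000)
print(len(path))   # 1001 points: many short hops punctuated by rare long jumps
```

Plotting such a path shows exactly the pattern described above: tight clusters of short search movements connected by occasional long relocations.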

There was much excitement about ten years ago when a study was published showing that Lévy flight described the foraging patterns of albatrosses [1]. This possibility had been previously described as likely for ants (1986) and fruit flies (1995) [1]. This initial paper encouraged subsequent studies on bumblebees, deer, microzooplankton, grey seals, spider monkeys and fishermen, but a recent paper in Nature presents evidence that all such results are spurious [2-3]. An international team of scientists collected high resolution positioning data for albatross flights, and their analysis showed a gamma distribution, which is an exponentially decaying probability distribution. Based on this knowledge, they extended their analysis to the data sets for the original deer and bumblebee studies and found no evidence of Lévy flights. Of course, the studies for the other animals are suspected to be flawed, as well, thus illustrating the self-correcting nature of science.

I seem to have lost my USB memory stick. Excuse me while I look in back of this pile of books; or maybe over there.

1. G. M. Viswanathan, V. Afanasyev, S. V. Buldyrev, E. J. Murphy, P. A. Prince and H. E. Stanley, "Lévy flight search patterns of wandering albatrosses," Nature, vol. 381 (May 30, 1996), pp. 413-415.
2. 'Levy Flight' Theory Of Food Foraging In Albatross Overturned, Say Researchers (News Account, October 24, 2007).
3. Andrew M. Edwards, Richard A. Phillips, Nicholas W. Watkins, Mervyn P. Freeman, Eugene J. Murphy, Vsevolod Afanasyev, Sergey V. Buldyrev, M. G. E. da Luz, E. P. Raposo, H. Eugene Stanley and Gandhimohan M. Viswanathan, "Revisiting Lévy flight search patterns of wandering albatrosses, bumblebees and deer," Nature, vol. 449 (October 25, 2007), pp. 1044-1048.

November 14, 2007

A Picture is Worth...

Password authentication is ubiquitous in our computer age. Gone are the days when computers came with a physical key, and a user could be reasonably certain that no one could access his files without that key. As computers became networked, physical access was no longer required to break into a computer. Hackers were helped also by the insecure Microsoft Windows operating system, which has been roundly criticized by many security professionals. Hackers in developing countries found that good money could be had through theft and sale of credit card numbers and other personal data, and by creation of "botnets." Botnets are computers infected with virus programs that allow hackers to launch denial-of-service attacks against internet companies in order to extort "protection" money. Password access, especially on network servers, can prevent some of these problems, but not all. Still, passwords are here to stay, since they're a simple means of user verification, and they are the first line of defense against unauthorized access to electronic systems. Of course, we're all warned against using simple passwords, such as "password," and to change our passwords often, but we soon reach a point at which even we can't access our own computers because we can't remember all these passwords!

Portable computer devices, such as cell phones and Blackberries, have the additional problem that it's a pain (in the finger) to type secure, longer passwords. One alternative authentication scheme is to present the user with a series of screens full of random images, some of which have been pre-selected by the user. The user selects the familiar images with his pointing device (e.g., mouse or stylus). For each screen with sixteen images, selecting two of the sixteen will yield about one character of a password. If it's possible for the user to select a different corner of each image, then each screen will yield about two password characters, and after a few screens we have the equivalent of a lengthy password. Another picture authentication scheme has just been published by computer scientists at Newcastle University. In this scheme, the user draws a familiar image, and the way he draws it becomes the password. Among the parameters used in this process are stroke location and stroke direction. In effect, this is similar to verification of a handwritten signature, but with additional information about how the signature was written. [1]
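The "one character per screen" estimate follows from a simple entropy count. The sketch below also assumes, for comparison, that a password character drawn from the 94 printable ASCII characters carries log2(94) ≈ 6.6 bits:

```python
import math

# Entropy of the image-selection scheme: one screen of 16 images, user
# picks 2, giving C(16, 2) = 120 combinations, or log2(120) bits.
per_screen = math.log2(math.comb(16, 2))
per_char = math.log2(94)   # one printable-ASCII password character
print(f"{per_screen:.1f} bits per screen, vs. {per_char:.1f} bits per character")

# If the user can also pick one of 4 corners on each chosen image,
# each screen yields 120 * 4 * 4 = 1920 combinations:
with_corners = math.log2(math.comb(16, 2) * 4 * 4)
print(f"{with_corners:.1f} bits per screen with corners")   # roughly two characters' worth
```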

Jeff Yan, a lecturer at Newcastle University, and Ph.D. candidate Paul Dunphy have created what they call "Draw a Secret" (DAS) technology, and an enhanced version of DAS called "Background DAS" (BDAS). In DAS, the user draws his chosen image on a grid that's used as a drawing aid. In BDAS, the image is drawn on a background image. One example of BDAS is drawing an object, such as a stapler, on an image of a desktop. In each case, the strokes used for drawing are analyzed to authenticate the user.

To authenticate their authentication scheme, the Newcastle scientists had people create a password drawing on a selection of five images - a star field, a map, a playing card, a crowd and a flower. A week later, they had these people attempt to reproduce their image password, and 95% were able to do this in three attempts. This is not a very exciting result, but practice makes perfect, so there's room for improvement in both the computer code and the users' drawing abilities. The Dunphy-Yan paper "Do Background Images Improve 'Draw a Secret' Graphical Passwords?" was presented on October 30, 2007, at the Association for Computing Machinery Conference on Computer and Communications Security in Washington. Their work was supported by a grant from Microsoft.

1. "Scientists draw on new technology to improve password protection" (Newcastle Press Release, October 24, 2007).

November 13, 2007

Three Hundred Articles

This is the three hundredth posting on this blog, which I started on August 10, 2006.

Since the digits of pi appear to be random [1-2], we would expect to find the sequence "300" about once every thousand digits. Ignoring the initial 3, here are the positions of the first 25 occurrences of 300 in pi [3]:

• 972 • 1,358 • 5,167 • 6,235 • 6,685 • 6,972 • 10,128 • 10,374 • 10,631 • 11,339 • 11,954 • 12,307 • 13,661 • 14,583 • 15,255 • 15,692 • 16,586 • 16,702 • 19,002 • 19,486 • 20,206 • 20,735 • 21,257 • 21,377 • 22,398

The average interval for these first twenty-five occurrences is 896, which is not bad for such a limited sample. Remember that the standard error of the mean scales as the reciprocal of the square root of the sample size, so with only twenty-five samples we can expect agreement only to within about 20%. We can look also for the sequence "300300" in pi and find it at these positions:

• 277,922 • 462,701 • 1,122,472 • 1,813,838 • 2,710,988 • 3,367,041 • 5,185,639 • 6,766,409 • 8,501,388 • 12,026,638

The average interval for these first ten occurrences is 1,202,664. Our expected average interval is 1,000,000. We're very close, considering the very small sample size.
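Readers who want to reproduce the search can compute pi for themselves. The sketch below uses Machin's formula, pi/4 = 4·arctan(1/5) - arctan(1/239), in integer arithmetic rather than a pre-computed digit file, and reports 1-indexed positions after the decimal point (my assumption about the sequence finder's counting convention):

```python
# Find occurrences of "300" in the decimal digits of pi, computed with
# Machin's formula in integer arithmetic.

def arctan_inv(x, one):
    """arctan(1/x), scaled by the integer 'one', via the Taylor series."""
    power = total = one // x
    x2 = x * x
    n, sign = 1, -1
    while power:
        power //= x2
        total += sign * (power // (2 * n + 1))
        n, sign = n + 1, -sign
    return total

def pi_digits(n):
    """First n decimal digits of pi after the decimal point, as a string."""
    one = 10 ** (n + 10)   # 10 guard digits against truncation error
    pi = 4 * (4 * arctan_inv(5, one) - arctan_inv(239, one))
    return str(pi)[1:n + 1]

digits = pi_digits(10000)
positions = []
i = digits.find("300")
while i != -1:
    positions.append(i + 1)   # 1-indexed position after the decimal point
    i = digits.find("300", i + 1)
print(positions[:5], "-", len(positions), "occurrences in 10,000 digits")
```

The first few positions printed should match the list above.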

300 (2006, Zack Snyder, Director) is a film about the Battle of Thermopylae, in which a small troop of Spartan soldiers confronted the entire Persian army of about 100,000 in 480 B.C. This was actually a remake of the earlier film, The 300 Spartans (1962, Rudolph Maté, Director). I didn't see either of these, since I spend most of my free time writing this blog (LOL).

The chemists among us have an interesting occurrence of 300 in the fullerene C300. [4-5] Perhaps we should name it the "Thermopylae molecule."

1. Pi seems a good random number generator - but not always the best (Purdue Press Release, April 26, 2005).
2. Shu-Ju Tu and Ephraim Fischbach, "A study on the randomness of the digits of π," International Journal of Modern Physics C, vol. 16, no. 2 (February 2005), pp. 281-294.
3. Pi Sequence Finder.
4. R. Pis Diez, M. P. Iñiguez, M. J. Stott, and J. A. Alonso, "Theoretical study of the collective electronic excitations in single- and multiple-shell fullerenes," Phys. Rev., vol. B 52, no. 11 (September, 1995), pp. 8446-8453.
5. R. Astala, M. Kaukonen, R. M. Nieminen, G. Jungnickel and Th. Frauenheim, "Simulations of diamond nucleation in carbon fullerene cores," Phys. Rev., vol. B 63, no. 8 (February, 2001), 081402 (2001).

November 12, 2007

Faster than a Speeding Bullet

Cosmic rays are energetic particles streaming through the universe. They are mostly protons (90%), helium nuclei (alpha particles, 9%) and electrons (1%). The origin of these particles has been a standing mystery in astrophysics because of their very high energy, which ranges up to 10^20 electron volts (eV). This is a much higher energy than that produced in terrestrial particle accelerators, about 10^13 eV, so it's been conjectured that an understanding of what produces these particles will offer an important clue to the nature and structure of the universe. Cosmic rays at the extreme end of the energy spectrum were discovered in 1962, and their origin and possible mode of production have been a mystery since then.

A just-published study by a multi-national team presents evidence that the highest-energy cosmic rays are produced by the massive black holes at the centers of some galaxies. [1-7] Galaxies characterized as having an "active galactic nucleus" are presumed to harbor these massive black holes. Our own Milky Way Galaxy has a black hole of about three million solar masses at its center, but this doesn't qualify as an active galactic nucleus. Using the largest detector of cosmic rays in the world, the Pierre Auger Cosmic Ray Observatory in Argentina, a team of scientists from seventeen countries analyzed the direction of origin of cosmic rays. They found that the highest energy cosmic ray sources are not distributed uniformly; rather, their directions correlate with the locations of galactic centers. It's possible that all cosmic rays originate in galactic nuclei.

Detection of these highly energetic cosmic rays is complicated by several factors. The first difficulty is that only 1% of galaxies have active galactic nuclei, which are presumed to form by the collision and coalescence of galaxies. Also, energetic cosmic rays are extremely rare; their flux at the Earth is only about one particle per square kilometer per hundred years. One aid to detection is that the cosmic rays need not be detected directly. Instead, they are absorbed in the upper atmosphere and cause a shower of particles to impinge on the Earth. This "air shower" covers a large area, so the detector array must be huge. We should be thankful for our protective atmospheric shield, since it converts a single proton or alpha particle with the energy of a pitched baseball into a gentle shower.
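
As a rough sanity check on the figures above, the following sketch (using round textbook constants of my own choosing, not numbers from the Auger team) compares the energy of such a particle to a pitched baseball and estimates the event rate over the detector array:

```python
# Back-of-the-envelope check of two claims: a 10^20 eV particle carries
# a macroscopic energy, and the huge Auger array catches a usable rate.
eV = 1.602e-19                     # joules per electron volt
E = 1e20 * eV                      # ~16 J for a 10^20 eV particle
m_baseball = 0.145                 # kg, regulation baseball
v = (2 * E / m_baseball) ** 0.5    # ~15 m/s, a gently lobbed pitch

area_km2 = 1200 * 2.59             # 1,200 square miles ~ 3,100 km^2
flux = 1 / 100.0                   # ~1 particle per km^2 per century
events_per_year = flux * area_km2  # ~31 showers per year at these energies
```

At roughly thirty showers per year over the whole array, a few years of operation would yield event counts on the order of the 77 high-energy events reported.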

The Pierre Auger Observatory, named after the scientist who first observed cosmic ray air showers in 1938, consists of an array of 1,600 detectors spaced about a mile apart on 1,200 square miles of wasteland in Argentina, and a companion array of twenty-four optical telescopes to record the atmospheric fluorescence at the apex of the air shower. Construction of the observatory began in 1999, and data collection began in 2004. Since that time, almost a million air showers have been detected, but only 77 of these had energy greater than 4 x 10^19 eV; that is, 40 exa-electron volts (EeV). The detector could resolve the direction of these events to within a few degrees. The directions of the twenty-seven events with the highest energy (above 57 EeV) correlated well with the locations of nearby galaxies with active nuclei, but the mechanism of their creation is still unknown. The published article appears in the November 9, 2007, issue of Science. [8]

1. The Pierre Auger Collaboration Press Release, November 8, 2007.
2. Dennis Overbye, "Energetic Cosmic Rays May Start From Black Holes" (New York Times, November 9, 2007).
3. J. R. Minkel, "Fastball-Strength Cosmic Rays Traced to Black Holes" (Scientific American Online, November 08, 2007).
4. John Johnson Jr., "Scientists trace cosmic rays to black holes" (Los Angeles Times, November 9, 2007).
5. Hazel Muir, "Monster black holes power highest-energy cosmic rays" (New Scientist News Service, November 8, 2007).
6. Katharine Sanderson, "High-energy cosmic rays traced to source" (Nature Online, November 9, 2007).
7. Lucy Sherriff, "Black holes blamed for super-charged cosmic rays" (The Register, November 9, 2007).
8. The Pierre Auger Collaboration, "Correlation of the Highest-Energy Cosmic Rays with Nearby Extragalactic Objects," Science, vol. 318, no. 5852 (9 November 2007), pp. 938-943.

November 08, 2007

A Not So Limiting Case

Recurring decimals are known to all elementary school students, and they don't need the New Math to know that 1/3 = 0.333..., with a three that repeats as long as you are willing to write it. A most interesting recurring decimal is 0.999.... Like 1/3, it is a rational number, and it has the unusual property that it is identical to 1. No, not so close to 1 that it may as well be 1. It can be proved that 0.999... and 1 are the same number.

The proof is quite simple:

• Let a = 0.999...
• then 10a = 9.999...
• subtracting: 10a - a = 9.999... - 0.999...
• 9a = 9
• a = 1

Unlike the calculation in a past article where we proved that 1 = 2, this proof is correct. There are many other proofs [1], but this is the simplest.
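
The limit argument behind the proof can also be checked numerically. This sketch uses Python's exact rational arithmetic to show that each partial sum 0.9, 0.99, 0.999, ... falls short of 1 by exactly 1/10^n, a gap that vanishes in the limit:

```python
from fractions import Fraction

def partial_sum(n):
    """Sum of 9/10 + 9/100 + ... + 9/10^n as an exact fraction."""
    return sum(Fraction(9, 10**k) for k in range(1, n + 1))

# The gap between 1 and the n-digit truncation of 0.999... is exactly 10^(-n).
for n in (1, 5, 20):
    assert 1 - partial_sum(n) == Fraction(1, 10**n)
# Since the gap shrinks to zero, the limit - the value of 0.999... - is exactly 1.
```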

1. Wikipedia article on 0.999...

November 07, 2007

FM Radio

The past several days have marked two anniversaries in frequency modulated (FM) radio. One of these was the fiftieth anniversary of WRPI, the student-operated FM radio station of Rensselaer Polytechnic Institute (RPI). It makes sense that RPI, founded in 1824 and one of the first engineering schools in the United States, would embrace radio technology. WRPI wasn't RPI's first radio station. Before FM radio was invented, RPI had an AM radio station, WHAZ, which began broadcasting in 1922. To illustrate the experimental nature of radio in its early days, WHAZ originally broadcast on 790 kHz, but it switched to 1300 kHz when General Electric petitioned for exclusive use of that frequency for its flagship radio station, WGY. WHAZ eventually migrated to 1330 kHz in a major frequency reallocation in 1941. This frequency reallocation is also the reason why there is no television "channel 1" in the US. RPI sold WHAZ in 1967, and it operates now as a commercial radio station. I was an operator at WHAZ during the brief time I was a student at RPI. Interestingly, I worked one summer for the General Electric television station, WRGB, down the hall from the WGY studios.

WRPI began broadcasting on November 1, 1957, when there were very few stations in the FM Radio Band. When it began operation, it broadcast with an effective radiated power (ERP) of about 800 watts using a circular dipole antenna mounted on a wooden pole attached to the roof of a university building. I was an operator at WRPI, and I was involved in the planning for a new antenna and transmitter to bring the ERP to 10,000 watts. This included my taking aerial photographs of the intended antenna site. This was an interesting experience, since the pilot would bank the small aircraft sharply, so I could get a better view, and it felt like I was falling out of the plane! Two other students parlayed their experience in building this new radio station into a business.

FM radio was invented by Edwin H. Armstrong (1890-1954) in the 1930s. On November 6, 1935, Armstrong presented his research on frequency modulation radio transmission and detection at a meeting of the Institute of Radio Engineers in New York City. Armstrong made his mark in radio history much earlier than that, since he invented the superheterodyne receiver in 1918. In a superheterodyne receiver, high frequency radio signals are converted into lower frequencies for easier amplification and signal processing. The superheterodyne principle is incorporated into nearly every receiver manufactured today, but it may be replaced eventually by software-defined radio designs.
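
The frequency conversion at the heart of the superheterodyne receiver is just the product-to-sum trigonometric identity. The sketch below, with illustrative frequencies of my own choosing (not from any particular receiver design), shows how mixing an incoming station with a local oscillator yields sum and difference frequencies, the lower of which is the standard 10.7 MHz FM intermediate frequency:

```python
import math

f_rf = 98.3e6        # an FM station at 98.3 MHz (illustrative)
f_lo = 87.6e6        # local oscillator, tuned 10.7 MHz below the station
f_if = f_rf - f_lo   # difference frequency: 10.7 MHz, the standard FM IF

# Mixing (multiplying) the two signals produces sum and difference
# frequencies, per cos(a)cos(b) = 1/2[cos(a-b) + cos(a+b)].
for t in (0.0, 1.3e-9, 4.7e-9):   # spot-check the identity at a few instants
    a, b = 2 * math.pi * f_rf * t, 2 * math.pi * f_lo * t
    mixed = math.cos(a) * math.cos(b)
    assert abs(mixed - 0.5 * (math.cos(a - b) + math.cos(a + b))) < 1e-9
```

A filter after the mixer keeps only the difference frequency, so all stations are amplified and demodulated at the same, much lower, intermediate frequency.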

Before Armstrong, frequency modulation was an unwanted side-effect in amplitude modulated transmitters. One advantage of FM is noise reduction, since most natural noise sources, such as lightning, perturb a signal's amplitude rather than its frequency, so an FM receiver can reject them. Another advantage is that the location of the FM broadcast band at high frequencies allows a greater modulation bandwidth and higher audio fidelity. Armstrong received the first Medal of Honor of the Institute of Radio Engineers, one of the founding societies of today's Institute of Electrical and Electronics Engineers, and he was listed as an inventor on forty-two US patents. However, much of his later life was rendered miserable by patent litigation, and he committed suicide on January 31, 1954.

1. Edwin Howard Armstrong (Wikipedia).
2. Edwin H. Armstrong, 1890-1954 (Excerpt from J. E. Brittain, "The Legacy of Edwin Howard Armstrong," Proceedings of the IEEE, vol. 79, no. 2, February 1991).
3. Edwin H. Armstrong, "METHOD OF RECEIVING HIGH FREQUENCY OSCILLATIONS," US Patent No. 1,342,885 (June, 1920). The superheterodyne patent.
4. Edwin H. Armstrong, "RADIO SIGNALING SYSTEM," US Patent No. 1,941,066 (Dec 26, 1933). The FM patent.

November 06, 2007

Basic Building Blocks

The origin of life has always been a chicken-and-egg problem. Life processes require some fairly complex molecules, and these are found on Earth only because of life processes. How did simple chemical ingredients come together to bootstrap this process, a problem formally called abiogenesis?

When I was in elementary school, I read about an experiment done by Stanley L. Miller, who was a graduate student of Harold Urey. Urey was a prominent physical chemist, most famous for his work on isotopes that won him a Nobel Prize in Chemistry in 1934. Later in life, Urey became interested in how planets evolved and the origin of life. The Miller-Urey experiment is so simple it could be conducted by a high school student with careful safety supervision. Based on the theories of the day that the primitive Earth had an atmosphere of hydrogen (H2), water (H2O), methane (CH4) and ammonia (NH3), Miller put all these into a reflux system with electrical sparks to simulate lightning. After a week, chemical analysis showed that a sizeable portion of the carbon was contained in organic compounds. This experiment had a tremendous impact on the origin of life question.

Recently, Paul von Ragué Schleyer and his research team in the Department of Chemistry at the University of Georgia have focused their efforts on the primitive synthesis of adenine. [1] Adenine is a simple purine and one of the essential constituents of DNA, and it forms a large portion of the adenosine triphosphate (ATP) molecule. ATP is a part of the mechanism by which chemical energy is transported within cells.

Adenine is known to form non-biologically. It's been detected spectroscopically in interstellar space, a frozen mixture of hydrogen cyanide in ammonia was found to contain adenine, and it's been produced in high-temperature experiments simulating volcanic environments. [2] Schleyer's team did molecular modeling of how adenine might be formed from the combination of five hydrogen cyanide (HCN) molecules. They investigated many possible mechanistic routes and found that pathways catalyzed by water or ammonia are the most favorable. All done with computers - no rubber gloves necessary!
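
As a quick bookkeeping check on the chemistry described above (a sketch, with the molecular formulas written out by hand), five HCN molecules supply exactly the atoms of one adenine molecule, C5H5N5:

```python
from collections import Counter

# Atom inventory for the reaction studied: 5 HCN -> adenine (C5H5N5).
hcn = Counter({"C": 1, "H": 1, "N": 1})       # hydrogen cyanide, CHN
adenine = Counter({"C": 5, "H": 5, "N": 5})   # adenine, C5H5N5

five_hcn = Counter({el: 5 * n for el, n in hcn.items()})
assert five_hcn == adenine   # no atoms left over; the stoichiometry balances
```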

1. Debjani Roy, Katayoun Najafian, and Paul von Ragué Schleyer, "Chemical evolution: The mechanism of the formation of adenine under prebiotic conditions," Proc. Natl. Acad. Sci., vol. 104, no. 44, pp. 17272-17277.
2. Kim Osborne, "How did chemical constituents essential to life arise on primitive Earth?" (University of Georgia Press Release, October 30, 2007).

November 05, 2007

The Play's the Thing!

Technology has been used in theater production from antiquity to the present. An early example is the Deus ex machina (απο μηχανης θεος), literally, "God from the machine." Some Greek plays, such as those of Euripides, would introduce a god, lowered from the sky on a crane, to resolve the plot from a hopeless situation. Today, technology's influence on theater can be seen in stage lighting, electronic music synthesizers simulating most of an orchestra (much to the consternation of musicians), auditoriums designed using acoustic modeling, and wireless microphones that allow actors to be heard without shouting.

Technology is having another influence, since serious science and mathematics have become popular topics for theater plays. Since theater is a mirror of life, and life is now full of technology, it seems like an obvious trend. One of the first serious examples was "Copenhagen," a play by Michael Frayn [1]. The topic of this play (quantum physics) was so unusual that there was a symposium on it at the Graduate Center of the City University of New York in March, 2000. Among the speakers were Hans Bethe and John Wheeler. I happily attended this symposium, and Wheeler kindly autographed my copy of his book, "Geons, Black Holes, And Quantum Foam: A Life in Physics." Of course, physics by itself will not generate much of an audience, so Frayn's play deals mostly with the personality conflict between Niels Bohr and Werner Heisenberg. As the story goes (subject to much redaction by Heisenberg in his later years), Heisenberg went to visit his mentor, Bohr, in 1941 to obtain Bohr's help in creating a German atomic bomb. Of course, old physicists do not a play make, so there's a substantial role for Bohr's wife, Margrethe. According to Margrethe, Heisenberg arrived wearing a German army uniform, but this may have been just a way to ensure easy transport. Heisenberg had suggested in his later years that he kept his research at a very basic science level to prevent the German military from getting a bomb. One take-away from the play (suitable for student book reports) is that the ambiguity of what actually happened during that visit mirrors quantum indeterminacy.

Perhaps inspired by the favorable public reaction to "Copenhagen," David Auburn wrote "Proof", a play that won the 2001 Tony Award for Best Play, and the 2001 Pulitzer Prize for Drama. The plot is about an important proof concerning properties of prime numbers (a proof of the Riemann Hypothesis?) found in the office of a recently-deceased mathematics professor, and the efforts of his daughter to prove his authorship. Not the usual stuff of plays, is it? Of course, there has to be a romance between the daughter and her father's former student who discovered the manuscript. It also didn't hurt that the film adaptation (2005, John Madden, Director) starred Anthony Hopkins as the father and Gwyneth Paltrow as the daughter.

Most recently, we have A Disappearing Number, a play by Simon McBurney, who also directs [2-4]. Part of the play revolves around the life of the extraordinarily gifted Indian mathematician, Srinivasa Ramanujan, and his Cambridge University mentor, the prominent mathematician Godfrey Harold Hardy. The plot tells a parallel tale of a present-day mathematician who is studying Ramanujan's work. Surprisingly, this is a woman mathematician, but I'm talking statistics here, so don't go all Larry Summers on me! Of course, there's a love affair between the woman mathematician and an Indian-American businessman. Since Lit 101 demands the use of more than one of the usual thematic devices of irony, etc., our woman mathematician dies unexpectedly from a brain aneurysm in a foreign country, echoing Ramanujan, who fell ill in England and died of tuberculosis at a very young age after returning to India. I haven't seen the play, so my assessment may be too harsh.

I must confess that my wife (a chemist) and I enjoy watching the television show, "Numb3rs," which currently airs Friday nights on the CBS television network. In "Numb3rs," there are at least three romantic affairs going on at the same time, and that excludes the love affair of men and their guns.

Whence the origin of the phrase, "The play's the thing," and what is the thing? The phrase comes from Shakespeare's Hamlet [5]:

I'll have grounds
More relative than this - the play's the thing
Wherein I'll catch the conscience of the King.

The thing is Hamlet's device of writing some lines about regicide into a play to see his uncle's reaction. Hamlet was informed by a ghost that his uncle murdered his father to take his throne, but Hamlet needed a way to be sure.

1. James Glanz, "Of Physics, Friendship and the Atomic Bomb," New York Times (March 21, 2000).
2. Review: A Disappearing Number (New Scientist Online, September 24, 2007).
3. Michael Billington, "A Disappearing Number" (The Guardian (UK), September 12, 2007).
4. Louise Whiteley, "Mathematics: Variations on a Theorem," Science, vol. 318, no. 5847 (October 5, 2007), pp. 47-48.
5. Hamlet Act 2, scene 2, 603-605.