« July 2008 | Main | September 2008 »

August 28, 2008

An Homeric Eclipse

Our early ancestors marked the passage of years, not by a numerical reckoning, but by significant historical events. Thus, we are left to decide which of our numeric years coincides with "the fifteenth year of the reign of Tiberius Caesar," or "two hundred and thirty-nine years after the founding of the city of Rome." The latter standard of timekeeping was popular, since Rome dominated the world for many centuries. The expression, "ab urbe condita" ("from the founding of the city"), or simply a.u.c., is written often in ancient manuscripts. The founding of the city of Rome is conjectured to be April 21, 753 BC. This date is consistent with the necessary passage of years from the date of conception of its founders, Romulus and Remus, which tradition makes coincident with a solar eclipse on June 15, 763 BC. The inclusion of such astronomical events in ancient texts allows us to establish dates with surprising precision.

Homer's Odyssey [1], a description of the adventures of Odysseus, the king of Ithaca, as he traveled home from the Trojan War, contains several astronomical references. One of these, a description of a solar eclipse in Book XX of the epic, allowed two astronomers to estimate the date of a described event, the slaughter of Penelope's suitors, as April 16, 1178 BC. This date is consistent with the established dates of the Trojan War (1192-1184 BC) [2-3].

This astronomical dating by Marcelo O. Magnasco of the Laboratory of Mathematical Physics at Rockefeller University (New York) and Constantino Baikouzis of the Observatorio Astronómico (La Plata, Argentina) was aided by the fact that total eclipses of the sun are infrequent. Their very rarity is the likely reason they are mentioned in ancient texts such as the Odyssey. The idea of dating the voyage of Odysseus this way is not new. It was proposed in the 1920s, but there was no consensus between astronomers and historians as to which particular eclipse was described. Magnasco and Baikouzis consider not just the eclipse, but several other astronomical references. The constellation, Boötes, is mentioned, as well as the Pleiades star cluster (also known as the Seven Sisters), the planet, Venus, and the New Moon.

The New Moon reference is hardly helpful, since eclipses of the sun always happen at the time of the new moon. Venus is described as visible and high in the sky six days before Odysseus' vengeance. The Pleiades and Boötes are described as simultaneously visible at sunset, twenty-nine days before. Magnasco and Baikouzis speculate also that a trip by the god, Hermes, to Ogygia describes an astronomical event, since Hermes is identified with the planet, Mercury. Hermes' trip, thirty-three days prior to the event, is identified with the idea that Mercury was high in the sky at dawn at the western end of its trajectory. Hermes carried most of his messages west. An analysis of the ordering of these events with the eclipse for a span of years near the estimated date gave a firm date of April 16, 1178 BC.

Such dating is possible because of the regular solar system in which we live. In his analysis of ancient texts from many cultures, one scholar concluded that our solar system was anything but regular. Immanuel Velikovsky (1895-1979), who helped found the Hebrew University of Jerusalem, published Worlds in Collision in 1950. In this book, Mars and Venus step out of their orbits to nearly collide with Earth, and these close encounters are responsible for such ancient catastrophes as Noah's Flood.

1. Homer, "The Odyssey," translated by Samuel Butler (MIT Classics).
2. Joseph Bonner, "Celestial clues hint at eclipse in Homer's Odyssey" (Rockefeller University Press Release, June 23, 2008).
3. Constantino Baikouzis and Marcelo O. Magnasco, "Is an eclipse described in the Odyssey?" Proc. Natl. Acad. Sci., vol. 105, no. 26 (July 1, 2008), pp. 8823-8828.

August 27, 2008

You Are What You Read

My reading habits have changed considerably over the years. The biggest difference is that I seem to have less time to read. When I first started the practice of science, I would make frequent trips to the library to review the current journals and browse the newest additions to the book collection. Now, I visit the library only to consult weighty reference tomes, such as the JANAF Thermochemical Tables [1], and I read only those print journals to which I have a personal subscription. Fortunately, I have personal print subscriptions to Nature and Science, among others, but much of my reading is online. The way I access scientific information is not unlike the way most scientists do. As a study by a sociologist at the University of Chicago shows, this has influenced the way we scientists cite the literature.

James A. Evans of the University of Chicago Sociology Department, whose other interest is the influence of industrial funding on research, has published an analysis of how the internet age has changed the reading and citation habits of scientists. This thorough study [2-4] used a database of 34 million articles. Contrary to expectations, the availability of huge online archives of articles from many scholarly journals has not broadened the citation base of scientists; instead, citations to previous work have become more narrowly focused on more recent articles from fewer journals.

Not surprisingly, each science has its own personality. Scientists in the life sciences tend to cite fewer articles, and social scientists tend to cite a greater proportion of newer articles than those in other disciplines. Evans argues that the past method of accessing the literature, standing in the library in front of a wall of bound journals, gave scientists the opportunity to form their own opinion of subject matter. Evans cites "poor indexing" as one of the more important aspects of print journals, since it interjected unexpected content into literature searches.

Online search has relegated older papers to an electronic backwater. For example, how many of us have gone beyond the first few pages of links in a Google search? Hyperlinks in online articles tend to focus thought on prevailing opinion, which accelerates consensus and discourages original thought. Says Evans [4], "Online access facilitates a convergence on what science is picked up and built upon in subsequent research... It's like new movies. If movies don't get watched the first weekend, they're dropped silently." His work was supported by the National Science Foundation.

1. The NIST-JANAF Thermochemical Tables, Fourth Edition, Monograph No. 9 (Part I and Part II, M.W. Chase, Jr., Editor) is available from the American Institute of Physics, 2 Huntington Quadrangle, Melville, NY 11747-4502.
2. James A. Evans, "Electronic Publication and the Narrowing of Science and Scholarship," Science, vol. 321, no. 5887 (July 18, 2008), pp. 395-399.
3. Jennifer Couzin, "Survey Finds Citations Growing Narrower as Journals Move Online," Science, vol. 321, no. 5887 (July 18, 2008), p. 329.
4. Dana W. Cruikshank, "Research Publications Online: Too Much of A Good Thing?" (NSF Press Release, July 17, 2008).

August 26, 2008

Not in My Neighborhood

When I was a child, there was a popular joke about a man searching for something under a lamp post in the middle of the night. A passerby asked him what he was doing. "I'm looking for my car keys," the man said. "Let me help. Where did you drop them?" The man pointed towards a parked car. The passerby asked, incredulously, "Why are you looking here?" "The light's better." This joke came to mind when I read a review of an article about research on bee foraging and the implications of this research for forensic investigations [1].

How do bees forage? Bees rarely feed near their hive. Instead, they maintain a buffer zone around it, which reduces the chance of a parasite or predator locating their nest. In analyzing this pattern, a team of scientists from the School of Biological and Chemical Sciences, University of London (London, UK), and the Department of Criminal Justice, Texas State University-San Marcos (San Marcos, Texas) found that it was similar to the way criminals stalk their victims. As if lifted from a plot of the television series, Numb3rs, the team is populating their bee models with information from criminal investigations. It's not often that a scientific paper references the Yorkshire Ripper.

The foraging model is mathematically quite simple [2]. Foraging is very sparse near the hive, increases up to a certain distance, and then falls off as distance from the hive becomes excessive. The foraging probability, when plotted against distance along radial lines, peaks sharply at a specific distance. Geographic profiling of this sort is being applied also to the distribution of poaching snares in Zimbabwe.
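A toy profile with this shape can be written in a few lines of Python. This is my own illustrative sketch, not the model fitted in the paper; the distances and decay rate are invented for the example:

```python
import math

def foraging_weight(d, peak=125.0, decay=80.0):
    """Toy buffer-and-decay foraging profile (illustrative only).
    Weight rises linearly inside the buffer zone, peaks sharply at
    'peak' meters from the hive, then falls off exponentially."""
    if d < peak:
        return d / peak                      # sparse foraging near the hive
    return math.exp(-(d - peak) / decay)     # falls off at excessive distances
```

Plotting foraging_weight against distance reproduces the qualitative shape described above: zero at the hive, a sharp point at the peak distance, and a long tail beyond it.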

As an example that the miniaturization of electronics is reaching extreme levels, the University of London team is gluing RFID tags onto bees to track their exit from the hive and their return.

1. Jennifer Carpenter, "Bees join hunt for serial killers" (BBC News, July 30, 2008).
2. Nigel E. Raine, D. Kim Rossmo and Steven C. Le Comber, "Geographic profiling applied to testing models of bumble-bee foraging," Journal of The Royal Society Interface (DOI: 10.1098/rsif.2008.0242, July 3, 2008).

August 25, 2008

Happy Birthday, RAND

Computer scientists, when they see the letters, rand, expect to see a closed set of parentheses after them, as in rand(). That's because rand() is the pseudo-random number generating function in the C programming language. Many other scientists will see all capital letters. The RAND Corporation, usually referred to as just RAND, is celebrating the sixtieth anniversary of its founding [1].
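For the curious, rand() is usually implemented as a linear congruential generator. Here's a minimal Python sketch using the multiplier and increment from the C standard's well-known example implementation; it illustrates the technique, and is not the code of any particular C library:

```python
class CRand:
    """Linear congruential generator in the style of the C standard's
    sample rand() implementation: next = next * 1103515245 + 12345."""
    def __init__(self, seed=1):
        self.state = seed

    def rand(self):
        self.state = (1103515245 * self.state + 12345) % (2 ** 31)
        return self.state >> 16   # high-order bits, in the range 0..32767
```

Seeded with 1, the first call returns 16838, the familiar first value of this textbook generator.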

RAND, which is a contraction of Research and Development, began as an Army Air Force project, Project RAND, in 1946, but it wasn't established as an independent, nonprofit corporation until 1948. RAND was the first think tank; that is, an institute whose product is just ideas and analysis. About 95% of RAND's research is unclassified, and many of its policy papers are available at no charge from its website. RAND intends to put its entire library of unclassified papers online. One interesting selection already available is UFOs: What To Do? (George Kocher, 1968).

One interesting aspect of RAND's history is that its Santa Monica, California, headquarters building was designed by John Williams, head of its Mathematics Division [2]. Williams had the idea that the building should be designed to maximize the probability of chance meetings between RAND's personnel, thereby encouraging interdisciplinary communications. Architects of industrial and academic R&D laboratories have been pushing this principle for years, as evidenced in the Laboratory of the Year competition by R&D Magazine. Since a true thinker doesn't turn his brain off at the end of the work day, a RAND employee invented windsurfing during his off hours [3].

As part of its sixtieth anniversary celebration, RAND asked its staff to propose essays on current policy issues they believe are being ignored. These were distilled down to the following eleven Emerging Challenges [4]:

• The Aging Couple
• Corporate America's Next Big Scandal
• Innovative Infrastructure
• The Day After: When Electronic Voting Machines Fail
• Reality Check for Defense Spending
• A New Anti-American Coalition
• The Future of Diplomacy: Real Time or Real Estate?
• Corporate Counterinsurgency
• Beating the Germ Insurgency
• A Second Reproductive Revolution
• From Nation-State to Nexus-State

The last topic, by David Ronfeldt and Danielle Varda, two RAND political scientists, looks beyond security cameras to the myriad other sensors being deployed worldwide [5]. The internet has allowed immediate access to real time conditions anywhere in the world, including physical conditions such as forest fires, epidemics, animal migration, air and water pollution, and energy consumption; and social conditions, such as human-rights violations. These data are not just analyzed by governments, but by businesses and non-governmental organizations. As a consequence, Ronfeldt and Varda see the emergence of a nexus-state that will take over some functions of the nation-state.

The word, nexus, is not heard often. Nexus, written by Tim Berners-Lee, was the first web browser.

1. Frequently Asked Questions on the RAND Web Site.
2. J.D. Williams, "Comments on RAND Building Program" (December 26, 1950).
3. James R. Drake, "Wind Surfing - A New Concept in Sailing" (1969).
4. Eleven Emerging Challenges on the RAND Web Site.
5. David Ronfeldt and Danielle M. Varda, "From Nation-State to Nexus-State" (2008).

August 21, 2008

Nonexponential Decay

Exponentials seem to rule the world. Chemists have their Arrhenius plots of reaction rates, and physicists find exponential decay nearly everywhere. Because the sine and cosine functions can be expressed in terms of exponentials, nearly every scientist and engineer has encountered the exponential function in his work. The prevalence of the exponential is easy to explain. If a population of objects is growing or decaying at a certain rate, then the number at time (t+1) depends on the number existing at time (t). An example of this is radioactive decay, where the half-life specifies the time when half of the radioactive starting material has transformed into something else. Nothing has been as fundamental to nuclear physics as the exponential decay of radioactive elements; that is, until (possibly) now [1-4].
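The half-life rule is easy to state in code; this small sketch just evaluates the surviving fraction (1/2)^(t/T):

```python
def remaining_fraction(t, half_life):
    """Fraction of a radioactive sample remaining after time t,
    given its half-life (both in the same time units)."""
    return 0.5 ** (t / half_life)
```

For carbon-14, with a half-life of about 5730 years, remaining_fraction(11460, 5730) gives 0.25: two half-lives leave a quarter of the sample.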

A group of physicists led by Yuri Litvinov at the Gesellschaft für Schwerionenforschung, Darmstadt, Germany, has discovered a mysterious oscillation in the radioactive decay of hydrogen-like 140Pr and 142Pm ions contained in a storage ring. These oscillations are very easy to see [3], since their amplitude is about 20% of the baseline exponential curve, and they have a period of about seven seconds.
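A toy model of such a signal is just an exponential multiplied by a small oscillation. This is my own sketch for illustration; only the roughly 20% amplitude and seven-second period come from the report, while the lifetime is invented:

```python
import math

def modulated_decay(t, n0=1.0, lifetime=100.0, amp=0.2, period=7.0):
    """Toy model: exponential decay with a superimposed oscillation of
    relative amplitude 'amp' and period 'period' (t in seconds)."""
    return n0 * math.exp(-t / lifetime) * (1.0 + amp * math.cos(2.0 * math.pi * t / period))
```

Plotted on a logarithmic scale, a plain exponential is a straight line; this function wobbles around that line once every seven seconds, which is the signature seen in the storage-ring data.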

In the early days of nuclear physics, Eugene Wigner and Victor Weisskopf conjectured from quantum principles that such deviations from exponential behavior might occur for short and long decays. Litvinov's group has a better explanation. They believe that they are seeing the effect of neutrino oscillation [5], a process conjectured to explain the solar neutrino problem. The solar neutrino problem, that the number of neutrinos emanating from the sun is just a third of the number predicted by accepted models, has been around for nearly fifty years. The current best explanation is that neutrinos oscillate from one type (electron, muon or tau) to another over the course of time. The time scales seen in the Darmstadt experiment are consistent with this interpretation.

Of course, this result becomes believable only when confirmed by other experiments. A team from Lawrence Berkeley National Laboratory tried a similar, though not exact, experiment, and they saw no oscillation [6]. The Darmstadt team is very certain of its data, since they took great care to investigate possible sources of error. However, no one is willing to rewrite the textbooks until an exact replay of their experiment is done.

1. Bertram Schwarzschild, "Physics Update: Nonexponential Nuclear Decay," Physics Today, vol. 61, no. 8 (August 2008), p.24.
2. Philip M. Walker, "Nuclear physics: A neutrino's wobble?", Nature, vol. 453, no. 7197 (June 12, 2008), pp. 864-865.
3. Graph of oscillating decay data for 142Pm.
4. Yu. A. Litvinov, et al., "Observation of non-exponential orbital electron capture decays of hydrogen-like 140Pr and 142Pm ions," Physics Letters B, vol. 664, no. 3 (June 19, 2008), pp. 162-168; doi:10.1016/j.physletb.2008.04.062.
5. Harry J. Lipkin, "The GSI method for studying neutrino mass differences - For Pedestrians" (arXiv Preprint, June 1, 2008).
6. P. A. Vetter, R. M. Clark, J. Dvorak, S. J. Freedman, K. E. Gregorich, H. B. Jeppesen, D. Mittelberger, and M. Wiedeking, "Search for Oscillation of the Electron-Capture Decay Probability of 142Pm" (arXiv Preprint, July 3, 2008).

August 20, 2008

Peter Kapitza

Peter (Pyotr) Kapitza (1894-1984) was an important twentieth century physicist who started his career working with Ernest Rutherford at the Cavendish Laboratory of Cambridge University. There's a story told, perhaps apocryphal, about how Kapitza came to work with Rutherford at the Cavendish Laboratory. Kapitza arrived, unannounced, at the Laboratory and asked Rutherford if he could work there. Rutherford said there were no openings. Kapitza asked how many scientists were on Rutherford's staff, and Rutherford answered that there were about thirty. Kapitza then asked what the usual precision of measurement was in his experiments, and Rutherford said it was about two or three percent. Kapitza replied that adding another staff member wouldn't be noticed, since this was within the usual margin of error for the laboratory.

While at the Cavendish, Kapitza undertook fundamental studies in several areas of physics. He investigated the properties of matter subjected to intense magnetic fields. For these studies, he built solenoid electromagnets that he pulsed briefly with intense electric currents. These electromagnets produced fields of 320 kilogauss in a small volume [1]. Using such magnets, Kapitza showed that the magnetoresistance of metals was linear to high fields. Shortly thereafter, he turned his attention to low temperatures and advanced the field by developing an efficient method for production of liquid helium using adiabatic expansion.

Kapitza spent ten productive years at the Cavendish, but his sojourn there ended in 1934 when he was detained during a visit to his homeland, the Soviet Union. A scientist of his stature was a valuable commodity, so he was not permitted to leave, but he was given financial support to continue his research. The Soviet Union asked Rutherford to ship Kapitza's equipment to his new location. Rutherford didn't like the idea of one of his best scientists being kidnapped this way, but he realized that Kapitza needed his equipment, so he had an agreement drafted that he would ship Kapitza's "laboratory" to the Soviet Union at the Soviets' expense. Rutherford had the brick building disassembled, and he shipped the bricks along with the equipment [2]. This laboratory was the start of the Institute for Physical Problems.

While in the Soviet Union, Kapitza discovered the superfluidity of liquid helium, a consequence of quantum mechanics dominating fluid interactions at low temperature. He discovered also what's now called Kapitsa resistance, which is the resistance to heat flow at the interface of liquid helium and solids [3]. Kapitza shared the 1978 Nobel Prize in Physics for his low temperature research. His international importance can be inferred by his election to the US National Academy of Sciences in 1946.

Because of variations in the way Russian characters are translated into English characters, Kapitza is known also as Kapitsa. Google gives 290,000 page hits for Kapitza, and only 78,000 page hits for Kapitsa, so I've used Kapitza in this article.

1. Official Nobel Prize Biography of Pyotr Leonidovich Kapitsa.
2. Richard Reeves, "A Force of Nature: The Frontier Genius of Ernest Rutherford" (W. W. Norton, December 3, 2007).
3. P. L. Kapitza, J. Exptl. Theoret. Phys. (U.S.S.R.), vol. 11 (1941), p. 1f.

August 19, 2008

Near-Earth Objects

A shooting star is a pretty sight whether or not you realize that it's a sand-grain sized meteor that's burning up in the Earth's atmosphere. Of course, some meteors are larger than others, and we occasionally read about one reaching Earth's surface and striking an object. One of these was the twelve kilogram meteor that struck a 1980 Chevy Malibu in Peekskill, New York, in 1992 [1]. This meteor was so large that its fireball was sighted over a forty second period across the US before its landing in Peekskill. There have been more spectacular meteor falls in recent history, such as the meteor that fell on June 30, 1908, at Tunguska in Russia, and other similar events. The Tunguska event had an estimated blast-equivalent of 10 megatons.

The typical meteor is the size of a grain of sand, and an average of 5-10 of these hit the Earth each hour. During meteor showers, there can be hundreds an hour for a few hours. As every Six Sigma practitioner knows, if meteor size follows a normal distribution, with a kilogram-sized meteor finding its way to Earth about every year and a Tunguska-sized meteor hitting about every century, then you don't need to go many sigmas from the mean on the time scale to get a real blockbuster. The biological evolution of the Earth was significantly changed 250 million years ago when 90% of all species were killed off, possibly by the impact of a huge meteor. The results of this mass extinction are found in the geological record at the Permian-Triassic boundary. A similar event at the Cretaceous-Tertiary boundary was responsible for the extinction of the dinosaurs and the rise of mammals about 65 million years ago. Humans are lucky to have survived on Earth for so long. We are even luckier that now, with spaceflight becoming almost routine, we have the means to protect the Earth from such large meteors, but only if we see them coming.

The conventional technique used by astronomers to locate near-Earth objects is the blink comparator. Photographs of the same section of the night sky are taken a day, or several days, apart. The blink comparator projects these photographic images for an astronomer, repeatedly cycling between them. A near-Earth object is seen as a moving spot of light against a static background of fixed stars. Clyde Tombaugh discovered the planet Pluto in just this way. Of course, the astronomers' glass photographic plates have given way to digital photography, and newer image differencing techniques are now used. Image analysis software has become so advanced that this entire process can be automated, and near-Earth objects already known can be eliminated with reference to a database. With these computer tools in place, we now have the capability to catalog all substantial near-Earth objects and keep track of the larger and potentially dangerous ones.
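At its core, the differencing step is simple: subtract two registered exposures and keep the pixels that changed. A bare-bones sketch in pure Python (the frames and threshold here are made up for illustration) might look like this:

```python
def moving_pixels(frame_a, frame_b, threshold=50):
    """Compare two registered exposures of the same sky field.
    Fixed stars cancel in the difference; a moving object appears
    as pixels that changed between the frames."""
    hits = []
    for r, (row_a, row_b) in enumerate(zip(frame_a, frame_b)):
        for c, (pa, pb) in enumerate(zip(row_a, row_b)):
            if abs(pa - pb) > threshold:
                hits.append((r, c))
    return hits

# A fixed star at (1, 1) cancels; an object moved from (0, 0) to (0, 2).
night_1 = [[200, 0, 0], [0, 255, 0], [0, 0, 0]]
night_2 = [[0, 0, 200], [0, 255, 0], [0, 0, 0]]
print(moving_pixels(night_1, night_2))   # [(0, 0), (0, 2)]
```

Real pipelines add registration, point-spread-function matching, and database lookups of known objects on top of this basic idea.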

The US has taken the lead in this effort. In 1991, the US Congress asked NASA to look into the detection of near Earth objects, as well as methods for their deflection away from an encounter with Earth. After several studies, NASA was authorized by Congress in 2005 to catalog at least 90% of all near-Earth objects greater than 140 meters in size by the year 2020. One tool in this effort is a collaboration of the US Air Force and the University of Hawaii called the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS) [2-3]. This survey system will be composed of four telescopes equipped with 1.4-gigapixel cameras. This is a resolution hundreds of times greater than your home digital camera. If you think your family photographs take too much space on your hard drive, imagine the data-store needed for the Pan-STARRS system. Pan-STARRS will have 1.1 petabytes of storage (peta- is 1015, as compared with the more common giga- 109). The storage will be on fifty networked PC servers.
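A back-of-envelope calculation shows why so much storage is needed. Assuming (my assumption, purely for illustration) 16-bit samples and no compression, one 1.4-gigapixel exposure is about 2.8 gigabytes, so 1.1 petabytes holds roughly 400,000 exposures:

```python
pixels = 1.4e9               # one 1.4-gigapixel exposure
bytes_per_pixel = 2          # assumed: 16-bit raw samples, no compression
image_bytes = pixels * bytes_per_pixel     # 2.8e9 bytes, about 2.8 GB
storage_bytes = 1.1e15       # 1.1 petabytes
exposures = storage_bytes / image_bytes
print(int(exposures))        # 392857, roughly 400,000 exposures
```

The actual numbers depend on compression and on how many reduced data products are kept per exposure, but the scale of the problem is clear.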

Perhaps the cinema has served a purpose in stimulating public interest in near-Earth objects. There have been three major motion pictures about a meteor threatening Earth:

Meteor (1979, Ronald Neame, Director)

Deep Impact (1998, Mimi Leder, Director)

Armageddon (1998, Michael Bay, Director)

The first is my favorite. It includes an all-star cast of Sean Connery, Natalie Wood, Karl Malden, Brian Keith, Martin Landau, Trevor Howard, Richard Dysart and Henry Fonda. The special effects are no match for today's productions, but the portrayal of scientists is somewhat realistic.

1. The Car, the Hole, and the Peekskill Meteorite (Astronomy Picture of the Day).
2. Pan-STARRS Project Home Page.
3. Eric Lai, "When the meteor and the 1PB database collide" (Computerworld, August 8, 2008).

August 18, 2008

First Solar

I've reviewed photovoltaics in several recent articles [1-2]. The principal advantages of photovoltaic energy sources are their zero carbon emission and lack of pollution (aside from what may happen in the manufacture of the particular solar collector). There's plenty of solar energy available, since the solar constant, the integrated solar power at all wavelengths incident per unit area just outside the Earth's atmosphere, is a huge 1366 watts per square meter. The problem with conventional photovoltaics is that they are manufactured from single crystal silicon, which is expensive, or they are made from silicon or other materials in less expensive polycrystalline layers of low photovoltaic efficiency.

Silicon is not the best material for a solar cell, since it has an indirect band gap. Electrons in indirect bandgap semiconductors can change their energy state only if they change their momentum, also. This leads to inefficiency, but silicon is used since it's less expensive to make as a single crystal than other semiconductors. Compound semiconductors, such as gallium arsenide, are direct bandgap semiconductors, so they are more efficient photovoltaic materials. Cadmium telluride (CdTe) and cadmium sulfide (CdS) are other direct bandgap materials, and these materials have been chosen by First Solar, a photovoltaic system producer highlighted in the current issue of IEEE Spectrum magazine [3]. Cadmium telluride can be doped to produce a p-type semiconductor, and it can be combined with n-type cadmium sulfide to form a photovoltaic diode structure. First Solar has brought the manufacturing cost of its photovoltaics down to $1.14/W, and it sells these for $2.45/W. So-called grid-parity, the breakeven cost for substituting photovoltaics for off-peak grid power in the US, is a sale price of a dollar a watt. Most photovoltaic panels are now sold at less than $5.00 per peak watt.

CdTe and CdS are not new materials. First Solar's innovation is a manufacturing process to produce 60 x 120 centimeter glass panels of these materials. First Solar is selling its photovoltaic arrays as fast as it can make them, and it has a huge backlog of orders. It's no wonder that its stock price has climbed from $25 to $250 in less than two years. Its stock traded at $264.92 on Friday, August 15, 2008. First Solar had a 2007 revenue of $638 million, which returned a net income of $105 million. First Solar had 723 employees in 2007. The global market for photovoltaics is expanding at a 50% annual growth rate, and First Solar has plans to produce a gigawatt's worth of its panels in 2009 to capture about 15% of the world market. Its development has been aided substantially by orders from Germany, where government subsidies for solar power have gained First Solar $6 billion in orders through 2012.

Public sources indicate that the First Solar process is similar to the following. Glass sheets with an indium tin oxide transparent conductive coating are first heated in vacuum at 600 °C. Cadmium sulfide is then deposited onto the glass in vacuum by evaporation at 700 °C, followed by cadmium telluride at about the same temperature. The panels are then cooled by a nitrogen gas stream at 300 °C. This process cools the surface faster than the interior of the glass, and this causes surface compression of the glass that acts as a strengthening mechanism. A subsequent low temperature anneal in the presence of chlorine gas improves the photovoltaic efficiency by about ten percent by an unknown process (likely by deactivating impurity energy levels in the gap). A reflective metal conductor is then applied to act as the second collection electrode. Photovoltaic energy conversion efficiency is about 10.5%.

Cadmium is a plentiful material, although perhaps not that environmentally-friendly, but tellurium supplies might impact the cost of these photovoltaics. I reviewed the skyrocketing cost of indium in a previous article (Indium, January 8, 2008). One competing materials technology is copper indium gallium selenide (CIGS), which is more efficient [4], but it will suffer more from indium price escalation.

1. This Blog: Efficient Photovoltaics (July 21, 2008).
2. This Blog: Splitting Water MIT Style (August 5, 2008).
3. Richard Stevenson, "First Solar: Quest for the $1 Watt" (IEEE Spectrum, August, 2008).
4. Michael Powalla and Dieter Bonnet, "Thin-Film Solar Cells Based on the Polycrystalline Compound Semiconductors CIS and CdTe," Advances in OptoElectronics, vol. 2007 (18 July 2007), Article ID 97545.
5. First Solar on Wikipedia.

August 14, 2008

Infinite Series Expressions

One of the most fascinating parts of mathematics is the infinite series. The idea that sums of infinitely many terms, as specified by simple algorithms, can yield not only finite numbers, but mathematically important numbers, is among the most powerful in mathematics.

The harmonic series has the easiest rule for construction; namely, add up all the reciprocals of the natural numbers

1 + (1/2) + (1/3) + (1/4) + (1/5) + (1/6) ...

Unfortunately, the harmonic series is divergent; that is, it grows larger as more terms of the series are summed, and it doesn't approach a limiting value. An easy way to spot the divergence is by grouping the terms, as follows:

1 + (1/2) + (1/3 + 1/4) + (1/5 + 1/6 + 1/7 + 1/8) + (1/9 ...

It can be seen that each successive group of terms sums to at least 1/2, so we're summing infinitely many quantities, each no smaller than 1/2. This grouping trick was discovered by Nicole Oresme in the fourteenth century.
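A few lines of Python make the divergence tangible: the partial sums grow without bound, though only logarithmically, gaining roughly 2.3 per factor of ten in the number of terms:

```python
def harmonic(n):
    """Partial sum 1 + 1/2 + 1/3 + ... + 1/n of the harmonic series."""
    return sum(1.0 / k for k in range(1, n + 1))

# The sums creep upward forever, never settling toward a limit:
for n in (10, 100, 1000, 10000):
    print(n, round(harmonic(n), 4))
```

The slowness of the growth is why the divergence fooled so many before Oresme's argument settled the matter.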

There are numerous examples of divergent series, but convergent series are far more interesting. Some examples of convergent series are given below. The second example, called the alternating harmonic series, is the convergent variant of the harmonic series.

1 + (1/2) + (1/4) + (1/8) + (1/16) + (1/32) ... = 2

1 - (1/2) + (1/3) - (1/4) + (1/5) - (1/6) ... = ln(2)

1 - (1/3) + (1/5) - (1/7) + (1/9) - (1/11) ... = π/4

1 + (1/4) + (1/9) + (1/16) + (1/25) + (1/36) ... = π²/6

This relationship of the mathematical constant, pi, to just the odd numbers in one case is quite perplexing, as is the relationship of pi-squared to the squares of the natural numbers in the other case. Although pi can be computed from these series, they converge rather slowly, so it takes a long time to get a good value for pi. Another series, developed by the brothers, David Chudnovsky and Gregory Chudnovsky, delivers about fourteen digits of pi per calculated term, but it's too complex to include in this article.
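The slow convergence is easy to verify. This sketch sums the odd-number series above and multiplies by four:

```python
import math

def leibniz_pi(terms):
    """Approximate pi from 4 * (1 - 1/3 + 1/5 - 1/7 + ...)."""
    total = 0.0
    for k in range(terms):
        total += (-1.0) ** k / (2 * k + 1)
    return 4.0 * total
```

Even after a thousand terms the result is only good to about three decimal places, which is why such series are elegant but impractical for computing pi.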

The trigonometric functions can be calculated using series expressions, as follows:

sin(x) = x - (x³/3!) + (x⁵/5!) - (x⁷/7!) ...

cos(x) = 1 - (x²/2!) + (x⁴/4!) - (x⁶/6!) ...

tan(x) = x + (x³/3) + (2x⁵/15) + (17x⁷/315) ...

The last series is valid only for x values of magnitude less than π/2. The values of x, of course, are in radians. There is a method to the madness of the tangent expression, relating to Bernoulli numbers, but it's too complicated to recite here. It all goes to prove that several hundred years of thought by many scholarly mathematicians can lead to important, but sometimes complicated, results [1-2].
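These expansions are how trigonometric tables were once computed by hand, and they still make a nice exercise. Here's a sketch of the sine series in which each term is derived from the previous one, avoiding any explicit factorials:

```python
import math

def sin_series(x, terms=10):
    """Evaluate sin(x) = x - x^3/3! + x^5/5! - ... using 'terms' terms.
    Each term is the previous one times -x^2 / ((2k+2)(2k+3))."""
    total, term = 0.0, x
    for k in range(terms):
        total += term
        term *= -x * x / ((2 * k + 2) * (2 * k + 3))
    return total
```

For x of moderate size, ten terms already agree with math.sin to machine precision.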

1. Series (Wolfram Math World).
2. On-Line Encyclopedia of Integer Sequences.

August 13, 2008

Quantum Weirdness

Some people would say that the phrase, "quantum weirdness," is an oxymoron. Richard Feynman, the Nobel Prize winning physicist, stated, "I think I can safely say that nobody understands quantum mechanics [1]." Feynman, who received the Nobel Prize for work in quantum electrodynamics, included himself in this assessment. Just when physicists were getting accustomed to the idea that quantum mechanics expressed probabilities, Einstein and his colleagues introduced what's now called the EPR Paradox, demonstrating what Einstein called "spooky action at a distance." Einstein would be surprised to find that his "spooky action at a distance" is now being used to technological advantage in quantum cryptography.

Over the years, there have been many skirmishes between physics and Information Theory. This is because information and entropy are directly related, as shown by Boltzmann's famous equation

S = k ln(Ω)

in which S is the entropy, k is the Boltzmann constant, and Ω is the number of possible microstates of the system. The number of possible states of a system is essentially its information content. In the mid-1970s, work by Stephen Hawking and Jacob Bekenstein uncovered a fundamental problem with black holes. Black holes appear to irreversibly absorb information from the universe, so the universe, as "closed" a thermodynamic system as there can be, would violate classical thermodynamics.

Graeme Smith of IBM's Thomas J. Watson Research Center and Jon Yard of Los Alamos National Laboratory have just published a preprint [2-3] of a paper on the capacity of a quantum information channel. As communications engineers know, the signal-to-noise ratio of a transmission system, whether based on a piece of wire, a radio channel, a fiber optic cable, or another communications medium, determines how fast information can be sent through it, a quantity known as the channel capacity. A major advance in communications has been the development of error-correcting codes [4] that let practical systems approach the theoretical channel capacity [5]. For these error-correcting codes to work, there needs to be some information transfer in the channel. Smith and Yard's counterintuitive result is a calculation showing that combining two quantum information channels of zero capacity can yield a channel with a nonzero capacity. It's a lot like adding zero to zero and getting one, but who thinks quantum mechanics makes sense?
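For a classical channel, the relationship between signal-to-noise ratio and capacity is Shannon's famous formula, C = B log₂(1 + S/N). A quick sketch with illustrative numbers (a generic voice channel, not anything from the Smith-Yard paper):

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley capacity, in bits per second, of an AWGN channel."""
    return bandwidth_hz * math.log2(1.0 + snr_linear)

# e.g., a 3 kHz telephone channel at 30 dB signal-to-noise ratio
snr = 10 ** (30 / 10)                 # convert dB to a linear power ratio
capacity = shannon_capacity(3000, snr)  # roughly 30 kbit/s
```

For a classical channel, a zero-capacity channel carries nothing, and two of them together still carry nothing; that's what makes the quantum result so strange.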

1. Richard Feynman Quotations (Wikiquote).
2. The physics arXiv blog, "Quantum communication: when 0 + 0 is not equal to 0" (arXiv, August 3, 2008).
3. Graeme Smith and Jon Yard, "Quantum Communication With Zero-Capacity Channels" (arXiv Preprint, July 30, 2008). Available as a PDF file here.
4. T. Richardson and R. Urbanke, "The renaissance of Gallager's low-density parity-check codes," IEEE Communications Magazine, vol. 41, no. 8 (August, 2003), pp. 126-131.
5. C. E. Shannon, "A Mathematical Theory of Communication," Bell Syst. Tech. J. vol. 27 (July and October, 1948), pp. 379-423 and 623-656.

August 12, 2008

Face Swapping Privacy

In the distant past, when photographs were still recorded chemically, on film, professional photographers would carry with them a pad of model release forms. These forms were a general waiver of any privacy rights the photographed subject may hold, or any claim for monetary compensation the photographed subject may assert for use of his image. Newsworthy photographs are generally exempt from a need for such a release, so photographers don't typically need a release for an image, for example, of a government official obtained during the performance of his official duties. The paparazzi have extended this principle to include celebrities.

The number of photographs has increased phenomenally with the advent of the digital era, since the cost of taking a photograph is essentially zero. As a result, the number of photographs published on the internet is huge. One popular photo-sharing website, Photobucket, has more than 5.7 billion images posted online. Many of these are family photographs, so no model release is expected to be required. Some of the images are graphics borrowed from other web sites. Many, however, were taken in public places, and, because of the high resolution of many digital photographs, some of the faces of the people in them are readily identifiable.

Photography of people in public places without their consent is permitted in most circumstances under US law, but this is not true in other countries. This problem has come to the fore with the development of Google's Street View mapping web site, which gives 360-degree viewable images of street scenes of major cities, complete with images of pedestrians. Before launching Street View in Europe, Google plans to develop software to automatically blur people's faces and the license plate numbers of vehicles [1]. Of course, blurred faces may not detract from Street View's purpose, but they are aesthetically displeasing in other photographs. Can anything be done in today's litigious society to protect a photographer from liability while preserving the aesthetic quality of his work?

A team of computer scientists from Columbia University has addressed this problem with computer image manipulation [2-4]. Their software identifies faces in images and decomposes them into their principal parts (eyes, nose, and mouth) and particular color tones. It then selects appropriate substitutes from a pool of stock photographs, which at this time is a database of 33,000 images from photo-sharing web sites. The software selects matches from the database according to pose and lighting conditions. Further computation matches skin tone, using a recoloring and relighting algorithm, and the program blends the parts into a single face. The authors validated their approach with a user panel, and there are some examples in their research paper [4].
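The authors' recoloring and relighting algorithm is more sophisticated than anything I can reproduce here, but the spirit of the color-matching step can be illustrated with a crude stand-in: shifting the per-channel mean and spread of one face region to match another's. This NumPy sketch is my own illustration, not the Columbia team's method:

```python
import numpy as np

def match_color_stats(source, target):
    """Shift source's per-channel mean and spread to match target's.

    A crude stand-in for a recoloring step: both inputs are float
    arrays of shape (height, width, 3), with values in 0-255.
    """
    out = np.empty_like(source, dtype=float)
    for c in range(3):
        s, t = source[..., c], target[..., c]
        scale = t.std() / (s.std() + 1e-8)  # match the channel's spread
        out[..., c] = (s - s.mean()) * scale + t.mean()  # and its mean
    return np.clip(out, 0.0, 255.0)
```

A real implementation would work in a perceptual color space and restrict the statistics to the skin region, but even this simple moment matching goes a surprisingly long way toward making a pasted-in face blend with its surroundings.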

This approach can be used for purposes other than obfuscation. An individual's digital photo collection may have many images of the same people, such as family members. A modified program can put a desired expression onto a face in a desired pose or in a photograph at a specific location. As I discussed in a previous article (In the Blink of an Eye, January 5, 2007), eye blinks are a problem in group photographs, but they're no problem with this software. A quick series of photographs is all that's needed to produce a single photograph with no eye blinks.

1. Robert McMillan, "Google working to make Street View anonymous" (InfoWorld, November 30, 2007).
2. Swapping facial features protects online privacy (New Scientist Online, July 28, 2008).
3. Kevin Kelly, "Face Swapper Privacy" (July 29, 2008).
4. Dmitri Bitouk, Neeraj Kumar, Samreen Dhillon, Peter Belhumeur, and Shree K. Nayar, "Face Swapping: Automatically Replacing Faces in Photographs" (2008 SIGGRAPH Conference, August 10-15, 2008, To Be Presented).

August 11, 2008

Core Beliefs

Nearly all of chemistry is based on the fundamental assumption that only a subset of an atom's electrons participate in chemical bonding and reactions. These electrons are the outer shell, or valence electrons. The rest of the electrons, called the core electrons, are necessary for maintaining the electrical charge neutrality of isolated atoms, but they don't participate in any chemistry. There are some notable exceptions. Xenon, a noble gas, has a fully populated outer shell of electrons. In theory, all of xenon's electrons are core electrons, but the existence of compounds of xenon, such as xenon tetrafluoride (XeF4), demonstrates that core electrons will participate in chemistry if the conditions are right. In the case of XeF4, it's the high electronegativity of fluorine that encourages the xenon electrons to react.

Of course, the chemistry we are accustomed to generally occurs near room temperature and pressure. Lithium is a metal at ambient temperature and pressure, but compressing lithium to between 3 and 150 GPa at a temperature of 1000 K results in a ten-fold decrease in its conductivity. Computer simulations of lithium by Isaac Tamblyn and Stanimir Bonev of Dalhousie University, and Jean-Yves Raty of the University of Liege, have explained this and other unusual properties of lithium under pressure. They've found that the core electrons participate in lithium chemistry [1-2].

Using a supercomputer as their pressure vessel, they started with molten lithium under ambient conditions. As pressure was applied, tetrahedral bonds were found to form between lithium atoms in the liquid. Since lithium has only three electrons in toto, one valence and two core, we should expect only singly-bonded clusters of lithium (Li2). A tetrahedral coordination implies sp3 hybrid orbital bonding, leading to the idea that the lithium core electrons are also involved. All this happens when lithium is so compressed that it occupies just two-thirds of the volume it has at room temperature and pressure. Tamblyn, Bonev and Raty's research will appear as a paper in an upcoming issue of Physical Review Letters. Experimental confirmation will be a long time in coming!
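The tetrahedral coordination that signals sp3 bonding has a characteristic geometry: the angle between bonds from the central atom satisfies cos(θ) = -1/3. A one-liner recovers the familiar number:

```python
import math

# The ideal sp3 tetrahedral bond angle follows from simple geometry:
# the angle theta between any two bonds satisfies cos(theta) = -1/3.
tetrahedral_angle = math.degrees(math.acos(-1.0 / 3.0))  # about 109.47 degrees
```

This is the same angle found between the C-H bonds of methane, which is why tetrahedral clustering in a metal like lithium is such a surprise.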

1. Davide Castelvecchi, "Chem 101" (Science News Online, August 1, 2008).
2. Isaac Tamblyn, Jean-Yves Raty and Stanimir A. Bonev, "Tetrahedral clustering in molten lithium under pressure" (arXiv Preprint, 19 May 2008).

August 07, 2008

Microsoft's Telescopic Display

Microsoft is known as a software company, but it's had some forays into hardware production, the most notable of which is the Xbox. Microsoft also sells such computer peripherals as mice, joysticks, keyboards and game controllers. Hardware production is generally outsourced. Of course, displays are important to most computer products. Energy efficient displays are important for portable applications, and the ratio of mobile to desktop displays has increased substantially in just the past few years. All this has apparently piqued Microsoft's interest in funding research into a novel display. One of its employees, Anna Pyayt, did the research as part of her Ph.D. dissertation at the University of Washington, and she was aided in device development by two Microsoft engineers, Gary Starkweather and Michael Sinclair. The research was funded by Microsoft, which retained patent rights.

Most of the energy in a backlighted display is wasted. Liquid crystal displays transmit just a little more than 5% of the backlight, since their required polarizers can transmit only 50% of the light, the color filters transmit only 30% of the remainder, and there are losses from interface reflections. MEMS-based light shutters are a little better, transmitting about 10% of the backlight. The Microsoft display transmits about 35% of the backlight, so it's considerably more efficient. How can there still be anything new under the sun (or in front of the backlight) in display technology, a field that's been worked over for decades? One thing about optics is the breadth of the field and the diversity of optical elements and their possible arrangement. The Microsoft design uses a reflection principle by placing a miniature version of a Cassegrain reflector, a telescope well known to astronomers, at each pixel location.
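The LCD throughput budget is simple multiplication; the residual loss factor below is my own assumption, chosen only to illustrate how the quoted losses compound to the article's "a little more than 5%" figure:

```python
polarizer = 0.50      # polarizers pass at most half of the backlight
color_filter = 0.30   # color filters pass about 30% of what remains
other_losses = 0.37   # assumed net factor for reflections and aperture losses

lcd_throughput = polarizer * color_filter * other_losses
print(f"LCD throughput: {lcd_throughput:.1%}")
```

Against this roughly 5% baseline, the telescopic pixel's 35% transmission is a seven-fold improvement.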

There are two micromirrors at each pixel location. A stationary secondary mirror receives light from a hole in a deformable primary mirror. The primary mirror is a suspended circular metal membrane, 100 micrometers in diameter and 100 nanometers thick, with a hole in its center. The primary mirror's shape is changed by an electrostatic field, and it either directs light back through its hole, after reflection by the secondary mirror, or outwards, around the secondary mirror. The prototype display had a contrast ratio of only 20:1, since the backlight was not collimated, but computer simulations indicate contrast ratios approaching 1000:1. One advantage of this optical system is that the pixels can be made to overlap, and a diffuser would allow a wide viewing angle. Other advantages are a 1.5 ms response time and compatibility with the existing LCD production process.

1. Monica Heger, "Microsoft Engineers Invent Energy-Efficient LCD Competitor" (IEEE Spectrum, July, 2008).
2. Anna L. Pyayt, Gary K. Starkweather and Michael J. Sinclair, "A high-efficiency display based on a telescopic pixel design" (Nature Photonics, July 20, 2008).

August 06, 2008

Simon Stevin

The plant, stevia, is the basis of a non-sugar sweetener that's been used in many countries for many years. Extracts of stevia have a sweetness hundreds of times that of sucrose. Stevia's safety seems to be confirmed by the fact that millions of Japanese have used it for many decades with no adverse effects. However, the US Food and Drug Administration (FDA) banned stevia as an unsafe food additive in 1991, saying that sufficient toxicological data did not exist. Stevia has been in the news recently because of plans by PepsiCo to launch a line of stevia-sweetened drinks in Latin America, and a push from both PepsiCo and Coca-Cola for FDA approval of stevia sweeteners in the US [1-2]. One aid to industry acceptance of stevia is that it's unencumbered by patents. Although stevia is interesting in its own right, I mention stevia because this recent news reminded me of Simon Stevin (a.k.a., Simon Stevinus, c. 1548 - 1620), an important Flemish mathematician and scientist.

Not much is known about Stevin's life aside from his work. He was regarded highly enough to have a statue erected in his memory in Simon Stevin Square in Bruges, the capital of West Flanders, Belgium. Stevin was most noted in his lifetime for his invention, around 1600, of the land yacht, essentially a carriage propelled by a sail, which achieved speeds greater than those of a horse-drawn carriage.

Stevin's discoveries in the field of physics are impressive. He demonstrated that objects fall with the same acceleration, independent of their weight, half a century before Galileo's publication. However, the Leaning Tower of Pisa story is more entertaining, so this discovery is associated with Galileo. Stevin used the inclined plane as a model system for demonstrating the difference between stable and unstable equilibria. He discovered the hydrostatic paradox, that the pressure at the bottom of a liquid-containing vessel depends only on the base area and height of the vessel, and not on its shape. Stevin was also the first to explain that the tides arise from an attraction by the moon. This is quite amazing, since universal gravitation was explained only in 1687, in Newton's Philosophiae Naturalis Principia Mathematica, sixty-seven years after Stevin's death.
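The hydrostatic paradox is captured in one formula, p = ρgh, in which the vessel's shape never appears. A trivial sketch:

```python
RHO_WATER = 1000.0  # density of water, kg/m^3
G = 9.81            # gravitational acceleration, m/s^2

def bottom_pressure(height_m):
    """Gauge pressure at the bottom of a liquid column: p = rho * g * h.

    Stevin's paradox: the vessel's shape, and hence the total weight
    of liquid it holds, does not appear in the formula.
    """
    return RHO_WATER * G * height_m

# A narrow tube and a wide cone, both 1 m tall, press on their bases
# with the same pressure, about 9.81 kPa.
```

This is why a thin standpipe can burst a barrel: the pressure depends on the height of the water column, not on how much water it contains.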

Stevin's major contribution to mathematics was the decimal notation system, published in Dutch in 1585. His notation for decimal fractions was clumsy, since each digit was identified not by its number place, but by an exponent of ten written in a circle. The decimal point and the idea of number place identifying powers of ten were invented just a few years later. Stevin was the first Western author to mathematically explain the equal temperament of musical scales. Another prominent mathematician of his era, Marin Mersenne, famous for the Mersenne primes, published on this same topic in his Traité de l'harmonie universelle (1636), years after Stevin's death.
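In equal temperament, the frequency ratio of each semitone is the twelfth root of two, so twelve semitones compound to an exact octave. A short sketch (the helper name is mine):

```python
def equal_tempered_frequency(semitones_above_a4, a4_hz=440.0):
    """Frequency n semitones above A4; each semitone multiplies by 2**(1/12)."""
    return a4_hz * 2.0 ** (semitones_above_a4 / 12.0)

# Twelve semitones recover the exact octave: 440 Hz -> 880 Hz.
```

The price of this uniformity is that no interval other than the octave is acoustically pure, which is exactly the trade-off Stevin was the first in the West to work out mathematically.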

One belief Stevin held was that our ancestors had knowledge that's been lost to history. This belief goes back to the ancient Greeks, who believed that there was a Golden Age in mankind's distant past. One of Stevin's life-goals was to recover this ancient wisdom. He had the strange theory that our wise ancestors must have spoken Dutch, since he proved that Dutch is the most efficient language for communicating ideas. There are more monosyllabic words in Dutch than in any other European language. If only he had known the Klingon Language [3]!

1. Adam Voiland, "The Zero-Calorie Sweetener Stevia Arrives" (US News and World Report, July 28, 2008).
2. Betsy McKay, "Beverage Wars Take On New Flavor" (Wall Street Journal, July 31, 2008).
3. Google Search in Klingon. jlSuDrup.
4. Simon Stevin (Wikipedia).

August 05, 2008

Splitting Water, MIT Style

In a recent article (Efficient Photovoltaics, July 21, 2008), I reported on an MIT approach for fabrication of efficient photovoltaics. Harvesting energy from the sun is attractive, since there is no carbon emission and no pollution, aside from what may happen in the manufacture of the particular solar collector (an important factor that should be considered in all alternative energy calculations). There's plenty of solar energy available, since the solar constant, the integrated amount of solar energy incident on the Earth at all wavelengths, is a huge 1366 watts per square meter. The actual energy available at ground level is somewhat less, since weather conditions play a factor. Also, there's a variation of nearly 7% over the course of the year, since the Earth's orbit is not circular. The solar constant is about 1412 W/m² when we're closest to the sun, and still a respectable 1321 W/m² when we're farthest. The problem, of course, is being able to harvest this energy, which is present at a broad range of wavelengths, and store it until it's needed.
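The perihelion and aphelion values follow directly from the inverse-square law and the eccentricity of Earth's orbit:

```python
S_MEAN = 1366.0  # W/m^2, mean solar constant at one astronomical unit
ECC = 0.0167     # eccentricity of Earth's orbit

# Irradiance scales as 1/r^2; perihelion distance is (1 - e) AU,
# aphelion distance is (1 + e) AU.
s_perihelion = S_MEAN / (1.0 - ECC) ** 2  # ~1412 W/m^2, in early January
s_aphelion = S_MEAN / (1.0 + ECC) ** 2    # ~1321 W/m^2, in early July

variation = (s_perihelion - s_aphelion) / S_MEAN  # ~6.7% peak to peak
```

It's a small irony of the orbit that the Earth as a whole receives the most sunlight during the northern hemisphere's winter.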

A team from MIT, led by Professor Daniel Nocera, has tackled the major problem of a photovoltaic system; namely, how to store energy for use when the sun isn't shining. They do this by splitting water into its constituent elements, hydrogen and oxygen [1-4]. One way to accomplish this is the direct photolysis of water. Photolysis itself is nothing new, since Nature has been doing it since long before the ascent of man. Considerable research into technologies for photolysis has been conducted since my graduate school days, when titanium dioxide and dye-sensitized titanium dioxide were popular materials. The key to the direct photolysis of water is a catalyst that helps the solar radiation overcome the high activation energy of the dissociation reaction. After more than thirty years of research on the direct photolysis of water, we still aren't seeing deployment of any such systems, but we are seeing a lot of photovoltaic systems.

Nocera's MIT team decided to keep photovoltaics as they are, but to enable an efficient storage system based on the electrolysis of water. Electrolysis of water is not as easy as it appears to be in textbooks. To electrolyse water efficiently, a highly basic solution is required, as are platinum electrodes. Nocera's team developed a catalyst that allows electrolysis at nearly perfect efficiency at room temperature and a neutral pH. The catalyst is formed from a solution of cobalt and phosphate that's deposited onto the oxygen electrode (anode) during electrolysis. The oxygen electrode is made from indium tin oxide, and platinum is still used at the hydrogen electrode (cathode). Of course, patent applications have been filed. This research appears as an article in Science [5].
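The textbook part of the problem is the thermodynamic floor: the minimum (reversible) cell voltage follows from the Gibbs free energy of water splitting, E = ΔG/(nF). A quick check:

```python
FARADAY = 96485.0  # C/mol, the Faraday constant
DELTA_G = 237.1e3  # J/mol, Gibbs free energy to split one mole of liquid water
N_ELECTRONS = 2    # electrons transferred per H2 molecule produced

# Minimum reversible cell voltage: E = dG / (n * F), about 1.23 V
e_min = DELTA_G / (N_ELECTRONS * FARADAY)
```

Real electrolyzers must apply an overpotential well above this 1.23 V to drive the reaction at a useful rate; a good catalyst, like the cobalt-phosphate one here, is what shrinks that overhead.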

There are several problems with this process, not the least of which is the need for a system to store the hydrogen and oxygen for eventual use, probably in a fuel cell. Another problem is the requirement for a platinum cathode (about $1,600 per Troy ounce). In a previous article (Indium, January 8, 2008), I mentioned a worldwide shortage of indium, which would impact wide scale implementation of this process unless another anode material can be used.

1. Scott Malone, "MIT develops way to bank solar energy at home" (Reuters, July 31, 2008).
2. R. Colin Johnson, "MIT claims 24/7 solar power" (EETimes, July 31, 2008).
3. Elizabeth Campbell, "MIT: We can make solar energy cheaper" (Boston Globe, July 31, 2008).
4. Anne Trafton, "Major discovery from MIT primed to unleash solar revolution" (MIT Press Release, July 31, 2008).
5. Matthew W. Kanan and Daniel G. Nocera, "In Situ Formation of an Oxygen-Evolving Catalyst in Neutral Water Containing Phosphate and Co2+" (Science Online, July 31, 2008).

August 04, 2008

Technology at the Olympics

The 2008 Olympics will begin on August 8, 2008, and they will run through August 24, 2008. I'm not at all interested. What began as well-intentioned sporting competition has been transformed into a media and advertising frenzy designed specifically to make a lot of money for a few people. In this same tradition is the Super Bowl, a single football game overblown into a world-stopping event fueled by advertisers willing to pay nearly three million dollars for a thirty-second commercial. The snack food industry has cashed in as well, selling to the nearly hundred million television viewers of this event, many of whom are at organized Super Bowl parties. At least the World Series is somewhat interesting, although I concur with the famous assessment that a baseball game is ten minutes of exciting action packed into two and a half hours.

I wrote about the Speedo LZR Racer swimsuit in a previous article (Speedo Fluid Dynamics, June 23, 2008). This swimsuit offers such a technological advantage to speed swimmers that it's essentially required apparel for anyone hoping to bring home an Olympic medal. The LZR Racer was designed using computational fluid dynamics (CFD), and CFD is also being applied to other sports involving fluid flow, such as rowing and cycling [1]. Cyclists have a choice of disk or spoked wheels; each has advantages and disadvantages, so CFD is being used to decide which is better. Cyclists face the same fluid problems as swimmers, plus others, such as the cross-sectional area presented to the air and the complex mechanical interaction of the cyclist and the bicycle. Such modeling makes CFD of swimmers seem easy.

Traditional performance filming has been brought up to date with advanced video analysis of players' movements. Dartfish USA Inc. has pioneered sports video analysis in such fields as track and field and gymnastics. Players and coaches can get a frame-by-frame analysis of performance, including overlays of previous trials to highlight differences. Underwater cameras are towed alongside swimmers to view form and style, and wireless medical monitoring of athletes is common.

Aside from these competitive aspects of sporting, there is technology that enforces a level playing field. Notably, the starter's gun is routed to individual speakers at the runners' starting blocks to eliminate the small delays caused by the finite speed of sound. Runners' starting blocks are also equipped with false-start detectors. Combined with some software, these detectors can correctly identify those runners who have "jumped the gun." Swimmers have a touch pad at the end of their swim to stop the clock for accurate timekeeping. RFID tags are attached to runners' shoes for timing and record-keeping. Not surprisingly, GPS will track marathon runners [1].
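The speed-of-sound delay is worth quantifying, since it's comparable to elite reaction times:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 C

def start_delay_ms(extra_distance_m):
    """Extra time, in ms, for the gun's report to reach a more distant runner."""
    return extra_distance_m / SPEED_OF_SOUND * 1000.0

# A runner 10 m farther from the starter would hear the gun about 29 ms
# later, a meaningful handicap when false starts are called at 100 ms.
```

Piping the gun's sound to a speaker behind each set of blocks makes the delay identical for every lane.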

The technologically-enhanced athlete reminds me of the film, Goldengirl (1979, Joseph Sargent, Director). I'll let the Internet Movie Database summarize the plot for you.

"A neo-Nazi Doctor tries to make a superwoman of his daughter, who has been specially fed, exercised and conditioned since she was a child, to run in the Olympics."

1. Ron Schneiderman, "The 2008 Technolympics," Electronic Design, vol. 56, no. 15 (July 24, 2008), pp. 33-40.