11 December 2008

Poster Boy

http://online.wsj.com/article/SB122903010173099377.html

Wow.

Oh yeah, GM and Chrysler are bankrupt, just like 80 % of US banks. Piffle.

03 November 2008

GM Sales Fall 45 % YoY; Average 31 % Drop

That's going to leave a mark! USA automobile sales are way, way down in October 2008 versus October 2007, as consumer sentiment and credit head South fast.
About 25 per cent of GM's volume in October 2007 was from leasing, but the auto maker did almost no leasing last month through GMAC, Mr. LaNeve said.
As I mentioned in the comments of this post, when you have to relax your credit requirements to sell cars, you're setting yourself up for trouble in the long run. I've actually been impressed by how well the USA has been holding up under the credit crisis thus far (compared to countries like Iceland, Hungary, Pakistan, etc.) but obviously if the steady drip-drip-drip of job losses continues, things will really come to a head. I definitely still stand by my prediction of the US financial sector shrinking by half. The US Treasury bail-outs to date are not solving the trust issue, as everyone continues to hoard cash. Tacking on $500 billion a month in federal debt isn't sustainable either.

I'm working towards my doctoral candidacy at the moment so posting will remain sparse until the middle of December.

16 October 2008

Demand Destruction

With oil now hovering around $70/bbl, we've seen a decline of roughly 50 % from this year's peak in only a couple of months. I've stated in the past that I felt the run-up starting in February was largely speculation, whereas the push from 2002-2007 was more fundamental. Is this new drop evidence that 2002-2007 was speculation too, or is it a fundamental move based on supply and demand? I'm going to argue here that this is, once again, a real move, based largely on demand destruction ongoing around the world, but in particular inside the United States of America.

Oil consumption in the USA started a downhill roll around December last year. In September, it had a Wile E. Coyote moment and rolled off a cliff. It's now in free-fall. Oil demand is highly inelastic in the short term, but there is some phase lag that results in more long-term elasticity. What I mean by that is, it takes a while for individuals and businesses to adapt to the oil price by drilling for more supply, replacing SUVs with econoboxes, driving slower, etc. But when demand drops fast, the price can drop fast too, because supply is short-term inelastic as well.

The latest EIA data estimates US petroleum consumption at 18,865,000 bbl/day. Compared to the same time last year, at 21,024,000 bbl/day, that's a drop of just over 10 % yoy. That's a really big deal. From a GDP-to-oil-consumption relation, it suggests the US is headed into a depression. Even during the 1970s oil shocks the US only experienced drops of 3-4 % a year in consumption.
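As a quick check on that figure, here's a minimal sketch using only the EIA numbers quoted above:

    # Year-over-year change in US petroleum consumption (bbl/day),
    # from the EIA estimates quoted above.
    consumption_2007 = 21_024_000
    consumption_2008 = 18_865_000
    drop = (consumption_2007 - consumption_2008) / consumption_2007
    print(f"YoY drop: {drop:.1%}")  # -> 10.3%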


Figure 1: EIA Weekly Oil Consumption (estimate). The chart shows the story: a gradual downturn and then a sudden drop.

That speculative oil bubble that appeared in February? Gone. Yes, Dorothy, a major recession in the country that consumes a quarter of the world's oil can cause a big drop in price. The USA isn't the whole story, but it is driving this price movement.


The story starts with the OECD countries, and extends to the BRIC (Brazil-Russia-India-China) 'growing' economies. Consumption was pretty flat from 2005-2007, and then it started dropping in 2008. Note that Figure 2 only extends to August, and the US demand dropped over a million barrels a day in September!

Note when looking at these IEA plots, they are all 12-month moving averages, so any moves you see in the curves likely started a few months earlier. Also note that EIA (US DoE) and IEA (France) data are not strictly equivalent, since they use different methods to come up with their estimates.

Oil demand in a number of big economies, e.g. Germany and Japan, already fell in response to oil prices last year; it is largely unchanged this year, or indeed up. France is already headed back up, as are a number of smaller countries. In North America, Canada's demand is flat and Mexico's is well up.

The big weakness in Eurozone oil consumption is coming from Italy and the UK. However, their drops are still minor in comparison to the story in the States. If we take a longer-term view, the IEA consumption data (Figure 5) suggest US demand was flat for 2005-2007, so perhaps this recent drop is just the USA making up for lost ground. Looking at US consumption and world consumption, the two graphs have very similar shapes.

Figure 5: IEA oil consumption data for USA. Future versions of this graph will show a sharp downturn in September.

The BRIC countries are all less transparent. As a consequence, releases of their numbers are even slower than the OECD results we see above, which are themselves a couple of months behind. In the same manner that the USA drives the OECD numbers and the other countries are just noise, China drives the BRIC numbers. Unfortunately, we don't have any numbers yet on the Olympic effect in China. Vehicle-kilometers in China were probably way down in August, and that's going to have a short-term impact on oil demand. China's skies are once again as smoggy as ever, so unless they've also changed their stockpiling strategy, the effect should prove transient.

Figure 6: China oil consumption (and projections) beside Vehicle Sales in China. Taken from Malcolm Shealy's (Alacrites Inc.) presentation hosted on IEA website.

So outside of a massive step up in China in August and September (which is unlikely), the only other likely variable to examine is oil supply. Production data (Figure 7) indicate supply is actually up this year. This isn't unexpected. It took a couple of years after oil took off in 2002 for the oil companies to gain enough confidence to jump into new projects, but a number of projects started in 2003-2004 are now up and running and contributing to supply. While the new oil price may choke off new projects, there are still a number in development to stave off the depletion of existing fields.
Figure 7: World oil plus condensates production, 2005-July 2008 (Data from EIA).

So, in conclusion, I think the supply and demand data show pretty conclusively that supply is up and demand is down, hence the price of oil is dropping like a rock. In fact, this is occurring at the same time as a commodity bubble is being popped, so the effect is especially strong. This is a good thing. The world economy, and the USA in particular, is hurting thanks to the bubblicious real-estate fiasco, and low energy prices will help conventional economic activities drag us back out of this big hole. I think the housing bust will be extended (adjustable-rate mortgage resets in the US are only about half-way through the pool of potential defaults) and the world economy is not decoupled from the USA, so expect fallout damage to hit exporting countries in manufacturing (China, Japan) and commodities (Canada, Russia, Australia) in waves going forward. The US consumer is going to have to live within his or her means, and hopefully revert to being a citizen first and consumer second.

Can OPEC squeeze the price by implementing quotas? Yes and no. Production of oil products is up big in 2008, so there's slack they can take out of the system. Will they? I don't know. OPEC countries in general have not been diversifying their economies, so they will have to do something or face the potential for unrest as revenues come back to earth. Basically it comes down to whether Saudi Arabia wants to arrest the fall in price, and whether Russia, as a non-OPEC producer, decides to sacrifice some cash flow. If I were the Saudis, I would be thinking about whether OPEC as an organization has outlived its usefulness. A Saudi-Russian oil alliance would probably be more practical and effective.

Looking into the future, it's likely that demand in the USA will be suppressed for a while, so investment in new capacity is going to drop, especially in conjunction with higher credit rates. Speculative plays in oil shale, gas-to-liquids, and coal-to-liquids are effectively dead. Remember that in order to develop alternative fossil fuels to oil, you need to not just cover the marginal cost of production but also amortize the capital costs, and the capital costs of non-conventional oil are usually very high. I would also expect new deep-water plays (Brazil, Atlantic Canada) to come to a grinding halt, but existing ones should go ahead as their marginal cost of production is not excessive.

The highest-marginal-cost oil in the world is Alberta bitumen, at around $50/bbl. That's not accounting for capital costs at all, which have recently been in the range of $150,000 per bbl/day of capacity. If prices fall lower than that, salaries will have to come down to reduce costs. Taking that capacity out of the world's supply would only knock off 1.3 million bbl/day. Alberta's provincial government may be facing the end of yet another oil boom with little to show for it.
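To see how much that capital intensity adds per barrel, here's a rough sketch; the 10 % cost of capital, 30-year project life, and 90 % uptime are my illustrative assumptions, not industry figures:

    # Rough capital charge per barrel for a new oil sands project.
    capex = 150_000.0                      # $ per bbl/day of capacity (from above)
    rate, years, uptime = 0.10, 30, 0.90   # assumed financing and operation

    # The standard capital recovery factor annualizes the up-front cost.
    crf = rate / (1 - (1 + rate) ** -years)
    capital_charge = capex * crf / (365 * uptime)
    print(f"~${capital_charge:.0f}/bbl on top of marginal cost")  # -> ~$48/bbl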

16 September 2008

The D-word

Hmm... so the price of housing and automobiles has been falling for a while now. Now that another wave of the credit crisis is causing a commotion, the associated margin calls seem to have deflated the commodities bubble. So are we going to see a general fall in prices everywhere?

Are we now in a deflationary period? I was certainly one who believed that the USA would do anything it could to inflate its debt away. It was the logical strategy given the general level of indebtedness at all levels of US society. However, the destruction of rent-seeking capital seems to be pervasive and occurring in massive quantities. The Austrian school certainly says that deflation is exactly what we should expect at the end of a credit-driven bubble. The huge amount of leverage involved and the general cross-connectedness of the Credit Default Swap (CDS) market seem to make the domino effect of financial corporation failures unstoppable. In fact, the US Federal Reserve seems to have largely lost control over prevailing interest rates.

This isn't like the inventory-driven recessions of the past thirty years. Structurally, it looks closest to the credit-driven run-up to the Great Depression in the late 1920s and the Austrian bank failures that followed.

Fortunately I don't think we are likely to have another coincidental Dust Bowl event (and farming is not quite as important anymore) to match the horrible times of the Great Depression. However, if the financial sector were to shrink by half, bringing it back in line with historical norms, that would certainly result in a GDP shrinkage of > 10 %, meeting the technical definition for a depression.

Canada isn't going to side-step the fallout on this one. Any large scale demand destruction in the USA will hurt producers up here badly. Stephen Harper was bright to call an election when he did, but he would be well advised not to stick his head in the sand. The USA is inevitably going to become more mercantilist so let's move to get ahead of that, shall we?

I continue to believe that the way out of this mess is a drive to shift the developed economies of the world away from fossil fuels and into renewable sources of electricity. Any such drive would generate a lot of high-quality jobs, present many R&D opportunities for the productive employment of capital, and achieve some enormous environmental and security side benefits.

03 September 2008

Nanoparticle LiFePO4 Batteries

Lithium-ion batteries based on the phospho-olivine crystal structure (i.e. LiMPO4, where M = {Mn, Co, Fe}) have been the subject of a great deal of research over the past decade. A recent paper in Nature Materials from Gibot et al. has demonstrated some of the developments in the area and I'd like to rehash them here [1]. The paper demonstrates fabrication of single-phase LiFePO4 with very small particle dimensions. It's not an "oh my god, what an amazing engineering development" paper, but rather one of scientific interest, elucidating the difference between two-phase and single-phase Li-ion batteries.

LiFePO4 thus far seems to be the most impressive performer, especially from a safety perspective. It is produced entirely in solution (e.g. a beaker) by a chemical recipe. I don't know what the yields are like but the nature of the production method implies that it can be undertaken in large vats on an industrial scale.

Unfortunately, it suffers from poor conductivity characteristics. Two main approaches have been used to improve the conductivity of LiFePO4: (1) coating the particles with a thin layer of amorphous carbon, and (2) manufacturing the LiFePO4 in the form of small nanoparticles (~ 40 nm average axial dimension). Of course, both approaches can be combined.

Adding carbon improves the conductivity but adds an "electrochemically inactive" layer to the cathode material, hence reducing performance by adding dead weight to the battery. One can imagine that when you take a mass of Li-ion particles and sinter them together to form the cathode, if they've all been coated with carbon then there's an electronically conductive pathway from any buried particle to the electrolyte.

For the nanoparticle approach, presumably the higher ratio of surface area to volume reduces the ion diffusion length, but the literature suggests that the introduction of defects into the nanoparticles may also improve conductivity. I know from experience that stacking faults (such as twins) can act as diffusion pathways for reaction species in solid-state reactions.

Normally the cathode material is really LixFePO4, where x = 0.5-0.75. This is (at least partially) a result of the boundary between crystallites being composed of an extremely lithium-poor phase (x ~ 0.03). The primary advance shown in the Gibot paper is that they made the nanoparticles small enough that only a single phase is found in each particle. To explain, if you are familiar with the difference between monocrystalline and polycrystalline silicon solar cells, the sub-40 nm LiFePO4 nanoparticles are monocrystalline. Particles in the range of 100 nm are polycrystalline and hence have the lithium-poor phase present at the boundaries of each crystallite. Note that there's no fundamental electrochemical advantage to the monocrystalline approach as far as I know.

Figure 1: Potential-capacity and capacity-power curves for nanoparticle LiFePO4 (reprinted from [1]). In the top figure, 'C' marks a charge curve and 'D' a discharge curve for carbon-coated LiFePO4 nanoparticles, with the accompanying number giving the rate (a rate of 2 corresponds to a full charge or discharge in half an hour); I assume '2D' is therefore the thirty-minute discharge curve.

I infer from the paper that a big difference here seems to be in the lithium loading. Gibot et al. showed by a variety of methods that their nanoparticles were loaded with more lithium (x = 0.82-0.92). However, their discharge performance curves aren't actually more impressive than those of existing LiFePO4 batteries with larger particles. Existing batteries have flatter discharge curves from what I've seen.

The real advantage of these monocrystalline nano-LiFePO4 particles is likely to be reversibility. As I've discussed previously, the volume of the crystal changes from lithium insertion to deinsertion. This introduces strain into the crystal, and after many cycles defects will form and degrade performance. However, in a monocrystalline material there's not a lot to break. The nano-LiFePO4 does have some substitution defects (Fe where Li should be and vice versa) but without the crystal boundaries the defect density is likely to be lower overall.

Another potential advantage of the nanoparticle approach is that it requires a much lower process temperature (108 °C, versus 500 °C over 24 hours for the traditional approach). That should make the manufacturing process less energy intensive and less expensive.

[1] P. Gibot et al., "Room-temperature single-phase Li insertion/extraction in nanoscale LixFePO4", Nature Materials 7 (2008): 741-747.

21 August 2008

On the Topic of That Oil Bubble

About that oil bubble I was talking about a few months ago:

One trader held 11% of Nymex oil contracts: report

Hmmm...

I find it hilarious that almost all the big-wig economists around were proclaiming there was no bubble (e.g. Krugman, JDH). I think it was pretty obvious, as soon as NYMEX futures volume exploded in February 2008 and the successive forward prices of oil futures switched to all increasing rather than decreasing (as they were 2002-2007), that we were seeing some serious speculation games.

29 July 2008

Phase-Change Thermal Storage Materials

Energy consumption for buildings can be divided into four general categories: electricity for devices and appliances, hot water, space heating, and space cooling. Of these, only electricity needs to be provided continuously. The applications that require heat do not really need to be served immediately, since some fluctuation can be tolerated. Thus these applications are potentially a well of deferrable demand that can be used to compensate for the intermittent nature of renewable power sources.

A very large proportion of the energy budget for a home goes into space heating & cooling and hot water. According to EREN's Buildings Energy Data Book, 55.2 % of residential energy consumption goes into these big three. Given that residential use is about 20 % of the energy pie, that suggests thermal storage could transform about 10 % of our total energy requirements (or ~ 15 % of electricity production) into deferrable demand. That's a big hunk, and it would provide a ton of breathing room to renewable power. Commercial and industrial uses of thermal storage are likely to come before residential, and they would add further deferrable capacity on top of that.
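The arithmetic behind that estimate, as a sketch (the 55.2 % figure is EREN's; the 20 % residential share is the round number from the text):

    # Fraction of total energy use that residential thermal storage
    # could turn into deferrable demand.
    residential_share = 0.20   # residential slice of total energy use
    big_three = 0.552          # heating + cooling + hot water (EREN)
    print(f"~{residential_share * big_three:.0%} of total energy")  # -> ~11%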

Of course, we as humans don't really like our nice cozy interior environment to have boomeranging temperatures controlled at the whim of the power utility. A potential solution is to introduce some thermal storage on-site which can act as a reservoir of heating or cooling. I have previously written that the benefits of thermal storage are underwhelming next to increased insulation, and that remains largely true. However, newer thermal storage media are looking more impressive. Furthermore, any dwelling needs some level of air exchange to flush odors and CO2, and thermal storage can be retrofitted without completely gutting the interior of a house or apartment block.

Outside of people living in off-grid housing, there currently isn't any real incentive to install such equipment. However, if we look into the future of electricity production, the difficulties solar and wind face with intermittency loom large. The key prerequisite to making thermal storage workable is a regulatory structure that pays a premium to electricity consumers who are capable of deferring their demand to some later time (say, a range of 1-4 hours) as a service to the electrical utility.

For any thermal storage medium, one wants a material with a high heat capacity, so that the energy density is high. In addition, one generally wants a material with high thermal conductivity, so that the power (Watts) that can be applied or extracted is high. Last but most important, the material has to be inexpensive.

In order to get an extremely high effective heat capacity, it is often useful to find a material that has a phase change (i.e. solid to liquid) near the desired operating temperature. The transition used is always solid to liquid, because gases just don't have the desired density.

For example, the amount of energy required to freeze water is really quite amazingly high. If we were to build a water tank for cooling applications and ran it from 1 °C to 16 °C, we would have an energy density of 4.184 kJ kg-1 K-1 · 15 K = 62.8 kJ/kg. By way of comparison, the heat of fusion for water is 333.6 kJ/kg, or the equivalent of heating water by almost 80 °C. If we freeze that water and operate from -1 °C to 14 °C, the stored energy density rises to 396.3 kJ/kg, an improvement of 530 % even though ΔT remains identical.
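A minimal sketch of that comparison, mirroring the simple calculation above (it ignores the lower heat capacity of ice below 0 °C, as the text does):

    # Stored energy per kg of water, with and without crossing 0 °C.
    cp = 4.184          # kJ/(kg·K), heat capacity of liquid water
    h_fusion = 333.6    # kJ/kg, latent heat of fusion of ice

    sensible = cp * 15              # 1 °C to 16 °C, no phase change
    with_ice = h_fusion + cp * 15   # -1 °C to 14 °C, crossing 0 °C
    print(f"{sensible:.1f} kJ/kg vs {with_ice:.1f} kJ/kg")  # 62.8 vs 396.4
    print(f"improvement: {with_ice / sensible - 1:.0%}")    # ~530%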


Figure 1: Enthalpy of Water from -25 °C to 125 °C.

By operating across a phase change, one needs less thermal storage medium and a smaller tank, which is an economic advantage. It also allows one to store more heat across a given temperature gradient, which provides a boost to the efficiency of the heat engine supplying the heating or cooling.

We can classify phase-change materials into three general categories depending on their application:
  1. (0-15 °C) Space cooling and refrigeration.
  2. (40-65 °C) Space heating and hot water.
  3. (> 300 °C) Thermal storage for electrical power plants (i.e. concentrating solar thermal).
Both the space cooling and heating categories essentially fulfill the same function: storing energy at the residential or commercial level. Thermal storage for power plants is a slightly different issue. Briefly, if you overlaid a graph of electricity demand on one of solar radiation, you would notice a phase delay of about two hours from peak sunlight to peak demand. Thus, to make a solar-thermal power plant capable of 'peaking' (i.e. providing the expensive electrical power capacity above base-load) you need a little bit of storage, just enough to cover 1-4 hours. For this, molten salts provide the best mechanism proposed to date.

There are a number of general categories of materials for phase-change thermal applications: organic materials (typically oils and waxes), water and hydrated salt solutions, and salts. Organic compounds and hydrated salts serve the two low-temperature categories, while salts cover the high-temperature regime. Table 1 lists some representative materials.

Table 1: Properties of candidate phase-change materials.

Material                      | Melting Point (°C) | Sensible Heat (kJ kg-1 K-1) | Latent Heat of Fusion (kJ/kg) | Thermal Conductivity (W m-1 K-1)

Space Cooling Materials
Water - H2O                   | 0           | 4.2  | 334     | 2.18 (ice)
Paraffin C14                  | 4.5         | -    | 165     | -
Polyglycol E400               | 8           | -    | 99.6    | 0.187
ZnCl2·3H2O                    | 10          | -    | 253     | -

Space Heating Materials
Paraffin C22-C45              | 58-60       | -    | 189     | 0.21
Na(CH3COO)·3H2O               | 58          | -    | 264     | -
NaOH                          | 64          | -    | 227.6   | -

Electrical-quality Heat Storage Materials
31.9 % ZnCl2 + 68.1 % KCl     | 235         | -    | 198     | 0.8
NaNO3                         | 310 (d 380) | 1.82 | 172     | 0.5
KNO3                          | 330 (d 340) | 1.22 | 266     | 0.5
38.5 % MgCl2 + 61.5 % NaCl    | 435         | -    | 328     | -
NaCl                          | 800         | -    | 463-492 | 5

One thing that really stands out in the literature on phase-change materials is how poorly characterized so many materials are. A great number of salts (high temperature) or hydrated salts (lower temperatures) form eutectics with other salts, allowing hybridization of thermal properties. A eutectic is a mixture of two materials at a specific composition that melts and freezes at a single temperature, lower than the melting point of either pure component. Hence the number of potential permutations is enormous. The field of organic materials is similarly enormous.

In the case of the salts, the phase change materials are highly corrosive, so it would be poor design practice to use them as the working fluid. Rather, one uses a common well-established working fluid (such as water). On the other hand, hot water is pretty corrosive as well, while oils generally are not.

Now, if we go back to the original criteria for thermal storage, recall we want both high heat capacity for energy, but also high thermal conductivity to provide power. If we compare the thermal conductivity of copper (400 W m-1K-1) to that of phase-change materials, we see that the thermal storage materials are not very conductive of heat.

The obvious solution is to build some sort of composite material where you have a high-thermal-conductivity lattice paired with a phase-change material for heat storage. The simplest example would be a water tank equipped with aluminium fins. For molten salts, this becomes more challenging, as the lattice material has to be refractory and must not react with the molten salt. The ideal choice is typically carbon, which pairs strong covalent bonds with exceptional thermal conductivity. Graphite has one of the highest thermal conductivities (around 1950 W m-1K-1) of any material around (exception: superfluid helium), but only along the plane of the sheets.

In 2000, Fukai et al. proposed using a structure of carbon fibre inside a tank of paraffin as a phase-change composite [2]. They found that by including a volume fraction of just 2.4 % carbon fibre they could improve the thermal conductivity 24-fold, to 6.25 W m-1K-1. However, carbon fibre is relatively expensive.
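A parallel (upper-bound) rule-of-mixtures estimate shows why such a small fibre fraction helps so much. This is my own sketch, not Fukai's model, and the fibre conductivity of 250 W m-1K-1 is an assumed value (pitch-based carbon fibres span roughly 100-1000 W m-1K-1):

    # Upper-bound composite conductivity, parallel rule of mixtures:
    # k_eff = f * k_fibre + (1 - f) * k_pcm
    k_pcm = 0.21     # W/(m·K), paraffin (Table 1)
    k_fibre = 250.0  # W/(m·K), assumed pitch-based carbon fibre
    f = 0.024        # fibre volume fraction used by Fukai et al.
    k_eff = f * k_fibre + (1 - f) * k_pcm
    print(f"k_eff ~ {k_eff:.2f} W/(m·K)")  # ~6.20, near the reported 6.25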

A cheaper alternative would be to use expanded graphite as the lattice material instead. Think perlite/vermiculite, but composed of carbon; it is similar to the anode material of a battery. A recent study explored the potential of expanded graphite paired with molten salts for high-temperature solar-thermal applications [3]. This is the first study to examine carbon paired with molten salts, to my knowledge. The expanded graphite approach requires considerably more graphite by weight (20 % for most of the results), which in turn reduces the energy storage density. The results demonstrate that NaNO3 and KNO3 phase-change materials in a matrix of expanded graphite had a thermal conductivity of around 4 W m-1K-1, roughly an 8x increase. The authors state that this is still below their desired figure for a given graphite concentration.

In conclusion, the most heartening aspect of phase-change thermal materials is the sheer variety of options available. The development of composite phase-change materials is interesting but evidently proceeding slowly. The carbon fibre approach seems to offer superior performance for a given concentration, almost certainly because it provides a continuous conduction pathway for heat along the length of a fibre. Expanded graphite is, by nature, a more chaotic material, so there will be many small zones where heat is forced to travel across the less conductive phase-change material. For the space heating and cooling applications, feasibility is largely a function of regulatory structure. It's only worth doing on a large scale, so the political will would have to be present to move forward.

References
[1] B. Zalba et al., "Review on thermal energy storage with phase change: materials, heat transfer analysis and applications", Applied Thermal Engineering 23(3) (2003): 251-283.

[2] J. Fukai et al., "Thermal conductivity enhancement of energy storage media using carbon fibers", Energy Conversion and Management 41(14) (2000): 1543-1556.

[3] S. Pincemin et al., "Highly conductive composites made of phase change materials and graphite for thermal storage", Solar Energy Materials and Solar Cells 92(6) (2008): 603-613.

Update: Density numbers for the materials listed in Table 1. All numbers in kg/m³.
Water: 998 (@ 20 °C) / 917 (ice, @ 0 °C)
Paraffin C14: n.a.
Polyglycol E400: 1125 @ 20 °C
ZnCl2·3H2O: n.a.
Paraffin C22-C45: 795 @ 70 °C
Na(CH3COO)·3H2O: 1450
NaOH: 1690
31.9 % ZnCl2 + 68.1 % KCl: 2480
NaNO3: 2260
KNO3: 2110
38.5 % MgCl2 + 61.5 % NaCl: 2160
NaCl: 2160

There are many more in the references.
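With those densities, the latent heats in Table 1 can be put on a volume basis, which is what actually sizes a storage tank. A quick sketch for three of the materials:

    # Volumetric latent-heat density = latent heat (kJ/kg) x density (kg/m³).
    materials = {
        "Water/ice": (334, 917),
        "NaNO3":     (172, 2260),
        "KNO3":      (266, 2110),
    }
    for name, (latent, rho) in materials.items():
        print(f"{name}: {latent * rho / 1000:.0f} MJ/m³")
    # Water/ice ~306, NaNO3 ~389, KNO3 ~561 MJ/m³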

16 June 2008

Cost of Speculation

An excellent article on speculation in the commodities markets from the German magazine Der Spiegel:

The Attack on Prosperity: How Speculators Are Causing the Cost of Living to Skyrocket


I don't think there's any doubt there have been huge run-ups in commodities, starting in August 2007 with another bump in February 2008. To my mind, this is another bubble being blown by investment bankers desperate to mitigate or avoid realizing their losses in the subprime mortgage crisis. There's clearly no willingness to shine any light onto the books of the big banks, since probably 50 % of the top ten would be insolvent if they had to truly account for the value of their highly-leveraged mortgage-based instruments of financial suicide.

Like all bubbles, this one will probably go on for longer than seems possible, in spite of the clearly unsustainable nature of the beast. The investment losses, in the end, will only be that much bigger as a result. That said, I think this bubble will burst a little faster than the housing one. For one, turnover is much faster than in housing. The price of commodities is also rising much faster than housing did, so we'll reach the tipping point that much sooner.

03 June 2008

A Primer on Desiccation and Cryopreservation
(a.k.a. Storing Seeds)

I would like to take a brief break from energy issues for a moment to discuss a different topic: the science of freezing plant seeds to preserve them for the future. The cultivars of produce that we purchase in the supermarket are typically pretty bland, especially if they have to be shipped by reefer truck from California or, worse, Chile. On the other hand, I can buy heritage vegetables from my local farmer's market, figure out which varieties I like, and save their seeds. This isn't practical for all varieties (e.g. root vegetables like carrots) but the results can be impressive. The other goal here is to get a good yield, so that most of your seeds sprout, by following good practices.

Most of this comes from knowledge I gleaned from a graduate-level mechanical engineering course I took on cryogenics. (Note: cryogenics is the science of liquefaction and refrigeration at temperatures far below zero; cryopreservation is the science of freezing biological tissues with minimal damage; cryonics is freezing dead people's heads. Can you spot the quack?) This is not to say that I have years of experience successfully freezing seeds. This is simply some of my scientific knowledge of how one should approach the problem. I had a yield of 7.5 sprouts from 9 seeds planted from a heritage tomato I bought and seeded.

Cryopreservation can be used to freeze (small) whole organisms — I have read in books on cryopreservation that commercial goldfish can be frozen by liquid nitrogen, shipped by air from China, and defrosted. About half live, the rest die if you look at them wrong when you take them home. Cold-water fish are probably uniquely well suited to protection from freezing damage, since they have natural protective agents.

The basic problems with freezing tissue are that: 1.) water increases in volume when it freezes, and 2.) water forms potentially sharp crystals (dendrites) when it freezes. If an ice crystal punctures a cell wall, it tends to cause irreparable damage and the cell will lyse (rupture) upon defrosting.

It takes a great deal of energy to thaw ice (the equivalent of heating water by 80 °C, actually), and freezing requires taking away the same amount of energy. When water freezes, the process always starts at some local density variation (homogeneous nucleation) or some feature such as a protein (heterogeneous nucleation). When you freeze slowly, only a few nucleation sites form, and the bulk of the water amalgamates onto existing crystals. This leads to a relatively small number of large crystals. Faster freezing encourages more nucleation sites to form, so the end product is many small crystals. Hence the plunge into liquid nitrogen as the basis of most cryopreservation techniques.

There are two basic methods of cryopreservation: replacing the water with another fluid or solution (such as ethylene glycol) which vitrifies (freezes as an amorphous glass) rather than forming crystals, or dehydrating and then freezing. Since I'm talking about plants here, we're going to ignore the vitrification process, since it is technically much more challenging and impractical outside of a laboratory.

The approach, then, is to dehydrate the seeds before freezing. This acts to increase the concentration of solutes (sugar, etc.) in the cytoplasm, which in turn tends to inhibit the formation of large ice crystals. It reduces the likelihood that the cell walls will be burst when the water expands as it freezes. Aside: a lot of the literature on cryopreservation discusses the concept that intracellular ice observed in a cell tends to imply that the cell will not survive thawing. Functionally, what this really means is that if ice crystals large enough to observe with an optical microscope form inside the cell, then the cell will probably die.

Most people recommend drying seeds before storage. This page from Colorado State states that most seeds should be reduced to 8 % moisture content before storage. The typical process is to air dry over a week or two. If you live in a wet climate or are impatient, however, I would think it is safe to use a food dehydrator to speed up the process. The keys are not to let the temperature rise high enough to denature proteins (< 45 °C to be safe) and not to overdo it. Plants can tolerate greater dehydration than animals thanks to the cellulose in their cell walls, but you can still kill them. Unfortunately it's quite difficult to assess the moisture content of a seed. Also note the discussion on 'hard seed' at the above link.

After dehydration, you want to put the seeds into a dry environment so that they don't try and germinate. For this, you need an air-tight box (a desiccator) and some material that strongly adsorbs water (a desiccant). For home use, a hundred-dollar laboratory desiccation cabinet is overkill. A glass mason jar with a rubber seal and a tight clamping mechanism should work fine. You may want to grease the rubber gasket with a silicone grease so that it remains pliable in the freezer. When greasing a gasket, you want to work the grease into the material, and then wipe it clean with a paper towel until it no longer feels tacky. Avoid greases with petroleum (i.e. Vaseline) in them as they will break down rubber polymers. Ideally you should also wear latex or nitrile gloves to protect the gasket from the oil on your hands but that would be overkill for non-vacuum applications.

All you need then is a desiccant to put into your cheap desiccator before you pop it into the freezer. You've probably seen a desiccant before in a pill bottle marked "Silica gel - Do Not Eat."
Desiccants are hygroscopic (they adsorb water very readily) so they will basically suck all the water out of the atmosphere of your jar. Personally I would probably buy a cartridge (such as this one from Fisher Scientific) for convenience, but there are likely cheaper suppliers. Most desiccants can be regenerated by putting them into an oven and baking them above the boiling point of water for a while. This will drive off the water they have adsorbed onto their surface. Keep in mind that if you leave them exposed to the atmosphere for very long they will saturate and no longer fulfill their function.

As I mentioned previously, one of the primary damage mechanisms in cryopreservation is the sintering of ice crystals during thawing. Hence repeatedly defrosting and re-freezing is very harmful and likely to reduce your yields significantly. As such, you need to store the seeds in a deep freeze or a freezer with no automatic defrost cycle. Incidentally, this is why stuff stored in a 'frost-free' freezer tends to turn into mush after enough time.

Commercial cryopreservation processes often use microwaves to defrost material quickly. I wouldn't recommend this for home use, however. Home microwaves are generally too powerful and will cook the seeds. Turning down the power won't help, as microwaves simply run 1/10th of the time when set to 10 % power. The obvious solution is to soak the seeds in tepid water. This will thaw them quickly and start the rehydration process as well, encouraging them to germinate.

If you observe the above steps, and remember to sterilize your soil mix in the oven before you plant your seeds, I think you'll be pleasantly surprised at just how many of your seeds germinate.

21 May 2008

Photovoltaic Update

According to photovoltaic industry analyst SolarBuzz, total PV installations in 2007 were 2826 MWpeak, representing a growth rate of 62 % (!!!) over 2006. By way of comparison, Worldwatch claims that PV installations were 2935 MWpeak in 2007 (hat tip Peak Energy).

Germany continues to be the main driver for the PV industry, although Spain is now coming on very strong with their subsidy program as well. Ontario now has a similarly (over) generous subsidy program in operation so we are starting to see many announcements for PV power plants there as well. Japan is falling behind as their subsidy program was for a fixed capacity (i.e. 100,000 homes).

Thin film is growing much faster than poly- and mono-crystalline silicon. SolarBuzz claims growth of 123 %, from 180 MW to 400 MW of installed capacity. Since a lot of the newer thin-film capacity is either CdTe or microcrystalline silicon rather than the simpler amorphous silicon (which happens to degrade more quickly), the 400 MW figure is probably 'firmer' than the 180 MW deployed in 2006.

The current leader of the direct-bandgap thin-film solar industry is First Solar of Ohio. The manufacturer of CdTe thin-film solar cells has gone from $67 million in sales in the first quarter of 2007 to $197 million in the first quarter of 2008. Net profits increased 830 %, from $5 million to $46.6 million. With profits at about 25 % of sales, they have a much higher profit margin than most industries, including any oil major. That tends to imply they will be able to grow their production capacity very, very fast. They are currently advertising for 105 positions. According to the above report, First Solar is selling their modules for $2.45/Wpeak, and since the cost of sales is 47 % of total sales, that implies a cost of $1.15/Wpeak.

It will be interesting to see how the CIGS manufacturers stack up. As long as the price of solar is supported by overly generous government subsidies we aren't going to see technology sorting out winners and losers in the market, however.

Update: in case you wonder what $1.15/Wpeak means, I calculate that for an environment with a capacity factor of 0.2 (i.e. San Francisco), when amortized over 25 years it works out to under $0.04/kWh. Each peak Watt will average 1.6 kWh/annum (a maximum of 1.75 kWh in the first year, dropping by 20 % over 25 years). Assumptions: energy inflation of 2.5 %/annum, general inflation of 2.5 %/annum, interest on financing of 6.0 %/annum. You have to add all ancillary costs (such as frames, inverter, etc.) onto that four-cent figure, but the point remains obvious.
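For anyone who wants to check the four-cent figure, here's a minimal amortization sketch. The exact financial treatment above isn't spelled out, so this discounted-cost-per-discounted-kWh version lands slightly above $0.04/kWh; treat it as ballpark confirmation rather than a reproduction:

    # Rough levelized cost of the PV module alone: $1.15/Wp over 25 years,
    # capacity factor 0.2, output degrading 20 % over the period, 6 %
    # financing discounted against 2.5 % energy inflation.
    capex = 1.15                     # $ per peak Watt
    years, cf = 25, 0.20
    discount, escalation = 0.06, 0.025

    pv_kwh = 0.0                     # discounted kWh per peak Watt
    for t in range(1, years + 1):
        kwh = cf * 8.76 * (1 - 0.2 * t / years)   # kWh/Wp in year t
        pv_kwh += kwh * ((1 + escalation) / (1 + discount)) ** t

    print(f"~${capex / pv_kwh:.3f}/kWh (module only)")  # -> ~$0.04/kWh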

28 April 2008

Magna Proposes Plug-in Hybrid

An article by George Keenan in the Globe and Mail has revealed that Magna International Inc., the huge Canadian automobile parts manufacturer, is proposing to build a plug-in hybrid vehicle by 2010. The owner, Frank Stronach, was interviewed by Keenan and stated,

New technologies such as hybrids offer a great market for Magna's parts and its ability to build complete vehicles, Mr. Stronach said in an interview, noting that cars with Magna-developed hybrid engines are already being tested in Europe. "You don't have to be a great scientist to know that we're going to be out of oil sooner or later," Mr. Stronach said.

The effort is being fronted by Magna Steyr in Austria. Steyr actually builds complete vehicles that are badged under other manufacturers. Magna is budgeting $30 million for the effort, which is hardly insignificant when you consider it's spread over only two years, even for corporate research (i.e. no cheap graduate student labour). Overall I think that their project will be late, however, unless their goal is simply 'proof of principle'. That said, Magna claims to already be working on the project, so who knows how long they've kept this under wraps. Magna is also involved with the Tesla Roadster.

My personal suspicion is that Magna is looking to get into the hybrid parts business, and developing a complete vehicle is a way for them to achieve this. Magna has previously stated that they think hybrid sales will top 1.7 million a year by 2013. Right now Ford's hybrid efforts are stymied by the fact that most of their supply chain is Japanese. GM will have similar issues with the Volt.

This project is largely just good business practice by Magna in the face of increasing fuel costs. The article makes clear that the viability of a plug-in hybrid is a function of the difference between the price of electricity and the price of gasoline. Fortunately, plug-in hybrid technology can be rolled out in small increments, gaining market share first from the early adopters and then through economic advantage as the price of batteries declines and the price of gasoline increases.

I did take a look at the comments on the article (probably a mistake), and I was a little annoyed to see people suggesting that we don't have the electricity, or that the power would be supplied by "Ohio coal plants." Practically, all the early adopters of plug-in cars can be recharged by the idle capacity that exists at night. Giant thermal plants aren't easily throttled down, so there is typically a surfeit of power that is otherwise wasted at night. In the future, plug-ins can be aggregated to act as 'deferrable demand' for the power utilities, smoothing out the intermittency of solar and wind power and allowing greater market penetration by those technologies.

23 April 2008

Recycled Steam

So I'm in the process of moving (yet again), which has put the brakes on me producing another technical post. So, in lieu of that, I think I'll beat on a journalist. Yeah I know, it's like killing kittens, they're just so cute and helpless, but hey, if it helps me vent some frustrations, it's all good.

So I recently read in The Atlantic (home of the esteemed James Fallows) an article by Lisa Margonelli on combined heat and power. In general, this article is fairly good, especially the second half. However, I still saw some paragraphs that rankled. Let's dive in, shall we?
The U.S. economy wastes 55 percent of the energy it consumes, and while American companies have ruthlessly wrung out other forms of inefficiency, that figure hasn’t changed much in recent decades.
and, later,
For the better part of a century, we’ve gotten electricity from large, central generators, which waste nearly 70 percent of the energy they burn.
A ha! Yes, the ever-popular confusion regarding the difference between useful work and waste heat. Once is forgivable as editorial discretion, but twice is a pattern. Let's take nice, high-pressure, hot steam and pass it through a steam turbine. Surprise: we lose heat and pressure from the steam in order to run the Rankine cycle. You can take that steam and pass it through another turbine, but the second cycle will get a lot less electricity out for the same capital cost. The final potential use is to take the latent heat from the low-quality steam and dump it somewhere: process heat for drying, heating the factory floor, or speeding up some chemical reaction.

Ok, so definition time: this article is about combined heat and power (CHP), but you won't see those words anywhere in it. In fact, the article only talks about the reverse arrangement: capturing waste heat to make electricity.

CHP usually aims to take an industrial activity where you burn a fossil fuel for process heat, run the fuel through an electricity-generation step first, and use the waste heat for the process. Margonelli, on the other hand, provides an example where electricity is used for heating. This isn't common, because electricity is still far more expensive than natural gas on a pure dollar-per-Joule basis. In fact, it's only used when you need either extremely high purity or extremely high temperatures. Such as,
Heat, which in some industrial kilns reaches 7,000F, can be used to produce more steam.
tungsten tool making, for one. Needless to say, most industrial activity isn't involved in the manufacture of refractory materials, zone-refined silicon, etc. This is my major problem with the article: the example provided isn't very representative of industrial uses of heat. What can be economic for a specialty steel refiner probably isn't for an ethanol plant or an oil refinery.

TANSTAAFL (There Ain't No Such Thing As A Free Lunch) applies here as much as anywhere. For some plants, better insulation may be a better buy.
In some industries, investments in energy efficiency also suffer because of the nature of the business cycle. When demand is strong, managers tend to invest first in new capacity; but when demand is weak, they withhold investment for fear that plants will be closed. The timing just never seems to work out. McKinsey found that three-quarters of American companies will not invest in efficiency upgrades that take just two years to pay for themselves.
This says a lot more about business leaders' acumen than about the particulars of an efficiency upgrade. If you can't generate some cash flow to invest in capital equipment (and that is what we are discussing here: a gain in productivity) during a boom, you probably aren't going to survive the inevitable bust. Why the emphasis on capacity growth? Are the CEOs really that concerned about losing market share? Or is this just an example of knee-jerk Brownian attitudes? Or are executives just really dumb? (Don't answer that.)

There's another giant impediment to CHP that the article sort of dances around but never really addresses. In the giant race to the bottom in labour costs (i.e. off-shoring), it is a pretty big gamble for a power plant to set up for combined heat and power and then hope that its steam customer will still be around in five years. Low-grade steam isn't something you can pump around the state to find a new customer, because you'll simply bleed it all off as parasitic losses in the pipeline. So I think the emphasis on free trade, which has introduced such volatility in the cost of labour, is probably a big part of the general failure of CHP to have a big impact on our energy economy.

The other reason CHP hasn't really taken off is that natural gas hasn't turned out to be as cheap or as fungible as expected, and it's the only fossil fuel that's really clean enough for decentralized power and easily pipelined. Coal isn't.

02 April 2008

Boom and Bust Stifles Non-resource Economic Activities

Canada is, by and large, a resource-based economy. There's significant manufacturing in Ontario and Quebec, but most of the country operates on the principle of collecting natural resources and selling them to more populous countries. Basically we take advantage of our low population density relative to the fact that we're the 2nd-largest country in the world. Being a resource economy comes with the drawback that you live at the mercy of the large economies of the world.

One oft-heard complaint is that we don't process our raw materials to add value to them to any significant degree. In British Columbia in the 1990s, the cry was over raw logs being exported to Japan without any milling.

The Globe and Mail's Inside Energy Blog had a post up recently on how the provincial and federal governments were showing no interest in pushing bitumen producers towards upgrading the product to synthetic crude in Alberta. Rather, they pipeline the bitumen (and presumably some solvent) South to the terminals around Chicago so that it can be upgraded there. The obvious complaint by unionized workers is that it should be done locally.

Technically, upgrading the bitumen elsewhere makes Alberta's carbon dioxide emissions look just a little better, but the net addition to the atmosphere is still going to be the same. The reason the corporations might want to do this is pretty obvious: labour is very expensive in Alberta, and much cheaper in the American Midwest.

The problem with this whole concept of trying to encourage a "value-added" industry is that it simply cannot survive the boom-and-bust resource cycle. To put it simply: if you were a manufacturer, would you want to put your operation in Alberta knowing that in a boom all your costs would inflate like crazy and your employees would decamp for the oil patch? And in a bust, the USA is likely in a recession, so you hurt then too. It seems like a no-win situation.

Peter Lougheed (famous ex-premier) is well known for wanting to develop a plastics industry in the province, but I simply don't see it happening without a radical change in the royalty structure. The development of "value-added" industry would require provincial governments to apply a brake to resource development when commodity prices are high, something they generally don't have the discipline to do.

31 March 2008

Lifetime Electricity Costs for High-end Video Cards

So a few weeks ago I discussed how we're going to need a greater emphasis on low-power-consumption electronic design in the future. I thought it would be helpful to actually put down some numbers and see how big a cost power consumption is for high-end computer equipment. On some equipment, especially hardware used in server farms such as hard drives, power draw is already an important metric. For other components, we often are not even given consumption numbers by the manufacturers.

High-end graphics cards are becoming particularly power hungry, as this chart by anandtech.com shows. The two current best-performance-at-a-reasonable-price video cards are the 8800GT by NVidia and the Radeon 3870 by ATI. Even idling, these systems chew through an impressive amount of power: 165 W in the case of the NVidia product and 125 W for the ATI one. I don't know whether anandtech.com's methodology is normalized for the efficiency of the power supply, but regardless, they are burning a lot of power while doing next to nothing. Average US residential electricity prices reached 10.4 ¢/kW·h in 2006.

Let's assume the card spends its entire two-year lifetime always on and at idle. "Idle" in this case basically extends to using any 2-D application, such as browsing the internet or using a word processor. Only 3-D accelerated games are going to stress these systems to any significant degree.

Video Card                      | NVidia 8800GT | ATI Radeon 3870
Initial Purchase Price          | $260          | $255
Idle Power Consumption          | 165 W         | 125 W
Expected Lifetime               | 17,500 hours  | 17,500 hours
Lifetime Est. Power Consumption | 2887.5 kW·h   | 2187.5 kW·h
Lifetime Electricity Cost       | $300.30       | $227.50
Total Cost                      | $560.30       | $482.50

As we can see, the operational cost of these two cards is roughly comparable to the purchase price. Even at the modern price of gasoline, automobiles don't have such a high proportion of their lifetime cost tied up in fuel.
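The arithmetic behind the table, for anyone who wants to substitute their own local electricity price (a sketch using the numbers above):

    # Lifetime cost of a video card left idling for two years straight.
    def lifetime_cost(price, idle_watts, hours=17_500, cents_per_kwh=10.4):
        kwh = idle_watts * hours / 1000
        return price + kwh * cents_per_kwh / 100

    print(f"NVidia 8800GT:   ${lifetime_cost(260, 165):.2f}")  # $560.30
    print(f"ATI Radeon 3870: ${lifetime_cost(255, 125):.2f}")  # $482.50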

Both of these cards come with 512 MB of fast memory spread over eight chips. However, a single frame buffer at 1280x1024 pixels with 32-bit colour depth requires less than 6 MB of RAM, so there's no need to keep all that memory powered for the vast majority of computer applications. One chip should suffice for a triple-buffered display.
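The frame buffer arithmetic, as a sketch:

    # Memory needed to triple-buffer a 1280x1024 display at 32 bits per pixel.
    bytes_per_buffer = 1280 * 1024 * 4       # 4 bytes per pixel
    total_mb = 3 * bytes_per_buffer / 2**20
    print(f"{total_mb:.1f} MB")              # 15.0 -- one 64 MB chip suffices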

Similarly, it should be technically possible to clock down the processor and bus speeds dynamically to reduce the power consumption of the GPU. Alternatively, one could embed a slow GPU for 2D applications. Most motherboards are available with on-board video on the Northbridge chipset which is just fine for web browsing (useful if you ever want to flash the BIOS on your video card BTW). The marginal cost of on-board graphics is probably around $5 to the manufacturer.

I find it somewhat surprising that neither of the major graphics manufacturers has tried to radically improve the power performance of their cards. There is, potentially, a major competitive advantage to be had. For example, if ATI were to spend $5 per card to drop the idle power requirement to 1/8th that of the NVidia model, and advertise that fact and the estimated savings aggressively, they could recapture a lot of the market share they've ceded since the heyday of the Radeon 9700 Pro.

16 March 2008

Carbon Trading, Bubble Hysteria

In the past, I thought that carbon trading of the style proposed by the Kyoto treaty could be a positive way to effect change, both from the point of view of climate change and of peak oil. I have gradually come to change my mind, and I now favour a vanilla carbon tax with no loopholes. My decision was largely made watching the fallout from the dot-com bust, and now the US mortgage security shenanigans.

Anything that Wall Street can game to enrich itself, it will game. These crony capitalists with their derivatives and good-old-boys compensation schemes are really the enemy of free-market entrepreneurship. If you bought $50 puts on Bear Stearns on Monday (10 Mar 2008), you gained a lot of money, but no wealth was created.

I fail to see any advantage in giving Wall Street access to the carbon market.

A lot of people suspect that the recent run-up in commodities is largely money flowing out of mortgage securities and into commodities. I am not convinced, for a number of reasons.

Past pump-and-dumps in commodities (such as nickel) could work because you can store an entire year's worth of the world's nickel production in a single large warehouse. On the other hand, a day's worth of oil production is roughly a cube about 240 m on each side. It's very difficult to take oil out of the system unless you are a national oil company.
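For scale, a sketch assuming world production of roughly 85 million bbl/day (approximately the 2008 figure):

    # Side of a cube that would hold one day of world oil production.
    bbl_per_day = 85e6
    volume_m3 = bbl_per_day * 0.159      # one barrel is ~0.159 m³
    print(f"~{volume_m3 ** (1/3):.0f} m per side")  # -> ~240 m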

Furthermore, demand remains remarkably inelastic. Predictions of any tipping point where demand suddenly falls off at some price-point haven't panned out. When oil is consumed, it's really gone.

In addition, a huge hunk of the recent run-up in crude oil prices is simply due to the devaluation of the US dollar. The proof is in the US dollar index. So yes, the US is getting hosed on its oil consumption, but the majority of the world's consumption is pretty well hedged against this rise.

China even subsidizes the cost of oil to their citizen-consumers. They have to do something with their dollar reserves. So even if we see a lot of demand destruction for petroleum from the USA, it's not clear whether that will really hammer the price of oil back down to $80 for a sustained period. The twin inflationary and deflationary pressures currently at war, coming from the US Federal Reserve and Wall Street respectively, make that an extremely difficult call.

I know one thing for sure. I will never hire someone with an MBA on their resume.

This brings up another question, namely is there potential for a bubble in investment in the so-called 'Cleantech' sector?

The world economy is in a slow transition from fossil fuels to alternative sources of energy, true or false? If you answer "true," then your only reasonable explanation for a bubble would be that the alternatives are growing at an unsustainable rate relative to the increase in the price of fossil fuels.

Unlike, say, Pets.com or granite countertops, a wind turbine or a photovoltaic panel has intrinsic value. They produce electricity, which is a very high-quality form of energy. I can calculate the net present value of a set of photovoltaic panels to a rather high degree of accuracy (~10 %), merely by noting the climate in which they are installed and their age.

The gap between the cost of doing work with oil as your energy source and doing it with electricity continues to widen. Consider: with electricity at $0.09/kWh, natural gas futures at $10.00/MMbtu, and oil at $111/bbl, the value of switching to electrons may pay back quickly. Note: these numbers are changing as fast as I can type this article.

Energy Currency | Energy Cost (US$/GJ) | Energy-to-Work Efficiency | Exergy Cost (US$/GJ)
Electricity     | 25.00                | 1.0                       | 25.00
Natural Gas     | 9.50                 | 0.4                       | 23.70
Crude Oil       | 17.35                | 0.35                      | 49.55

At this point, electrifying train tracks or heating your home with a heat pump looks really good going forward (natural gas isn't nearly as fungible as oil). Look at it this way: there are 153 million employed people in the US, and the country consumes 19.6 million barrels of oil a day. That's $14.20/day, or $5190 a year, per (money-earning) person at current prices. That's a lot of Starbucks.

There is a potentially enormous sum of money to be made in weaning North America, Japan, and Europe off the oil habit. It's not going to be easy, since there is still a massive amount of fossil fuels in the Earth's crust. The saving grace of the alternative energy industry is that its costs will go down with time, whereas fossil fuel companies will have to extract poorer and poorer quality resources and hence become more expensive.

Of course, not everyone involved in cleantech will be an idealist. A number of companies will be formed with the express aim of relieving investors of their capital. These fraudsters will primarily aim at people conceited enough to believe that they understand science but lacking the formal education to evaluate what they are seeing in numerical terms. I'm looking at the dot-com millionaires here. Beware the Rube Goldberg machine, or the company whose salaries make up a much higher proportion of its expenses than equipment does.

I will say, from personal experience, that doing research in a corporate environment, where every line of research has to have an immediate application and money for equipment is tight, isn't very efficient compared to government-funded labs. Now the bureaucracy, well...

11 March 2008

Sequestration in the Oil Sands

Soooo.... last year the federal government of Canada introduced a bunch of new environmental programs. This year, they threw a lot of that out the window. Now we have a new environmental program: requiring projects that produce large quantities of carbon dioxide to employ sequestration. These large sources are coal plants and oil sands developments. The obvious loophole for everyone to observe is that it only applies to projects started after 2011, so anything built before then is effectively grandfathered in.

I'm not sure the Conservative government actually intends to go through with this. After all, they are a minority government, and while the opposition has no stomach for a new election, they aren't likely to last until 2011. The proof will really be in the activity in the oil patch. If companies all rush to start projects before 2011 and schedule nothing after that, then maybe the Conservatives are actually serious.

Another question that crosses my mind is the number of good sequestration locations in close proximity to the main oil sands patch by Fort McMurray. Alberta is, generally speaking, a big sedimentary basin, but the northeast portion of the province is somewhat different, if my memory is correct.

Personally, I foresee the cost of sequestering 'dirty' fuel sources such as bitumen or bituminous coal being onerous. Alberta already has the highest electricity prices in the nation, and they will only climb faster with the introduction of sequestration.

15 February 2008

The Difference between Economics and Physics

In economics, one assumes people are rational. In physics, one tests assumptions.

Figures: Economics and Physics. Both graphs feature real data of complicated systems; according to theory, both should be linear.

14 February 2008

Low-power Control Electronics

Phase-change Memory
Intel has recently shipped a beta-test commercial phase-change RAM (hat tip: Fraser's Energy Blog). I refer readers to the Wikipedia article on PCRAM for background. Phase-change RAM relies on the application of heat to transform a chalcogenide glass from a glassy (amorphous) to a crystalline state. This is the same process used in writing to DVDs, except that instead of applying heat with a laser, it's applied with an electric current. In a DVD, it's the index of refraction of the material that is altered, while in PCRAM it's the resistivity of the material that stores the bits.

Aside: the EETimes.com article I linked states that the roll-out of this product was delayed because Intel couldn't get a big loan for its new subsidiary operation. Another result of the debt bubble, I guess.

Phase-change RAM is interesting from an energy perspective in that it is non-volatile, i.e. it requires no power to retain data, unlike conventional RAM. You only need to apply power when you want to read from or write to it. It's also, apparently, much faster and more robust than flash memory. Some of the discussion around this technology suggests that memory access requires around 5 ns, which would make it comparable to DDR2-800 RAM with standard memory timings.

Intel is delivering a testbed 128 MB chip, which is obviously on the small side for the moment. Compare that to my 16 GB flash USB stick. As they move to a smaller process, the memory density will increase geometrically.
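
For a rough sense of "geometrically": cell area scales roughly as the square of the feature size, so capacity per die grows as the square of the shrink ratio. The node sizes and starting point below are illustrative assumptions, not Intel's roadmap:

    # Rough memory-density scaling with process node (illustrative).
    base_mb, base_nm = 128, 90   # assumed starting capacity and node
    for nm in (90, 65, 45, 32):
        # Cell area ~ F^2, so capacity per die ~ (F_old / F_new)^2.
        print(f"{nm} nm: ~{base_mb * (base_nm / nm) ** 2:,.0f} MB per die")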

So readers may be wondering, "Nice, but what does this have to do with energy?" It's a simple fact that renewable power sources like photovoltaics and wind turbines are going to require control electronics in household appliances in order to manage the demand side. The first step is to have net and smart metering, and then to link that to items like the air conditioner and refrigerator in order to create a pool of deferrable demand for the power utilities.

Let's say you have a refrigerator burning 400 kWh per year; that's equivalent to a constant power draw of about 45 W. Does it make any sense to add 40 W of control electronics in order to communicate with the utility and cycle the fridge on and off as the wind blows? Obviously not. Hence the need for control electronics that barely sip electrons. Non-volatile memory is a big part of this. The items to look for are low-voltage processors, either with radically scalable clock speeds or with multiple cores where a slow core wakes the faster one, and highly efficient AC-to-DC power supplies.
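
The fridge arithmetic, plus the implied power budget for the controller; the 10 % figure for how much load is worth deferring is an assumption on my part:

    # Average fridge draw and a tolerable controller budget.
    HOURS_PER_YEAR = 8760
    fridge_kwh_per_year = 400

    avg_draw_w = fridge_kwh_per_year * 1000 / HOURS_PER_YEAR
    print(f"Average draw: {avg_draw_w:.0f} W")          # ~46 W

    # If deferring demand saves the utility, say, 10 % of that load
    # (assumed), the control electronics must draw far less than this:
    print(f"Controller budget: << {0.10 * avg_draw_w:.1f} W")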

Actually migrating from our current vampire appliances to something brighter is going to take a long, long time if we rely on electricity costs and consumer awareness to do the job. For one thing, many appliances don't come with labeling that indicates their power consumption, and they should, so that a green consumer can make an informed choice. Practically, government regulation is required to push all manufacturers toward low-power electronics. The objective would be to ensure that the leaders in power-thrifty electronics aren't put at a competitive disadvantage against lower-tech competitors who don't have to put up the investment in R&D. Mandates have had enormous success in improving the efficiency of air conditioners and refrigerators in recent years. Electronics is a sector that needs much the same.

Negative Capacitance Transistors
On the topic of low-voltage processors, Sayeef Salahuddin and Supriyo Datta proposed in Nano Letters a concept for radically reducing the operating voltage required to switch the gate in a transistor (warning: math skills and subscription required for access). By way of reminder, the power an electronic circuit draws is the product of current and voltage, and since the current through a given load itself scales with voltage, power goes as the square of the voltage:
P = IV = V^2/R
such that the voltage a circuit operates at matters a great deal for power consumption, and for how much heat needs to be extracted from it. Nominally, a field-effect transistor (FET) requires about 60 mV of gate swing for each tenfold change in current; this is the room-temperature subthreshold limit, and it sets a floor on the operating voltage. As electronics get smaller and smaller, power density increases, which makes components run hotter and hotter. The fundamental limitation on Moore's Law is not so much components shrinking to the atomic scale as it is pulling the heat out before the chip melts.
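
A quick illustration of why that 60 mV figure matters: the supply voltage has to swing the transistor through several decades of current between "off" and "on", and the power penalty for that voltage is quadratic. The on/off ratio below is an assumed, typical target:

    # Minimum gate swing at the thermionic limit, and the V^2 payoff.
    import math

    SWING_MV_PER_DECADE = 60   # room-temperature subthreshold limit
    ON_OFF_RATIO = 1e5         # assumed target for a digital switch

    v_min = math.log10(ON_OFF_RATIO) * SWING_MV_PER_DECADE / 1000
    print(f"Minimum gate swing: {v_min:.2f} V")   # 0.30 V

    # A steeper-than-60 mV/decade device could halve the swing, and
    # since P ~ V^2, that alone would cut power by a factor of four:
    print(f"Power ratio at half the voltage: {0.5 ** 2:.2f}")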

What Salahuddin and Datta are proposing is to replace the standard insulating dielectric portion of the gate with a ferroelectric insulator (such as BaTiO3) that would exhibit 'negative capacitance' when placed in series with a regular capacitor. A negative capacitor is one where the charge stored decreases with increasing voltage. The authors show that such a device would basically act as a voltage transformer/amplifier, allowing the control signal that initiates switching to be much smaller.
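
A sketch of the series-capacitance argument as I read it; the capacitance values are made up for illustration. Two capacitors in series share the same charge, so the voltage across the ordinary capacitor is V × C_fe/(C_fe + C_s); if C_fe is negative and larger in magnitude than C_s, that ratio exceeds one and the internal node sees an amplified voltage:

    # Voltage amplification from a negative capacitor in series.
    # Capacitance values are made up for illustration.
    def internal_gain(c_fe, c_s):
        """Fraction of the applied voltage appearing across the ordinary
        capacitor C_s when in series with a ferroelectric C_fe (possibly
        negative). For series capacitors, V_s / V = C_fe / (C_fe + C_s)."""
        return c_fe / (c_fe + c_s)

    c_s = 1.0   # ordinary gate capacitance (arbitrary units)
    for c_fe in (-2.0, -1.5, -1.2):
        print(f"C_fe = {c_fe:+.1f} -> gain = {internal_gain(c_fe, c_s):.1f}")
    # gains of 2.0, 3.0, 6.0: more voltage inside than applied outside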

This proposal is, at the moment, a theoretical construct. We will probably see one of the universities with extensive fabrication laboratories try to build an experimental device sooner rather than later, however.