
Category Archives: Clean energy

Plastics have become an integral part of our lives. Plastic constitutes about 12% of the municipal solid waste (MSW) generated in the USA, a sharp increase from just 1% in 1960. The growing use of plastics has created environmental problems: higher energy and water consumption, greenhouse gas emissions and, finally, waste disposal and health issues. Many countries are now trying to ease the disposal problem by reducing usage, recovering fuels from plastics and recycling. However, a large quantity of plastic is still sent to landfills, creating long-term health problems.

“According to the EPA:

  • 31 million tons of plastic waste were generated in 2010, representing 12.4 percent of total MSW.
  • In 2010, the United States generated almost 14 million tons of plastics as containers and packaging, almost 11 million tons as durable goods, such as appliances, and almost 7 million tons as non-durable goods, such as plates and cups.
  • Only 8 percent of the total plastic waste generated in 2010 was recovered for recycling.
  • In 2010, the category of plastics which includes bags, sacks, and wraps was recycled at almost 12 percent.
  • Plastics also are found in automobiles, but recycling of these materials is counted separately from the MSW recycling rate.

How Plastics Are Made

Plastics can be divided in to two major categories: thermosets and thermoplastics. A thermoset solidifies or “sets” irreversibly when heated. They are useful for their durability and strength, and are used primarily in automobiles and construction applications. Other uses are adhesives, inks, and coatings.

A thermoplastic softens when exposed to heat and returns to original condition at room temperature. Thermoplastics can easily be shaped and molded into products such as milk jugs, floor coverings, credit cards, and carpet fibers.

According to the American Chemistry Council, about 1,800 US businesses handle or reclaim post-consumer plastics. Plastics from MSW are usually collected from curbside recycling bins or drop-off sites. Then, they go to a material recovery facility, where the materials are sorted into broad categories (plastics, paper, glass, etc.). The resulting mixed plastics are sorted by plastic type, baled, and sent to a reclaiming facility. At the facility, any trash or dirt is sorted out, then the plastic is washed and ground into small flakes. A flotation tank then further separates contaminants, based on their different densities. Flakes are then dried, melted, filtered, and formed into pellets. The pellets are shipped to product manufacturing plants, where they are made into new plastic products.
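The density-based flotation step described above can be sketched in a few lines of Python. The resin densities below are typical handbook values (assumptions, not figures from this article), and real facilities use more involved multi-stage sorting:

```python
# Illustrative sketch (not an industrial control system): a flotation tank
# separates mixed plastic flakes by density relative to the bath liquid.
# Approximate resin densities in g/cm^3; actual values vary by grade.
RESIN_DENSITY = {
    "PET": 1.38,   # sinks in water
    "HDPE": 0.95,  # floats in water
    "PVC": 1.38,   # sinks
    "LDPE": 0.92,  # floats
    "PP": 0.90,    # floats
    "PS": 1.05,    # sinks
}

def separate(flakes, bath_density=1.0):
    """Split a list of resin names into (floaters, sinkers) for a bath
    of the given density (plain water by default)."""
    floaters = [r for r in flakes if RESIN_DENSITY[r] < bath_density]
    sinkers = [r for r in flakes if RESIN_DENSITY[r] >= bath_density]
    return floaters, sinkers

floats, sinks = separate(["PET", "HDPE", "PP", "PS"])
```

Changing the bath density (for example with a salt solution) lets a plant split the sinking fraction further, which is why a single tank is rarely the whole story.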

Resin Identification Code

The resin identification coding system for plastic, represented by the numbers on the bottom of plastic containers, was introduced by SPI, the plastics industry trade association, in 1988. Municipal recycling programs traditionally target packaging containers, and the SPI coding system offered a way to identify the resin content of bottles and containers commonly found in the residential waste stream. Plastic household containers are usually marked with a number that indicates the type of plastic. Consumers can then use this information to determine whether certain plastic types are collected for recycling in their area. Contrary to common belief, just because a plastic product has the resin number in a triangle, which looks very similar to the recycling symbol, it does not mean it is collected for recycling.

SPI Resin Identification Code | Type of Resin Content

1 | PET
2 | HDPE
3 | Vinyl
4 | LDPE
5 | PP
6 | PS
7 | OTHER
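The code-to-resin mapping lends itself to a simple lookup; this small Python sketch just restates the SPI table:

```python
# The SPI resin identification codes (numbers 1-7 molded into the bottoms
# of plastic containers), as a lookup table.
SPI_CODES = {
    1: "PET",
    2: "HDPE",
    3: "Vinyl",
    4: "LDPE",
    5: "PP",
    6: "PS",
    7: "OTHER",
}

def resin_for_code(code):
    """Return the resin name for an SPI code, or None if unrecognized."""
    return SPI_CODES.get(code)
```

As the article notes, the code identifies the resin only; it does not mean the item is accepted for recycling locally.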

Markets for Recovered Plastics

Markets for some recycled plastic resins, such as PET and HDPE, are stable and even expanding in the United States. The US currently has the capacity to recycle plastics at a greater rate: both the capacity to process post-consumer plastics and the market demand for recovered plastic resin exceed the amount of post-consumer plastics actually recovered from the waste stream. The primary market for recycled PET bottles continues to be fiber for carpet and textiles, while the primary market for recycled HDPE is bottles, according to the American Chemistry Council.

Looking forward, new end uses for recycled PET bottles might include coating for corrugated paper and other natural fibers to make waterproof products like shipping containers. PET can even be recycled into clothing, such as fleece jackets. Recovered HDPE can be manufactured into recycled-content landscape and garden products, such as lawn chairs and garden edging.

Source Reduction

Source reduction is the process of reducing the amount of waste that is generated. The plastics industry has successfully been able to reduce the amount of material needed to make packaging for consumer products. Plastic packaging is generally more lightweight than its alternatives, such as glass, paper, or metal. Lighter weight materials require less fuel to transport and result in less material in the waste stream.”

Source: EPA.

 


A new concept known as “hydraulic fracturing” to enhance the recovery of landfill gas from new and existing landfill sites has been tested jointly by Dutch and Canadian companies. They claim it is now possible to recover such gas economically and liquefy it into Bio-LNG for use as a vehicle fuel and to generate power.

Most biofuels around the world are now made from energy crops such as wheat, maize, palm oil and rapeseed, and only a minor part is made from waste. Such a practice is not sustainable in the long run, considering the food shortages anticipated from climate change. The EU wants to ban biofuels that use too much agricultural land and to encourage biofuels made from waste rather than food material. There is therefore a need to collect the methane gas emitted by landfill sites more efficiently and economically, so that it can compete with fossil fuels.

There are about 150,000 landfills in Europe holding some 3-5 trillion cubic meters of waste (Haskoning 2011). All landfills emit landfill gas; methane emissions from landfills are estimated at between 30 and 70 million tons each year. In the USA, landfills contributed an estimated 450 to 650 billion cubic feet of methane per year (in 2000). One can either flare landfill gas or use it to generate electricity, but it is more prudent to produce the cleanest and cheapest liquid biofuel: Bio-LNG.
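As a rough sanity check, the US figure of 450-650 billion cubic feet of methane per year can be converted to tonnes. The methane density used below is a standard-conditions value, so the result is only approximate:

```python
# Rough unit conversion: billion cubic feet of methane per year -> million
# tonnes per year. Assumes methane density of ~0.716 kg/m^3 at 0 C, 1 atm.
FT3_TO_M3 = 0.0283168
CH4_DENSITY_KG_M3 = 0.716

def bcf_to_megatonnes(billion_cubic_feet):
    cubic_metres = billion_cubic_feet * 1e9 * FT3_TO_M3
    return cubic_metres * CH4_DENSITY_KG_M3 / 1e9  # kg -> million tonnes

low = bcf_to_megatonnes(450)   # roughly 9 Mt/yr
high = bcf_to_megatonnes(650)  # roughly 13 Mt/yr
```

So the US landfill figure corresponds to roughly 9-13 million tonnes of methane per year, consistent in magnitude with the 30-70 million tonne global estimate quoted above.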

Landfill gas generation: how do these bugs do their work?

Landfills do not start out as a friendly environment for the organisms that produce methane, and researchers long had a hard time figuring out how methane production gets under way. New research from North Carolina State University points to one species of microbe that paves the way for the other methane producers: the starting bug has been found. That opens the door to engineering better landfills with better production management; one can imagine a landfill with real economic prospects beyond getting the trash out of sight. The NCSU researchers found that an anaerobic microbe called Methanosarcina barkeri appears to be the key organism. The steps involved in the formation of landfill gas are shown in the diagram:

Phase 1: oxygen disappears, and nitrogen declines.

Phase 2: hydrogen is produced and CO2 production increases rapidly.

Phase 3: methane production rises and CO2 production decreases.

Phase 4: methane content can rise to about 60%.

Phases 1-3 typically last for 5-7 years.

Phase 4 can continue for decades, with the rate of decline depending on the landfill's content.
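The slow decline in phase 4 is often modeled with first-order decay, in the spirit of EPA's LandGEM model. The sketch below uses illustrative values for the decay rate k and the methane generation potential L0; real values are site-specific:

```python
import math

# Simplified first-order decay sketch of phase-4 gas generation for a
# single year's waste deposit (one "cohort"). k (1/yr) is the decay rate
# and L0 (m^3 CH4 per tonne of waste) the methane potential; the defaults
# below are illustrative assumptions, not measured values.

def methane_rate(tonnes_waste, years_since_deposit, k=0.05, L0=100.0):
    """Annual CH4 generation (m^3/yr) from one waste cohort,
    t years after deposit, assuming first-order decay."""
    return k * L0 * tonnes_waste * math.exp(-k * years_since_deposit)

year1 = methane_rate(10_000, 1)    # early: high generation rate
year30 = methane_rate(10_000, 30)  # decades later: substantially decayed
```

A full landfill model sums this expression over every year's deposits, which is what produces the long, slowly declining phase-4 tail.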

Installation of landfill gas collection system

A number of wells are drilled and interconnected with a pipeline system. Gas is guided from the wells to a facility, where it is flared or burnt to generate electricity. A biogas engine achieves 30-40% efficiency. Landfills often lack access to the grid, and there is usually no use for the heat.
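A back-of-envelope estimate of the electric output follows from the gas flow, the methane fraction, and the 30-40% engine efficiency quoted above; the flow and methane fraction below are illustrative assumptions:

```python
# Back-of-envelope sketch: electric output of a biogas engine fed by
# landfill gas. Assumes a methane lower heating value of ~9.97 kWh per
# normal cubic metre; the example flow and 50% methane are illustrative.
CH4_LHV_KWH_PER_NM3 = 9.97

def engine_power_kw(gas_flow_nm3_h, ch4_fraction=0.5, efficiency=0.35):
    """Electric power (kW) for a landfill gas flow (Nm^3/h), a methane
    fraction, and an engine efficiency in the 30-40% range."""
    return gas_flow_nm3_h * ch4_fraction * CH4_LHV_KWH_PER_NM3 * efficiency

power = engine_power_kw(500)  # a typical small landfill flow
```

At a few hundred Nm3/h this comes out below 1 MW, which is why grid access and heat use so often decide whether an engine is worthwhile at all.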

The alternative: make bio-LNG instead and transport the bio-LNG for use in heavy-duty vehicles and ships or applications where you can use all electricity and heat.

Bio-LNG: what is it?

Bio-LNG is liquefied bio-methane (also called LBM). It is made from biogas, which is produced by anaerobic digestion: all organic waste can rot and produce biogas, with bacteria doing the work. Biogas is therefore the cheapest and cleanest biofuel that can be generated without competing with food or land use. For the first time there is a biofuel, bio-LNG, of better quality than its fossil counterpart.

The bio-LNG production process

Landfill gas is produced by anaerobic fermentation in the landfill. The aim is to produce a constant flow of biogas with a high methane content. The biogas must be upgraded, i.e., H2S, CO2 and trace components must be removed; in landfill gas this also includes siloxanes, nitrogen and chlorine/fluorine compounds. The bio-methane must then be purified (maximum 25/50 ppm CO2, no water) to prepare it for liquefaction. A cold box liquefies the pure bio-methane into bio-LNG.

Small-scale bio-LNG production using smarter methods

• Use upgrading modules that do not cost much energy.

• Membranes that can upgrade to 98-99.5% methane are suitable.

• Use a method for advanced upgrading that is low on energy demand.

• Use a fluid/solid that is allowed to be dumped at the site.

• Use cold boxes that are easy to install and low on power demand.

• Use LNG tank trucks as storage and distribution units.

• See if co-produced CO2 can be sold and used in greenhouses or elsewhere.

• Look carefully at the history and present status of the landfill.

What was holding back more projects?

Most flows of landfill gas are small (hundreds of Nm3/hour), so the economy of scale is generally unfavorable. Upgrading and liquefaction technology has evolved, but the investments for such small flows cannot be paid back, even over decades.
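The economy-of-scale argument can be made concrete with a toy payback calculation. Every figure below (capital cost, operating cost, LNG yield per Nm3 of gas, operating hours, LNG price) is a hypothetical placeholder, not data from the companies involved; only the 3-5-fold flow increase comes from the article:

```python
# Toy payback sketch for a containerized bio-LNG unit. All numbers are
# illustrative assumptions for demonstration only.

def payback_years(gas_flow_nm3_h, capex_eur, opex_eur_per_year,
                  lng_price_eur_per_ton=400.0,
                  kg_lng_per_nm3_gas=0.35, hours_per_year=8000):
    """Simple payback time: capex divided by net annual revenue."""
    tons_lng = gas_flow_nm3_h * kg_lng_per_nm3_gas * hours_per_year / 1000.0
    net = tons_lng * lng_price_eur_per_ton - opex_eur_per_year
    if net <= 0:
        return float("inf")  # revenue never covers operating cost
    return capex_eur / net

base = payback_years(200, 4_000_000, 100_000)        # small flow: decades
fracked = payback_years(200 * 4, 4_000_000, 100_000) # 4x flow: a few years
```

Under these made-up numbers a 200 Nm3/h flow takes decades to pay back while a four-fold flow pays back in around five years, which is the shape of the argument the article is making.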

Now there is a solution: enhanced gas recovery by hydraulic fracturing. Holland Innovation Team and Fracrite Environmental Ltd. (Canada) have developed a method to increase gas extraction from a landfill 3-5 times.

Hydraulic fracturing increases landfill gas yield and therefore economy of scale for bio-LNG production

The method consists of a set of drillings in which, at certain depths, the landfill is hydraulically broken: a set of circular horizontal fractures is created from the well at preferred depths, and sand or other materials are injected into the fractures. Gas gathers from below in the created interlayer and flows into the drilled well, creating a “guiding” circuit for landfill gas. With a 3-5-fold quantity of gas, economy of scale for bio-LNG production is reached rapidly. Considering the multitude of landfills worldwide, this hydraulic fracturing method, combined with containerized upgrading and liquefaction units, offers huge potential. The method is cost-effective, especially at virgin landfills, but also at landfills with declining amounts of landfill gas.

Landfill gas fracturing pilot (2009).

• Landfill operational from 1961-2005

• 3 gas turbines, only 1 or 2 in operation at any time due to low gas extraction rates

• Only 12 of 60 landfill gas extraction wells still producing methane

• Objective of pilot was to assess whether fracturing would enhance methane extraction rates

Field program and preliminary results

Two new wells were drilled into municipal wastes and fractured (FW60, FW61), with sand fractures at 6, 8, 10 and 12 m depth in the wastes and a fracture radius of 6 m. Balance gases are believed to be due to oxygenation effects during leachate and groundwater pumping.

Note: this is entirely different from deep fracking in the case of shale gas!

Conceptual Bioreactor Design

The conceptual design is shown in the figures. There are anaerobic conditions below the groundwater table, but permeability decreases because of compaction of the waste. Permeability increases after fracking, and so do the quantities of landfill gas and leachate.

Injecting the leachate above the groundwater table introduces anaerobic conditions in an area where oxygen previously prevailed and prevented landfill gas formation.

This can also be done systematically, so that all extracted leachate is disposed of in the shallow surrounding wells above the groundwater table.

One well below the groundwater table is fracked, and the leachate is injected at the corners of a square around the deeper well. Sewage sludge and bacteria can be added to increase the yield further.

Improving the business case further

A 3-5-fold increase in biogas flow will improve the business case through greater economy of scale. The method will also improve landfill quality and prepare the landfill for other uses.

When the landfill gas stream dries up after 5 years or so, the next landfill can be served by relocating the containerized modules (cold boxes and upgrading modules). The company is upgrading with a new method developed in-house and improving landfill gas yield by fracking with smart materials. EC recommendations to count landfill gas four-fold toward renewable fuel targets, together with the superior footprint of bio-LNG production from landfills, are beneficial for immediate start-ups.

Conclusions and recommendations

Landfills emit landfill gas. Landfill gas is a good source for production of bio-LNG. Upgrading and liquefaction techniques are developing fast and decreasing in price. Hydraulic fracturing can improve landfill gas yield such that economy of scale is reached sooner. Hydraulic fracturing can also introduce anaerobic conditions by injecting leachate, sewage sludge and bacteria above the groundwater table. The concept is optimized to extract most of the landfill gas in a period of five years and upgrade and liquefy this to bio-LNG in containerized modules.

Holland Innovation Team and Fracrite aim at a production price of less than €0.40 per kilo (€400/ton) of bio-LNG, which is currently equivalent to fossil LNG prices in Europe and considerably lower than LNG prices in Asia, with a payback time of only a few years.

(Source: Holland Innovation Team)

 

Seawater desalination is a technology that provides drinking water for millions of people around the world. With increasing industrialization, growing water usage and little recycling or reuse, the demand for fresh water is rising at an unprecedented rate. Industries such as power plants use the bulk of their water for cooling, and chemical industries use water in their processes. Agriculture is also a major user of water, and countries like India exploit groundwater for this purpose. To supplement fresh water, Governments and industries in many parts of the world are now turning to desalinated seawater as a potential source. Desalination of seawater is an expensive option due to its large energy usage. Still, with frequent monsoon failures and weather patterns changing under global warming, seawater desalination is becoming a practical source of fresh water despite its cost and environmental issues.

Seawater desalination technology has not undergone any major changes during the past three decades. Reverse osmosis is currently the most sought-after desalination technology, thanks to increasing membrane efficiencies and energy-recovery devices. In spite of these improvements, the biggest problem with desalination technologies is still the rate of recovery of fresh water. The best recovery in SWRO plants is about 50% of the input water. Higher recoveries create other problems such as scaling, higher energy requirements and O&M issues, and many suppliers prefer to restrict recovery to 35%, especially when they must guarantee the life of the membranes and the plant.
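The scaling problem at higher recovery follows from a simple mass balance: if the permeate is nearly salt-free, all the feed salt ends up in a shrinking reject stream. A minimal sketch, assuming total salt rejection:

```python
# Mass balance on an SWRO stage, assuming the permeate is essentially
# salt-free (total rejection). All the feed salt then leaves in the
# reject brine, whose volume shrinks as recovery rises.
SEAWATER_TDS_MG_L = 35_000

def reject_tds(recovery, feed_tds=SEAWATER_TDS_MG_L):
    """TDS (mg/l) of the reject brine at a given water recovery ratio."""
    return feed_tds / (1.0 - recovery)

at_35_percent = reject_tds(0.35)  # ~54,000 mg/l
at_50_percent = reject_tds(0.50)  # 70,000 mg/l
```

Doubling the feed concentration in the reject stream, as 50% recovery does, pushes sparingly soluble salts toward their scaling limits, which is why suppliers prefer the more conservative 35% figure.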

Seawater is essentially fresh water containing large quantities of dissolved salts; the concentration of total dissolved salts in seawater is about 35,000 mg/l. Chemical industries such as caustic soda and soda ash plants use salt as their basic raw material. Salt is the backbone of the chemical industry, and a number of downstream chemicals are manufactured from it. Seawater is the major source of salt, and most of these chemical plants make their own salt by solar evaporation of seawater using traditional salt pans. A large area of land is required for this purpose, solar evaporation is a slow process that takes months to convert seawater into salt, and the work is labor-intensive under harsh conditions.

The author of this article has developed an innovative technology to generate fresh water as well as salt brine suitable for caustic soda and soda ash production. With this novel process, one can recover almost 70% fresh water, against only about 40% with the conventional SWRO process, and simultaneously recover about 7-9% saturated brine. Chemical plants that produce salt by solar evaporation are unable to meet their demand or expand production for lack of salt. The price of salt is steadily increasing due to the supply-demand gap and to weather uncertainties caused by global warming. This raises the cost of production, and many small and medium producers of these chemicals cannot compete with large industries. Moreover, countries like Australia, with vast arid land, can produce large quantities of salt competitively using mechanized processes; Australia currently exports salt to countries like Japan, while countries like India and China cannot compete in the international market with their age-old, manually worked salt pans. In solar evaporation the water is simply evaporated away.

Currently these chemical industries use solar salt, which carries a number of impurities and requires an elaborate purification process. Moreover, the salt can be used as a raw material only in the form of saturated brine free of impurities: any impurity is detrimental to the electrolytic process in which the salt brine is converted into caustic soda and soda ash. Chemical plants use deionized water to dissolve solar salt into saturated brine and then purify it with a number of chemicals before it can serve as a raw material for caustic soda or soda ash production. Such purified brine costs many times more than the raw salt, which in turn increases the cost of the chemicals produced.

In this new process, seawater is pumped into the system and separated into 70% fresh water meeting WHO drinking-water specifications and 7-10% saturated pure brine suitable for the production of caustic soda and soda ash. These chemical plants also use large quantities of process water for various purposes and can use the 70% fresh water stream in their processes. Only 15-20% of the seawater is discharged back into the sea, compared with about 65% toxic discharge from conventional desalination plants. This new technology is efficient and environmentally friendly and generates value-added brine as a by-product: a win-win for industry and the environment. The technology has recently been patented and is available for licensing on a non-exclusive or exclusive basis. Its advantage is that any caustic soda or soda ash plant located near the seashore can produce its salt brine directly from seawater, without stockpiling solar salt for months, transporting it over long distances, or importing it from overseas.
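The stream split described above can be checked with a rough salt balance. The 70% fresh water and ~8% brine figures come from the article; the brine saturation concentration is an assumed handbook value, and the process details remain proprietary:

```python
# Rough stream and salt balance for the process described above, per 100
# volume units of seawater. The NaCl saturation figure (~317 g/l) is an
# assumed handbook value, not a figure from the article.
FEED_TDS_G_L = 35.0
SATURATED_BRINE_G_L = 317.0

def stream_balance(feed_l=100.0, fresh_frac=0.70, brine_frac=0.08):
    """Return (fresh_l, brine_l, discharge_l, discharge_tds_g_l),
    assuming the fresh water stream carries negligible salt."""
    fresh = feed_l * fresh_frac
    brine = feed_l * brine_frac
    discharge = feed_l - fresh - brine
    salt_in = feed_l * FEED_TDS_G_L
    salt_in_brine = brine * SATURATED_BRINE_G_L
    discharge_tds = (salt_in - salt_in_brine) / discharge
    return fresh, brine, discharge, discharge_tds

fresh, brine, discharge, discharge_tds = stream_balance()
```

Per 100 litres of feed this gives roughly 70 l of fresh water, 8 l of saturated brine and 22 l of return stream; the balance also shows where the remainder of the feed salt must go, which is worth bearing in mind when comparing discharge volumes.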

Governments and industries can join together to set up such plants, with Governments buying water for distribution and industries using the salt brine as raw material for chemical production. Setting up a desalination plant solely to supply drinking water to the public is not a smart way to cut the cost of drinking water. For example, the Victorian Government in Australia set up a large desalination plant to supply drinking water. The plant was built by a foreign company on a BOOT (build, own, operate, transfer) basis, with water sold to the Government on a ‘take or pay’ contract. The water storage level in the catchment area is currently nearly 80% of capacity, and the Government is unlikely to need desalinated water for some years to come; yet it is legally bound to buy the water, or pay the contracted value, even when it does not need it. Governments can avoid such contracts in the future by partnering with industries that require salt brine around the clock throughout the year, thus mitigating the risk involved in expensive legal contracts.

 


The share of renewable energy is steadily increasing around the world, but storing such an intermittent energy source and using it when needed has been a challenge. In fact, energy storage makes up a significant part of the cost of any renewable energy technology. Many storage technologies are now commercially available, but choosing the right one has always been difficult. In this article we consider four types of storage technology. The California Energy Commission conducted economic and environmental analyses of four energy storage options for a wind energy project: (1) lead acid batteries, (2) zinc bromine (flow) batteries, (3) a hydrogen electrolyzer and fuel cell storage system, and (4) a hydrogen storage option in which the hydrogen was used to fuel hydrogen-powered vehicles. Their conclusions were:

“Analysis with NREL’s (National Renewable Energy Laboratory) HOMER model showed that, in most cases, energy storage systems were not well used until higher levels of wind penetration were modeled (i.e., 18% penetration in Southern California in 2020). In our scenarios, hydrogen storage became more cost-effective than battery storage at higher levels of wind power production, and using the hydrogen to refuel vehicles was more economically attractive than converting the hydrogen to electricity. The overall value proposition for energy storage used in conjunction with intermittent renewable power sources depends on multiple factors. Our initial qualitative assessment found the various energy storage systems to be environmentally benign, except for emissions from the manufacture of some battery materials.

However, energy storage entails varying economic costs and environmental impacts depending on the specific location and type of generation involved, the energy storage technology used, and the other potential benefits that energy storage systems can provide (e.g., helping to optimize transmission and distribution systems, local power quality support, potential provision of spinning reserves and grid frequency regulation, etc.).”

Key Assumptions

 

Key assumptions guiding this analysis include the following:

• Wind power will expand in California under the statewide RPS program to a level of approximately 10% of total energy provided in 2010 and 20% by 2020, with most of this expansion in Southern California.

• Costs of flow battery systems are assumed to decline somewhat through 2020, and costs of hydrogen technologies (electrolyzers, fuel cell systems, and storage systems) are assumed to decline significantly through 2020.

• In the case where hydrogen is produced, stored, and then reconverted to electricity using fuel cell systems, we assume that the hydrogen can be safely stored in modified wind turbine towers at relatively low pressure, at lower cost than more conventional, higher-pressure storage.

• In the case where hydrogen is produced and sold into transportation markets, we assume that there is demand for hydrogen for vehicles in 2010 and 2020, and that the hydrogen is produced at the refueling station using the electricity produced from wind farms (in other words, we assume that transmission capacity is available for this when needed).

Key Project Findings

 

Key findings from the HOMER model projections and analysis include the following:

• Energy storage systems deployed in the context of greater wind power development were not particularly well utilized (based on the availability of “excess” off-peak electricity from wind power), especially in the 2010 time frame (which assumed 10% wind penetration statewide), but were better utilized (up to 1,600 hours of operation per year in some cases) with the greater (20%) wind penetration levels assumed for 2020.

• The levelized costs of electricity from these energy storage systems ranged from a low of $0.41 per kWh (near the marginal cost of generation during peak demand times) to many dollars per kWh (in cases where the storage was not well utilized). This suggests that for these systems to be economically attractive, it may be necessary to optimize their output to coincide with peak demand periods, and to identify additional value streams from their use (e.g., transmission and distribution system optimization, provision of power quality and grid ancillary services, etc.).

• At low levels of wind penetration (1%-2%), the electrolyzer/fuel cell system was either inoperable or uneconomical (i.e., either no electricity was supplied by the energy storage system or the electricity provided carried a high cost per MWh).

• In the 2010 scenarios, the flow battery system delivered the lowest cost per unit of energy stored and delivered.

• At higher levels of wind penetration, the hydrogen storage systems became more economical, such that with the wind penetration levels in 2020 (18% from Southern California), the hydrogen systems delivered the least costly energy storage.

• Projected decreases in capital costs and maintenance requirements, along with a more durable fuel cell, allowed the electrolyzer/fuel cell to gain a significant cost advantage over the battery systems in 2020.

• Sizing the electrolyzer/fuel cell system to match the flow battery system’s relatively high instantaneous power output was found to increase the competitiveness of this system in low energy storage scenarios (2010, and Northern California in 2020), but in scenarios with higher levels of energy storage (Southern California in 2020), the electrolyzer/fuel cell system sized to match the flow battery output became less competitive.

• In our scenarios, the hydrogen production case was more economical than the electrolyzer/fuel cell case with the same amount of electricity consumed (i.e., hydrogen production delivered greater revenue from hydrogen sales than the electrolyzer/fuel cell avoided in electricity costs, once the process efficiencies are considered).

• Furthermore, the hydrogen production system with a higher-capacity power converter and electrolyzer (sized to match the flow battery converter) was more cost-effective than the lower-capacity system that was sized to match the output of the solid-state battery. This is due to economies of scale, which were found to produce lower-cost hydrogen in all cases.

• In general, the energy storage systems themselves are fairly benign from an environmental perspective, with the exception of emissions from the manufacture of certain components (such as nickel, lead, cadmium, and vanadium for batteries). This is particularly true outside of the U.S., where battery plant emissions are less tightly controlled and potential contamination from improper disposal of these and other materials is more likely.

The overall value proposition for energy storage systems used in conjunction with intermittent renewable energy systems depends on diverse factors:

• The interaction of generation and storage system characteristics with grid and energy resource conditions at a particular location.

• The potential use of energy storage for multiple purposes in addition to improving the dependability of intermittent renewables (e.g., peak/off-peak power price arbitrage, helping to optimize the transmission and distribution infrastructure, load-leveling the grid in general, helping to mitigate power quality issues, etc.).

• The degree of future progress in improving forecasting techniques and reducing prediction errors for intermittent renewable energy systems.

• Electricity market design and rules for compensating renewable energy systems for their output.
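The study's spread from $0.41/kWh to “many dollars per kWh” is driven mostly by utilization, which a minimal levelized-cost sketch makes visible. The capital and O&M figures below are illustrative assumptions, not the study's actual parameters:

```python
# Minimal levelized-cost-of-storage sketch: the same plant looks cheap or
# absurdly expensive depending on how much energy it actually delivers.
# All inputs (capex, O&M, discount rate, lifetime) are illustrative.

def crf(rate, years):
    """Capital recovery factor: annualizes an upfront cost over `years`
    at discount rate `rate`."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def levelized_cost_per_kwh(capex, om_per_year, kwh_delivered_per_year,
                           rate=0.07, years=15):
    annual_cost = capex * crf(rate, years) + om_per_year
    return annual_cost / kwh_delivered_per_year

well_used = levelized_cost_per_kwh(1_000_000, 30_000, 1_500_000)
idle = levelized_cost_per_kwh(1_000_000, 30_000, 100_000)
```

With identical hardware, the well-used case comes in below $0.15/kWh while the idle case exceeds a dollar per kWh, which is exactly the pattern the HOMER results show between high and low wind-penetration scenarios.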

Conclusions

 

“This study was intended to compare the characteristics of several technologies for providing energy storage for utility grids—in a general sense and also specifically for battery and hydrogen storage systems—in the context of greater wind power development in California. While more detailed site-specific studies will be required to draw firm conclusions, we believe these energy storage systems have relatively limited application potential at present but may become of greater interest over the next several years, particularly for California and other areas that are experiencing significant growth in wind power and other intermittent renewables.

Based on this study and others in the technical literature, we see a larger potential need for energy storage system services in the 2015-2020 time frame, when growth in renewably produced electricity is expected to reach levels of 20%-30% of electrical energy supplied. Depending on the success of improved wind forecasting techniques and electricity market designs, the role for energy storage in the modern electricity grids of the future may be significant. We suggest further and more comprehensive assessments of multiple energy storage technologies for comparison purposes, and additional site- and technology-specific project assessments to gain a better sense of the actual value propositions for these technologies in the California energy system.

 

This project has helped to meet program objectives and to benefit California in the following ways:

Providing environmentally sound electricity. Energy storage systems have the potential to make environmentally attractive renewable energy systems more competitive by improving their performance and mitigating some of the technical issues associated with renewable energy/utility grid integration. This project has identified the potential costs associated with the use of various energy storage technologies as a step toward understanding the overall value proposition for energy storage as a means to help enable further development of wind power (and potentially other intermittent renewable resources as well).

Providing reliable electricity. The integration of energy storage with renewable energy sources can help to maintain grid stability and adequate reserve margins, thereby contributing to the overall reliability of the electricity grid. This study identified the potential costs of integrating various types of energy storage with wind power, against which the value of greater reliability can be assessed along with other potential benefits.

Providing affordable electricity. Upward pressure on natural gas prices, partly as a function of increased demand, has significantly contributed to higher electricity prices in California and other states. Diversification of electricity supplies with relatively low-cost sources, such as wind power, can provide a hedge against further natural gas price increases. Higher penetration of these other (non-natural-gas-based) electricity sources, potentially enabled by the use of energy storage, can reduce the risks of future electricity price increases.”

(Source: California Energy Commission, prepared by the University of California, Berkeley.)

The Arctic ice cover has been steadily shrinking, opening new polar shipping routes. Recently a Norwegian LNG carrier sailed to Japan through Russian waters, marking the beginning of a new polar shipping route. A short documentary film recorded the disappearance of almost the entire Aral Sea from the map, due to evaporation after dams built by Russian authorities restricted the flow of rivers into it. These dramatic events are happening right in front of our eyes. Yet many Governments and people around the world still question whether global warming is real and man-made. People do not accept the science of global warming because it causes them great inconvenience and embarrasses Governments; they do not want to face the reality, preferring to postpone it for another day. That is what is happening with the super powers and industrialized countries of the world. But how long can they sustain such skepticism and postpone the urgent actions necessary to save future generations of mankind?

• Arctic sea ice is projected to decline dramatically over the 21st century, with little late summer sea ice remaining by the year 2100.

• The simulated 21st century Arctic sea ice decline is not smooth, but has periods of large and small changes.

• The Arctic region responds sensitively to past and future global climate forcing, such as changes in atmospheric greenhouse gas levels. Its surface air temperature is projected to warm at a rate about twice as fast as the global average.

Attached are sea ice concentrations simulated by GFDL’s CM2.1 global coupled climate model, averaged over August, September and October (the months when Arctic sea ice concentrations are generally at their minimum). Three years (1885, 1985 & 2085) are shown to illustrate the model-simulated trend. A dramatic reduction of summertime sea ice is projected, with the rate of decrease being greatest during the 21st century. The colors range from dark blue (ice-free) to white (100% sea ice cover).

“Satellite observations show that Arctic sea ice extent has declined over the past three decades [e.g., NOAA magazine, 2006]. Global climate model experiments, such as those conducted at NOAA’s Geophysical Fluid Dynamics Laboratory (GFDL), project this downward trend to continue and perhaps accelerate during the 21st century.

The Arctic is a region that is projected to warm at about twice the rate of the global average [Winton, 2006a] – a phenomenon sometimes called “Arctic amplification”. As Arctic temperatures rise, sea ice melts—a change that in turn affects other aspects of global climate.

While beyond the scope of GFDL’s climate model simulations, other research suggests that Arctic sea ice changes can impact a broad range of factors — from altering key elements of the Arctic biosphere (plants and animals, marine and terrestrial, including polar bears and fish), to opening polar shipping routes, to shifting commercial fishing patterns, etc.

An Ice-Free Arctic in Summer

The three attached panels are snapshots of how late summer Northern Hemisphere sea ice concentrations vary in time in a GFDL CM2.1 climate model simulation. The figures depict sea ice concentration – a measure of how much of the ocean area is covered by sea ice, and the climate model variable that is most similar to what a satellite observes.

By the late 21st century, the GFDL computer model experiments project that the Arctic becomes almost ice-free during the late summer. But during the long Arctic winters (not shown) the sea ice grows back, though thinner than is simulated for the 20th century. The rate at which the modeled 21st century Arctic warming and sea ice melting occurs is rapid compared to that seen in historical observations. Abrupt Arctic changes are of particular concern for human and ecosystem adaptations and are a subject of much current research [Winton, 2006b].

The modeled summertime Arctic sea ice extent (the size of the area covered by sea ice) does not vary smoothly in time; a good deal of year-to-year variability is superimposed on the downward trend. This can be seen in the graph to the right and also in animations found at www.gfdl.noaa.gov/research/climate/highlights.

By the end of the 21st century, the modeled summer sea ice extent usually is less than 20% of that simulated for 1981 to 2000. The Arctic sea ice results shown here are not unique to the GFDL climate model. Generally similar results are produced by computer models developed at several other international climate modeling centers. Though some uncertainties in model projections of future climate remain, results such as these, taken together with observations that document late 20th century Arctic sea ice shrinkage, make the Arctic a region that will continue to be studied and watched closely as atmospheric greenhouse gas levels increase.

Climate implications of shrinking summer sea ice

Melting sea ice can influence the climate through a process known as the ice-albedo feedback. Much of the sunlight reflected by sea ice returns to space and is unavailable to heat the climate system. As the sea ice melts, the surface darkens and absorbs more of this energy. This, in turn, can lead to greater melting. This is referred to as a “positive feedback loop” because an initial change (sea ice melting) triggers other responses in the system that eventually act to enhance the original change (inducing more sea ice melting).

At GFDL, research has focused on the role of the ice-albedo feedback in enhancing the simulated Arctic warming and on the potential for this positive feedback loop to lead to abrupt changes [Winton, 2006a]. A somewhat complex picture has emerged that shows the ice-albedo feedback as a contributor, but not necessarily the dominant factor, in determining why modeled Arctic surface air temperatures warm roughly twice as fast as the global average. It also has been found that, for the range of temperature increases likely to occur in the 21st century, the Arctic ice-albedo feedback adjusts smoothly as the model’s ice declines, by reducing the ice cover at progressively earlier times in the sunlit season. This smooth adjustment maintains a fairly constant amplification of Arctic temperature change relative to global average warming.

The details of how Arctic feedback processes act in climate models at various modeling centers differ, and so analysis and computer model development work continues to better understand, and to reduce, uncertainties in Arctic climate change simulations.”
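The “positive feedback loop” and the “fairly constant amplification” described above can be illustrated with a toy calculation. This is NOT GFDL’s model; the forcing and gain values below are invented purely for illustration, to show why a bounded feedback amplifies warming without running away.

```python
# Toy sketch of a positive ice-albedo feedback (illustrative numbers only).

def equilibrium_warming(forcing, feedback_gain, steps=60):
    """Iterate dT = forcing + gain*dT until it settles.

    With 0 <= gain < 1 the amplification stays bounded: the loop
    converges to forcing / (1 - gain) rather than running away,
    mirroring the smooth, constant amplification noted in the text.
    """
    dT = 0.0
    for _ in range(steps):
        dT = forcing + feedback_gain * dT
    return dT

no_feedback = equilibrium_warming(1.0, 0.0)  # 1.0 degree of warming
with_albedo = equilibrium_warming(1.0, 0.5)  # ~2.0 degrees: amplified
```

With a hypothetical gain of 0.5, the initial 1 degree of warming is amplified to about 2 degrees; only a gain of 1 or more would produce the runaway, abrupt behavior that the research discussed above seeks to rule out.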

While many scientists are alarmed by the widening expanse of open water in the Arctic, blaming it on global warming, shippers see a new international route. The MV Nordic Barents is lugging 40,000 tonnes of iron ore from Norway to China on a shortcut through melting ice – and is making a little history in the process. It is the first non-Russian commercial vessel to attempt a non-stop crossing of a route that skirts the receding Arctic ice cap.

Business Times, Singapore report (6 September 2010):

The MV Nordic Barents is lugging 40,000 tonnes of iron ore from Norway to China on an Arctic Ocean shortcut through melting ice – and is making a little history in the process.

Steaming east along Russia’s desolate northern coast, the ship departed on Saturday as the first non-Russian commercial vessel to attempt a non-stop crossing of a route that skirts the receding Arctic ice cap.

‘We’re pretty much going over the top,’ said John Sanderson, the Australian CEO of the Norwegian mine where the iron ore comes from.

By using the northern route from Europe to Asia, the Nordic Barents could save eight days and 5,000 nautical miles of travel thought to be worth hundreds of thousands of dollars to the owners of its cargo.

While many scientists are alarmed by the widening expanse of open water that the ship will traverse, blaming it on global warming, shippers see a new international route.

Sanderson’s ASX-listed Northern Iron Ltd has sent 15 ships to China since it began mining in the northern Norwegian town of Kirkenes last October. All steamed south, then east through the Suez Canal or around the Cape of Good Hope.

To reach Chinese steel mills hungry for ore, they had to brave pirates in the Indian Ocean.

The Arctic route is no picnic either. On Saturday the polar ice sheet remained almost as big as the US mainland. But over the summer it has shrunk about as far from the Russian coast as it did during the biggest Arctic melt on record, in 2007, according to the Nansen Environmental and Remote Sensing Center.

And the Russians are waking up to the business potential of a route that was mostly reserved for domestic commercial vessels in the past.

‘Suddenly there is an opening that gives this part of the world an advantage,’ said Felix H Tschudi, whose shipping company is Northern Iron’s largest shareholder.

Willy Oestreng, chairman of research group Ocean Futures, called the trip of the Nordic Barents ‘historic’.

‘The western world is starting to show an interest and a capability to use that route,’ he said.

Two days after Russia and Norway agreed last April to settle a 40-year-old dispute over economic zones in the Barents Sea, government and business leaders of the two countries met in Kirkenes to sweep away hurdles to international shipping.

Russian law still requires icebreaker escort even where ice danger is small, due to a lack of onshore mechanical or medical support. But fees and rules are starting to loosen.

‘Russian companies and Russian authorities are now ready to assist,’ said Mikhail Belkin, assistant general manager of the state-owned Rosatomflot ice breaking fleet.

Lots of Russian vessels have plied the passage, and two German ships traversed it last year with small cargoes delivered to Russian ports. But the Nordic Barents, an ice-class Danish bulk carrier chartered by Tschudi, is the first non-Russian ship with permission to pass without stopping.

Rosatomflot has assigned two 75,000-horsepower icebreakers to the vessel for about 10 days of the three-week voyage.

Tschudi won’t say how much Rosatomflot is charging but praised it as ‘cooperative, service-minded and pragmatic.’

‘Today the route is basically competitive with the Suez Canal, and we can subtract the piracy risk,’ he said.

Excluding icebreaking fees, a bulk ship that takes the Arctic route from Hamburg to Yokohama can save more than US$200,000 in fuel and canal expenses, Mr. Oestreng said. — Reuters.
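The savings quoted in the report can be roughly sanity-checked. The eight days saved and the US$200,000 figure come from the article; the fuel burn, bunker price, and canal toll below are assumed round numbers for illustration only, not figures from the report.

```python
# Back-of-the-envelope check of the Arctic-route savings (assumed inputs).

days_saved = 8                  # from the article
fuel_burn_tonnes_per_day = 30   # assumed for a ~40,000-tonne bulk carrier
bunker_price_usd = 450          # assumed price per tonne of fuel
canal_toll_usd = 100_000        # assumed Suez Canal fee avoided

fuel_saving = days_saved * fuel_burn_tonnes_per_day * bunker_price_usd
total_saving = fuel_saving + canal_toll_usd
# 8 * 30 * 450 = 108,000 in fuel alone; ~208,000 in total -- the same
# order of magnitude as the "more than US$200,000" cited above.
```

Even with these rough assumptions, the avoided canal fees and eight days of fuel plausibly reach the six-figure savings the shippers describe, before subtracting icebreaker escort fees.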

Disappearance of the Aral Sea from the map.

“In the 1960s, the Soviet Union undertook a major water diversion project on the arid plains of Kazakhstan, Uzbekistan, and Turkmenistan. The region’s two major rivers, fed by snowmelt and precipitation in faraway mountains, were used to transform the desert into farms for cotton and other crops. Before the project, the Syr Darya and the Amu Darya rivers flowed down from the mountains, cut northwest through the Kyzylkum Desert, and finally pooled together in the lowest part of the basin. The lake they made, the Aral Sea, was once the fourth largest in the world.

Although irrigation made the desert bloom, it devastated the Aral Sea. This series of images from the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA’s Terra satellite documents the changes. At the start of the series in 2000, the lake was already a fraction of its 1960 extent (black line). The Northern Aral Sea (sometimes called the Small Aral Sea) had separated from the Southern (Large) Aral Sea. The Southern Aral Sea had split into eastern and western lobes that remained tenuously connected at both ends.

By 2001, the southern connection had been severed, and the shallower eastern part retreated rapidly over the next several years. Especially large retreats in the eastern lobe of the Southern Sea appear to have occurred between 2005 and 2009, when drought limited and then cut off the flow of the Amu Darya. Water levels then fluctuated annually between 2009 and 2012 in alternately dry and wet years.

As the lake dried up, fisheries and the communities that depended on them collapsed. The increasingly salty water became polluted with fertilizer and pesticides. The blowing dust from the exposed lakebed, contaminated with agricultural chemicals, became a public health hazard. The salty dust blew off the lakebed and settled onto fields, degrading the soil. Croplands had to be flushed with larger and larger volumes of river water. The loss of the moderating influence of such a large body of water made winters colder and summers hotter and drier.

In a last-ditch effort to save some of the lake, Kazakhstan built a dam between the northern and southern parts of the Aral Sea. Completed in 2005, the dam was basically a death sentence for the southern Aral Sea, which was judged to be beyond saving. All of the water flowing into the desert basin from the Syr Darya now stays in the Northern Aral Sea. Between 2005 and 2006, the water levels in that part of the lake rebounded significantly and very small increases are visible throughout the rest of the time period. The differences in water color are due to changes in sediment.”

The recent debates between the presidential nominees in the US election have revealed their respective policy positions on an energy-independent America. Each of them has articulated how he will increase oil and gas production to make America energy independent, which will incidentally also create a number of jobs in an ailing economy. Each will first spend a billion dollars driving his message to the voting public. Once elected, they will aggressively explore oil and gas to make America energy independent, while exploring solar and wind energy potential simultaneously to bridge any shortfall. Their policies seem unconcerned with global warming and the impact of GHG emissions; rather, they aggressively pursue an energy-independent America built on unabated future GHG emissions. Does it mean an ‘energy-independent America’ will spell doom for the world, including the US?

The best options for America to become energy independent are to focus on the energy efficiency of existing technologies and systems, to combine renewables with the fossil-fuel energy mix, to deploy base-load renewable power and storage technologies, and to substitute Hydrogen generated from renewable energy sources for Gasoline. Future investment should be based on sustainable renewable energy sources rather than fossil fuels; although initially expensive, with long payback periods, they will eventually meet the energy need in a sustainable way. But the current financial and unemployment situation in the US will push the new president to increase conventional and unconventional oil and gas production rather than renewable energy production. The net result of such policies will be enhanced GHG emissions and an acceleration of global warming. Yet the energy projections in the U.S. Energy Information Administration’s (EIA’s) Annual Energy Outlook 2012 (AEO2012) project reduced GHG emissions.

According to Annual Energy Outlook 2012 report:

“The projections in the U.S. Energy Information Administration’s (EIA’s) Annual Energy Outlook 2012 (AEO2012) focus on the factors that shape the U.S. energy system over the long-term. Under the assumption that current laws and regulations remain unchanged throughout the projections, the AEO2012 Reference case provides the basis for examination and discussion of energy production, consumption, technology, and market trends and the direction they may take in the future. It also serves as a starting point for analysis of potential changes in energy policies. But AEO2012 is not limited to the Reference case. It also includes 29 alternative cases, which explore important areas of uncertainty for markets, technologies, and policies in the U.S. energy economy. Many of the implications of the alternative cases are discussed in the “Issues in focus” section of this report.

Key results highlighted in AEO2012 include continued modest growth in demand for energy over the next 25 years and increased domestic crude oil and natural gas production, largely driven by rising production from tight oil and shale resources. As a result, U.S. reliance on imported oil is reduced; domestic production of natural gas exceeds consumption, allowing for net exports; a growing share of U.S. electric power generation is met with natural gas and renewables; and energy-related carbon dioxide emissions stay below their 2005 level from 2010 to 2035, even in the absence of new Federal policies designed to mitigate greenhouse gas (GHG) emissions.

The rate of growth in energy use slows over the projection period, reflecting moderate population growth, an extended economic recovery, and increasing energy efficiency in end-use applications.

 

Overall U.S. energy consumption grows at an average annual rate of 0.3 percent from 2010 through 2035 in the AEO2012 Reference case. The U.S. does not return to the levels of energy demand growth experienced in the 20 years before the 2008- 2009 recession, because of more moderate projected economic growth and population growth, coupled with increasing levels of energy efficiency. For some end uses, current Federal and State energy requirements and incentives play a continuing role in requiring more efficient technologies. Projected energy demand for transportation grows at an annual rate of 0.1 percent from 2010 through 2035 in the Reference case, and electricity demand grows by 0.7 percent per year, primarily as a result of rising energy consumption in the buildings sector. Energy consumption per capita declines by an average of 0.6 percent per year from 2010 to 2035 (Figure 1). The energy intensity of the U.S. economy, measured as primary energy use in British thermal units (Btu) per dollar of gross domestic product (GDP) in 2005 dollars, declines by an average of 2.1 percent per year from 2010 to 2035. New Federal and State policies could lead to further reductions in energy consumption. The potential impact of technology change and the proposed vehicle fuel efficiency standards on energy consumption are discussed in “Issues in focus.”

Domestic crude oil production increases

Domestic crude oil production has increased over the past few years, reversing a decline that began in 1986. U.S. crude oil production increased from 5.0 million barrels per day in 2008 to 5.5 million barrels per day in 2010. Over the next 10 years, continued development of tight oil, in combination with the ongoing development of offshore resources in the Gulf of Mexico, pushes domestic crude oil production higher. Because the technology advances that have provided for recent increases in supply are still in the early stages of development, future U.S. crude oil production could vary significantly, depending on the outcomes of key uncertainties related to well placement and recovery rates. Those uncertainties are highlighted in this Annual Energy Outlook’s “Issues in focus” section, which includes an article examining impacts of uncertainty about current estimates of the crude oil and natural gas resources. The AEO2012 projections considering variations in these variables show total U.S. crude oil production in 2035 ranging from 5.5 million barrels per day to 7.8 million barrels per day, and projections for U.S. tight oil production from eight selected plays in 2035 ranging from 0.7 million barrels per day to 2.8 million barrels per day (Figure 2).

With modest economic growth, increased efficiency, growing domestic production, and continued adoption of nonpetroleum liquids, net imports of petroleum and other liquids make up a smaller share of total U.S. energy consumption

U.S. dependence on imported petroleum and other liquids declines in the AEO2012 Reference case, primarily as a result of rising energy prices; growth in domestic crude oil production to more than 1 million barrels per day above 2010 levels in 2020; an increase of 1.2 million barrels per day crude oil equivalent from 2010 to 2035 in the use of biofuels, much of which is produced domestically; and slower growth of energy consumption in the transportation sector as a result of existing corporate average fuel economy standards. Proposed fuel economy standards covering vehicle model years (MY) 2017 through 2025 that are not included in the Reference case would further cut projected need for liquid imports.

Although U.S. consumption of petroleum and other liquid fuels continues to grow through 2035 in the Reference case, the reliance on imports of petroleum and other liquids as a share of total consumption declines. Total U.S. consumption of petroleum and other liquids, including both fossil fuels and biofuels, rises from 19.2 million barrels per day in 2010 to 19.9 million barrels per day in 2035 in the Reference case. The net import share of domestic consumption, which reached 60 percent in 2005 and 2006 before falling to 49 percent in 2010, continues falling in the Reference case to 36 percent in 2035 (Figure 3). Proposed light-duty vehicle (LDV) fuel economy standards covering vehicle MY 2017 through 2025, which are not included in the Reference case, could further reduce demand for petroleum and other liquids and the need for imports, and increased supplies from U.S. tight oil deposits could also significantly decrease the need for imports, as discussed in more detail in “Issues in focus.”

Natural gas production increases throughout the projection period, allowing the United States to transition from a net importer to a net exporter of natural gas

Much of the growth in natural gas production in the AEO2012 Reference case results from the application of recent technological advances and continued drilling in shale plays with high concentrations of natural gas liquids and crude oil, which have a higher value than dry natural gas in energy equivalent terms. Shale gas production increases in the Reference case from 5.0 trillion cubic feet per year in 2010 (23 percent of total U.S. dry gas production) to 13.6 trillion cubic feet per year in 2035 (49 percent of total U.S. dry gas production). As with tight oil, when looking forward to 2035, there are unresolved uncertainties surrounding the technological advances that have made shale gas production a reality. The potential impact of those uncertainties results in a range of outcomes for U.S. shale gas production from 9.7 to 20.5 trillion cubic feet per year when looking forward to 2035.

As a result of the projected growth in production, U.S. natural gas production exceeds consumption early in the next decade in the Reference case (Figure 4). The outlook reflects increased use of liquefied natural gas in markets outside North America, strong growth in domestic natural gas production, reduced pipeline imports and increased pipeline exports, and relatively low natural gas prices in the United States.

Power generation from renewables and natural gas continues to increase

In the Reference case, the natural gas share of electric power generation increases from 24 percent in 2010 to 28 percent in 2035, while the renewable share grows from 10 percent to 15 percent. In contrast, the share of generation from coal-fired power plants declines. The historical reliance on coal-fired power plants in the U.S. electric power sector has begun to wane in recent years.

Over the next 25 years, the share of electricity generation from coal falls to 38 percent, well below the 48-percent share seen as recently as 2008, due to slow growth in electricity demand, increased competition from natural gas and renewable generation, and the need to comply with new environmental regulations. Although the current trend toward increased use of natural gas and renewables appears fairly robust, there is uncertainty about the factors influencing the fuel mix for electricity generation. AEO2012 includes several cases examining the impacts on coal-fired plant generation and retirements resulting from different paths for electricity demand growth, coal and natural gas prices, and compliance with upcoming environmental rules.

While the Reference case projects 49 gigawatts of coal-fired generation retirements over the 2011 to 2035 period, nearly all of which occurs over the next 10 years, the range for cumulative retirements of coal-fired power plants over the projection period varies considerably across the alternative cases (Figure 5), from a low of 34 gigawatts (11 percent of the coal-fired generator fleet) to a high of 70 gigawatts (22 percent of the fleet). The high-end of the range is based on much lower natural gas prices than those assumed in the Reference case; the lower end of the range is based on stronger economic growth, leading to stronger growth in electricity demand and higher natural gas prices. Other alternative cases, with varying assumptions about coal prices and the length of the period over which environmental compliance costs will be recovered, but no assumption of new policies to limit GHG emissions from existing plants, also yield cumulative retirements within a range of 34 to 70 gigawatts. Retirements of coal-fired capacity exceed the high-end of the range (70 gigawatts) when a significant GHG policy is assumed (for further description of the cases and results, see “Issues in focus”).

Total energy-related emissions of carbon dioxide in the United States stay below their 2005 level through 2035

Energy-related carbon dioxide (CO2) emissions grow slowly in the AEO2012 Reference case, due to a combination of modest economic growth, growing use of renewable technologies and fuels, efficiency improvements, slow growth in electricity demand, and increased use of natural gas, which is less carbon-intensive than other fossil fuels. In the Reference case, which assumes no explicit Federal regulations to limit GHG emissions beyond vehicle GHG standards (although State programs and renewable portfolio standards are included), energy-related CO2 emissions grow by just over 2 percent from 2010 to 2035, to a total of 5,758 million metric tons in 2035 (Figure 6). CO2 emissions in 2020 in the Reference case are more than 9 percent below the 2005 level of 5,996 million metric tons, and they still are below the 2005 level at the end of the projection period. Emissions per capita fall by an average of 1.0 percent per year from 2005 to 2035.

Projections for CO2 emissions are sensitive to these economic and regulatory factors due to the pervasiveness of fossil fuel use in the economy. These linkages result in a range of potential GHG emissions scenarios. In the AEO2012 Low and High Economic Growth cases, projections for total primary energy consumption in 2035 are, respectively, 100.0 quadrillion Btu (6.4 percent below the Reference case) and 114.4 quadrillion Btu (7.0 percent above the Reference case), and projections for energy-related CO2 emissions in 2035 are 5,356 million metric tons (7.0 percent below the Reference case) and 6,117 million metric tons (6.2 percent above the Reference case)”. (Ref: U.S. Energy Information Administration.)
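The percentages quoted from AEO2012 can be cross-checked with simple compound-growth arithmetic. All inputs below come from the quoted text; the calculation only makes the implied intermediate figures explicit.

```python
# Cross-checking figures quoted from AEO2012 (all inputs from the text).

co2_2035 = 5758           # million metric tons, Reference case, 2035
co2_2005 = 5996           # million metric tons, 2005 level
growth_2010_2035 = 0.02   # "just over 2 percent" growth, 2010 to 2035

# The implied 2010 emissions level, backed out of the 2035 figure:
implied_2010 = co2_2035 / (1 + growth_2010_2035)   # about 5,645 Mt
assert co2_2035 < co2_2005   # emissions indeed stay below the 2005 level

# Total energy consumption growing 0.3 percent per year over the
# 25-year projection period compounds to roughly 7.8 percent in total:
total_consumption_growth = (1 + 0.003) ** 25 - 1
```

The arithmetic is consistent with the report’s headline claim: even with two percent cumulative emissions growth from 2010, the 2035 level remains about four percent below the 2005 peak of 5,996 million metric tons.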

The recent debate between the presidential nominees in US election has revealed their respective positions on their policies for an energy independent America. Each of them have articulated how they will increase the oil and gas production to make America energy independent, which will  also incidentally create number of jobs in an ailing economy. Each one of them will be spending a billion dollar first, in driving their messages to the voting public. Once elected, they will explore oil and gas aggressively that will make America energy independent. They will also explore solar and wind energy potentials simultaneously to bridge any shortfall. Their policies   seem to be unconcerned with global warming and its impact due to emission of GHG but, rather aggressive in making America an energy independent by generating an unabated emission of GHG in the future. Does it mean an ‘energy independent America’ will spell a doom to the world including US?

The best option for America to become energy independent will be to focus  on energy efficiency of existing technologies and systems, combining renewable fossil fuel energy mix, base load renewable  power and storage technologies, substituting Gasoline with Hydrogen using renewable energy sources. The future investment should be based on sustainable renewable energy sources than fossil fuel. But current financial and unemployment situation in US will force the new president to increase the conventional and unconventional oil and gas production than renewable energy production, which will be initially expensive with long pay pack periods but will eventually meet the energy need in a sustainable way. The net result of their current policies will be an enhanced emission of GHG and acceleration of global warming. But the energy projections in the U.S. Energy Information Administration’s (EIA’s) Annual Energy Outlook 2012 (AEO2012) projects a reduced GHG emission.

According to Annual Energy Outlook 2012 report:

“The projections in the U.S. Energy Information Administration’s (EIA’s) Annual Energy Outlook 2012 (AEO2012) focus on the factors that shape the U.S. energy system over the long-term. Under the assumption that current laws and regulations remain unchanged throughout the projections, the AEO2012 Reference case provides the basis for examination and discussion of energy production, consumption, technology, and market trends and the direction they may take in the future. It also serves as a starting point for analysis of potential changes in energy policies. But AEO2012 is not limited to the Reference case. It also includes 29 alternative cases, which explore important areas of uncertainty for markets, technologies, and policies in the U.S. energy economy. Many of the implications of the alternative cases are discussed in the “Issues in focus” section of this report.

Key results highlighted in AEO2012 include continued modest growth in demand for energy over the next 25 years and increased domestic crude oil and natural gas production, largely driven by rising production from tight oil and shale resources. As a result, U.S. reliance on imported oil is reduced; domestic production of natural gas exceeds consumption, allowing for net exports; a growing share of U.S. electric power generation is met with natural gas and renewable; and energy-related carbon dioxide emissions stay below their 2005 level from 2010 to 2035, even in the absence of new Federal policies designed to mitigate greenhouse gas (GHG) emissions.

The rate of growth in energy use slows over the projection period, reflecting moderate population growth, an extended economic recovery, and increasing energy efficiency in end-use applications.

 

Overall U.S. energy consumption grows at an average annual rate of 0.3 percent from 2010 through 2035 in the AEO2012 Reference case. The U.S. does not return to the levels of energy demand growth experienced in the 20 years before the 2008- 2009 recession, because of more moderate projected economic growth and population growth, coupled with increasing levels of energy efficiency. For some end uses, current Federal and State energy requirements and incentives play a continuing role in requiring more efficient technologies. Projected energy demand for transportation grows at an annual rate of 0.1 percent from 2010 through 2035 in the Reference case, and electricity demand grows by 0.7 percent per year, primarily as a result of rising energy consumption in the buildings sector. Energy consumption per capita declines by an average of 0.6 percent per year from 2010 to 2035 (Figure 1). The energy intensity of the U.S. economy, measured as primary energy use in British thermal units (Btu) per dollar of gross domestic product (GDP) in 2005 dollars, declines by an average of 2.1 percent per year from 2010 to 2035. New Federal and State policies could lead to further reductions in energy consumption. The potential impact of technology change and the proposed vehicle fuel efficiency standards on energy consumption are discussed in “Issues in focus.”

Domestic crude oil production increases

Domestic crude oil production has increased over the past few years, reversing a decline that began in 1986. U.S. crude oil production increased from 5.0 million barrels per day in 2008 to 5.5 million barrels per day in 2010. Over the next 10 years, continued development of tight oil, in combination with the ongoing development of offshore resources in the Gulf of Mexico, pushes domestic crude oil production higher. Because the technology advances that have provided for recent increases in supply are still in the early stages of development, future U.S. crude oil production could vary significantly, depending on the outcomes of key uncertainties related to well placement and recovery rates. Those uncertainties are highlighted in this Annual Energy Outlook’s “Issues in focus” section, which includes an article examining impacts of uncertainty about current estimates of the crude oil and natural gas resources. The AEO2012 projections considering variations in these variables show total U.S. crude oil production in 2035 ranging from 5.5 million barrels per day to 7.8 million barrels per day, and projections for U.S. tight oil production from eight selected plays in 2035 ranging from 0.7 million barrels per day to 2.8 million barrels per day (Figure 2).

With modest economic growth, increased efficiency, growing domestic production, and continued adoption of nonpetroleum liquids, net imports of petroleum and other liquids make up a smaller share of total U.S. energy consumption

U.S. dependence on imported petroleum and other liquids declines in the AEO2012 Reference case, primarily as a result of rising energy prices; growth in domestic crude oil production to more than 1 million barrels per day above 2010 levels in 2020; an increase of 1.2 million barrels per day crude oil equivalent from 2010 to 2035 in the use of biofuels, much of which is produced domestically; and slower growth of energy consumption in the transportation sector as a result of existing corporate average fuel economy standards. Proposed fuel economy standards covering vehicle model years (MY) 2017 through 2025 that are not included in the Reference case would further cut projected need for liquid imports.

Although U.S. consumption of petroleum and other liquid fuels continues to grow through 2035 in the Reference case, the reliance on imports of petroleum and other liquids as a share of total consumption declines. Total U.S. consumption of petroleum and other liquids, including both fossil fuels and biofuels, rises from 19.2 million barrels per day in 2010 to 19.9 million barrels per day in 2035 in the Reference case. The net import share of domestic consumption, which reached 60 percent in 2005 and 2006 before falling to 49 percent in 2010, continues falling in the Reference case to 36 percent in 2035 (Figure 3). Proposed light-duty vehicle (LDV) fuel economy standards covering vehicle MY 2017 through 2025, which are not included in the Reference case, could further reduce demand for petroleum and other liquids and the need for imports, and increased supplies from U.S. tight oil deposits could also significantly decrease the need for imports, as discussed in more detail in “Issues in focus.”

Natural gas production increases throughout the projection period, allowing the United States to transition from a net importer to a net exporter of natural gas

Much of the growth in natural gas production in the AEO2012 Reference case results from the application of recent technological advances and continued drilling in shale plays with high concentrations of natural gas liquids and crude oil, which have a higher value than dry natural gas in energy equivalent terms. Shale gas production increases in the Reference case from 5.0 trillion cubic feet per year in 2010 (23 percent of total U.S. dry gas production) to 13.6 trillion cubic feet per year in 2035 (49 percent of total U.S. dry gas production). As with tight oil, when looking forward to 2035, there are unresolved uncertainties surrounding the technological advances that have made shale gas production a reality. The potential impact of those uncertainties results in a range of outcomes for U.S. shale gas production from 9.7 to 20.5 trillion cubic feet per year when looking forward to 2035.

As a result of the projected growth in production, U.S. natural gas production exceeds consumption early in the next decade in the Reference case (Figure 4). The outlook reflects increased use of liquefied natural gas in markets outside North America, strong growth in domestic natural gas production, reduced pipeline imports and increased pipeline exports, and relatively low natural gas prices in the United States.

Power generation from renewables and natural gas continues to increase

In the Reference case, the natural gas share of electric power generation increases from 24 percent in 2010 to 28 percent in 2035, while the renewable share grows from 10 percent to 15 percent. In contrast, the share of generation from coal-fired power plants declines. The historical reliance on coal-fired power plants in the U.S. electric power sector has begun to wane in recent years.

Over the next 25 years, the share of electricity generation from coal falls to 38 percent, well below the 48-percent share seen as recently as 2008, due to slow growth in electricity demand, increased competition from natural gas and renewable generation, and the need to comply with new environmental regulations. Although the current trend toward increased use of natural gas and renewables appears fairly robust, there is uncertainty about the factors influencing the fuel mix for electricity generation. AEO2012 includes several cases examining the impacts on coal-fired plant generation and retirements resulting from different paths for electricity demand growth, coal and natural gas prices, and compliance with upcoming environmental rules.

While the Reference case projects 49 gigawatts of coal-fired generation retirements over the 2011 to 2035 period, nearly all of which occurs over the next 10 years, the range for cumulative retirements of coal-fired power plants over the projection period varies considerably across the alternative cases (Figure 5), from a low of 34 gigawatts (11 percent of the coal-fired generator fleet) to a high of 70 gigawatts (22 percent of the fleet). The high end of the range is based on much lower natural gas prices than those assumed in the Reference case; the low end of the range is based on stronger economic growth, leading to stronger growth in electricity demand and higher natural gas prices. Other alternative cases, with varying assumptions about coal prices and the length of the period over which environmental compliance costs will be recovered, but no assumption of new policies to limit GHG emissions from existing plants, also yield cumulative retirements within a range of 34 to 70 gigawatts. Retirements of coal-fired capacity exceed the high end of the range (70 gigawatts) when a significant GHG policy is assumed (for further description of the cases and results, see “Issues in focus”).

Total energy-related emissions of carbon dioxide in the United States stay below their 2005 level through 2035

Energy-related carbon dioxide (CO2) emissions grow slowly in the AEO2012 Reference case, due to a combination of modest economic growth, growing use of renewable technologies and fuels, efficiency improvements, slow growth in electricity demand, and increased use of natural gas, which is less carbon-intensive than other fossil fuels. In the Reference case, which assumes no explicit Federal regulations to limit GHG emissions beyond vehicle GHG standards (although State programs and renewable portfolio standards are included), energy-related CO2 emissions grow by just over 2 percent from 2010 to 2035, to a total of 5,758 million metric tons in 2035 (Figure 6). CO2 emissions in 2020 in the Reference case are more than 9 percent below the 2005 level of 5,996 million metric tons, and they still are below the 2005 level at the end of the projection period. Emissions per capita fall by an average of 1.0 percent per year from 2005 to 2035.

Projections for CO2 emissions are sensitive to economic and regulatory factors due to the pervasiveness of fossil fuel use in the economy. These linkages result in a range of potential GHG emissions scenarios. In the AEO2012 Low and High Economic Growth cases, projections for total primary energy consumption in 2035 are, respectively, 100.0 quadrillion Btu (6.4 percent below the Reference case) and 114.4 quadrillion Btu (7.0 percent above the Reference case), and projections for energy-related CO2 emissions in 2035 are 5,356 million metric tons (7.0 percent below the Reference case) and 6,117 million metric tons (6.2 percent above the Reference case).” (Ref: U.S. Energy Information Administration)
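As a quick sanity check on the quoted figures (the arithmetic below is mine, not EIA's), the Low and High Economic Growth cases should both imply the same Reference-case consumption value once the stated percentage deviations are removed:

```python
# Back out the AEO2012 Reference-case primary energy consumption from the
# two alternative cases quoted above; they should agree.
low_case = 100.0      # quadrillion Btu, stated as 6.4% below Reference
high_case = 114.4     # quadrillion Btu, stated as 7.0% above Reference

ref_from_low = low_case / (1 - 0.064)    # ~106.8 quadrillion Btu
ref_from_high = high_case / (1 + 0.070)  # ~106.9 quadrillion Btu

print(round(ref_from_low, 1), round(ref_from_high, 1))  # 106.8 106.9
```

The two back-calculations agree to within about 0.1 quadrillion Btu, so the quoted percentages are internally consistent.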

 

The recent power outage in India, the largest ever, affected 650 million people and was major news around the world. Power outages are common in many countries, including industrialized ones, during natural disasters such as cyclones, typhoons and flooding. But the outage that happened in India was purely man-made. It was not just an accident but the culmination of a series of failures resulting from many years of negligence, incompetence and wrong policies. Supplying uninterrupted power to a democratic country like India, with 1.2 billion people and 5-8% annual economic growth, run mostly by Governments of various political parties in various states, is by no means an easy task. While one can understand the complexities involved in power generation and distribution, there are certain fundamental rules that can be followed to avoid such a recurrence.

The supply-demand gap for power in India is widening at an accelerating rate: the economy keeps growing, but power generation and distribution capacity do not keep pace. Most of the power infrastructure in India is owned by Governments, which control power generation, distribution, operation and maintenance, financing of power projects, supply of power generation equipment, supply of consumables, supply and transportation of fuel, and revenue collection. The entire system is based on the policy of 'socialistic democracy' adopted after independence from the British; economic liberalization and globalization are relatively new phenomena in India. Since every department of the power infrastructure is controlled by Government, there is a lack of accountability and competition. Many private and foreign companies do not take part in the tendering process because it is a futile exercise. Some smart multinational companies set up manufacturing facilities in India, often in collaboration with Governments, to gain entry into one of the largest markets in the world.

Indigenous coal is the dominant fuel for power generation, though its quality is very low, with ash content as high as 30%. The calorific value of such coal hardly exceeds 3,000 kcal/kg, which means more coal is required than with any other fuel to generate the same amount of power. Such coal not only yields less power but also generates a huge amount of 'fly ash' (the ash content in the coal comes out as fly ash), causing pollution and waste disposal problems. Large piles of fly ash and age-old cooling towers with large pools of stagnant water are common sights at many power plants in India. Such low-cost coal makes little economic sense once the cost of fly ash disposal and the environmental damage are taken into account. Thankfully, research institutions have developed methods to utilize fly ash in the production of Portland cement.
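A rough back-of-the-envelope calculation puts the low calorific value in perspective. The plant efficiency and the calorific value of imported coal below are illustrative assumptions, not figures from the text; only the 3,000 kcal/kg and 30% ash figures come from the paragraph above.

```python
# Estimate the coal and fly-ash burden per kWh for low-grade Indian coal.
KCAL_PER_KWH = 860.0   # 1 kWh = 860 kcal (exact unit conversion)
EFFICIENCY = 0.33      # assumed overall plant efficiency (illustrative)

def coal_per_kwh(calorific_kcal_per_kg):
    """kg of coal burned to deliver 1 kWh of electricity."""
    return KCAL_PER_KWH / (EFFICIENCY * calorific_kcal_per_kg)

low_grade = coal_per_kwh(3000)   # ~0.87 kg/kWh for indigenous coal
imported = coal_per_kwh(6000)    # ~0.43 kg/kWh, assumed imported bituminous
fly_ash = low_grade * 0.30       # 30% ash content -> ~0.26 kg ash per kWh

print(f"low-grade: {low_grade:.2f} kg/kWh, fly ash: {fly_ash:.2f} kg/kWh")
```

Under these assumptions, a plant burning low-grade coal handles roughly twice the tonnage of a plant burning better coal, and produces about a quarter kilogram of fly ash for every kilowatt-hour generated.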
Indigenous low-grade coal remains the fuel of choice for the Indian power industry, though many plants have recently started importing coal from Indonesia and South Africa. Low-grade coal and cooling water drawn from rivers and underground sources are two major sources of pollution in India. Water is allocated to power plants at the cost of agriculture, and there is a shortage of both drinking water in many cities and irrigation water for farming.

Since most of the power infrastructure is Government-owned, there is a tendency to adopt populist policies such as power subsidies, free water and power for farmers, and low power tariffs, making such projects economically unviable in the long run. Most of the State Electricity Boards in India are running at a loss, and their accumulated losses amount to staggering figures. The Central Electricity Authority regulates the power tariff: it calculates the cost of power generation for a specific fuel and fixes the tariff that companies can charge their consumers even before a plant is set up. Most such tariffs are based on experience with indigenous low-grade coal and its transport cost, and are often impractical. These low tariffs are not remunerative for private companies, and many foreign companies decline to invest in large, capital-intensive power projects in India for the same reason.

The best option for the Governments to solve India's energy problems is to open up to foreign investment and allow the latest technologies in power generation and distribution. It should be up to the investing companies to decide the right type of fuel, the right equipment and its source and procurement, the power technology to be adopted, and finally the tariff. India has come a long way since independence, and Governments should focus on governing rather than managing and controlling infrastructure projects. The latest scam widely debated in the Indian media is the 'coal scam'. It is time India moved away from fossil fuels and allowed foreign investment and technologies in renewable energy projects freely, without interference. India needs large investments in building power and water infrastructure, and it is possible to attract foreign investment only by instilling confidence in investing companies. It is not just the size of the market that must be attractive to investors; they also need a conducive, fair and friendly environment for such investment.

Coal is still the dominant fuel for power generation due to its low cost and abundant availability, despite its emission problems and contribution to global warming. Companies around the world are trying to improve the efficiency of coal-fired power plants and cut emissions by various methods. One idea is to prepare ultra-clean coal with very low ash content in the form of a coal-water slurry that can be injected directly into a diesel engine. Direct firing of coal requires micronising it to less than 20-30 microns for diesel engines and less than 10 microns for turbines, and producing a coal-water slurry with at least 50% w/w coal content. According to the literature, the thermal and combustion efficiency of coal-water fuel matches that of diesel at engine speeds up to 1,900 rpm. More research is still required on engine modifications and on engine nozzles, which must withstand the slurry's abrasive nature. If coal can be converted into a fluid like diesel or fuel oil, it can substitute for diesel at reduced cost. However, the carbon problem still needs to be addressed by ongoing research on sequestration.
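A quick energy-density comparison shows why the 50% w/w coal loading matters. All of the values below are my illustrative assumptions (typical handbook figures), not numbers from the text; only the 50% loading comes from the paragraph above.

```python
# Compare the usable heat of a 50% w/w coal-water slurry against diesel.
COAL_HHV = 7000.0      # kcal/kg, assumed for a good bituminous coal
WATER_LATENT = 540.0   # kcal/kg, heat lost vaporizing the slurry's water
DIESEL_LHV = 10200.0   # kcal/kg, typical value for diesel fuel
coal_fraction = 0.50   # minimum coal loading cited above

# Usable heat per kg of slurry: the coal's heat minus the water penalty.
slurry_heat = coal_fraction * COAL_HHV - (1 - coal_fraction) * WATER_LATENT

print(f"slurry: {slurry_heat:.0f} kcal/kg vs diesel: {DIESEL_LHV:.0f} kcal/kg")
print(f"ratio: {slurry_heat / DIESEL_LHV:.2f}")  # ~0.32
```

Under these assumptions the slurry delivers roughly a third of diesel's heat per kilogram, so an engine must inject about three times the fuel mass for the same output, which is one reason nozzle wear and injection hardware dominate the remaining research problems.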

Nanotechnology is an emerging field that offers hope of producing a colloidal coal-water fuel that resembles fuel oil and may be suitable for direct injection into diesel engines with few modifications. Colloidal suspensions of coal in water (CCW) are produced using a proprietary wet-comminution device. These suspensions are a new material with new properties.

“First, the colloidal fraction plus water is a pseudo fluid good for transport, handling and suspension of large particles. Second, the surface area per unit volume of coal available for chemical reaction and burning is greatly increased and finally, CCW may be milled with a third fluid, seeding the mixture with submicron coal. The colloidal nature of the majority of particles provides for very good features such as outstanding long-term stability, in contrast to regular coal water slurries (CWS) which rapidly sediment under storage. Moreover, the very small particles create an increased reactivity to combustion because small particles with large surface area react faster than large particles with the same volume.”
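The surface-area argument in the quote above can be made concrete with the standard geometric result that specific surface area of a sphere scales as 6/d. The coal density and the two particle sizes below are my illustrative assumptions:

```python
# Specific surface area of spherical coal particles: SA/V = 6/d, so halving
# the diameter doubles the surface available for combustion per unit mass.
def specific_surface(diameter_um, density_g_cm3=1.4):
    """Surface area in m^2 per gram, assuming spherical particles.
    Density 1.4 g/cm^3 is an assumed typical bituminous coal value."""
    d_m = diameter_um * 1e-6
    # 6/d gives m^2 per m^3; dividing by density in g/m^3 gives m^2 per g.
    return 6.0 / (d_m * density_g_cm3 * 1e6)

coarse = specific_surface(100)      # ~0.04 m^2/g, typical pulverized coal
colloidal = specific_surface(0.5)   # ~8.6 m^2/g, assumed submicron particle
print(f"{colloidal / coarse:.0f}x more surface per gram")  # 200x
```

Going from a 100 μm pulverized particle down to an assumed 0.5 μm colloidal particle multiplies the reactive surface per gram two-hundredfold, which is consistent with the quote's claim of greatly increased combustion reactivity.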

A company based in Panama has conducted experiments using colloidal coal water fuel and published the following information.

CCW suspension preparation and properties characterization

“The colloidal dispersions are prepared in two stages: first by a bench mill and then by our wet-comminuting device. The bench mill was manufactured by IKA®-Group. After grinding, samples were sieved using mesh size 40 (400 μm), 70 (212 μm) or 140 (106 μm) sieves, and the passing particles were retained and used to prepare coal suspensions with various water contents (30 to 50%), surfactants and other types of additives. These mesh sizes are not foreign to coal-fired power plants. It is noteworthy that a preliminary formulation study is first necessary to decide the type and concentration of additives best suited to improve coal particle wetting and reduce viscosity. The additives were mostly surfactants and viscosity-controlling agents, and every type of coal tested usually required a specific formulation. In general, it was found that nonionic surfactants were good wetting agents, in concentrations varying from 0.1 to 0.6 w/w %. Some of the additives used to reduce viscosity by decreasing particle interactions, before or after the wet-comminuting process, were amines. The suspension formulation prior to the wet-comminuting stage was very simple, since what was basically required was a good wetting agent or a combination of two wetting agents. The idea was to have a uniform mixture with as low a viscosity as possible.

Particle size of coal samples was determined by direct observation in an optical microscope, by sieving using five or six different sieves ranging from 20 to 400 μm, or using a laser diffraction apparatus made by Microtrac Corporation (Nanotrac model) with a measurement range from 8 nm to 6.5 μm. None of these methods alone was sufficient to obtain a complete characterization of the particle size distributions, but a combination of the three allowed a good assessment of what really was in the suspension, before and after the wet-comminuting process.

In our study, the percentage of mass passing the 635 mesh size sieve (< 20 μm) was used as an indicator of wet-comminuting process efficiency (generation of colloidal particles), given that microscopic observations generally showed that particles between 8 and 20 μm were very scarce. The preparation of the colloidal suspension of coal centered on a technology based entirely on fluid mechanics principles. As mentioned above, a preliminary suspension was prepared in a tank with low agitation and the appropriate water and surfactant contents. This suspension is then fed into a device that spins a film of the fluid to the walls of a cylindrical vessel at very high speed under cavitation-free conditions. The resulting flow field induces a “particle trap” region where coal particles are locally concentrated above their nominal value and subjected to very high shear. Particles are then milled to very small sizes by a wet-comminuting mechanism. Friction heating is controlled by a chilled water jacket around the vessel.

A schematic view of the set up is shown in the attached figure.

The energy consumed by the wet-comminuting device was evaluated by monitoring the power (voltage and amperage) during the process. This power has two components: the power required to drive the motor shaft and mechanical seal, and the net power consumed by the fluid during comminuting. It was found that the net power divided by the mass flow rate, in kWh/ton, depended on the coal content and viscosity of the preliminary slurry, exhibiting values of 30 to 80 kWh/ton. The energy consumed by the motor shaft and seal would account for 50 to 80% of the total power consumed. Using the method described above, 100 gallons of CCW were prepared, using an Eastern bituminous coal that had previously been ground to 200 mesh. Several properties of this sample were characterized.”
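The relationship between net and total specific energy in the quote above can be back-calculated. The pairing of range extremes below is my illustrative assumption; the 30-80 kWh/ton and 50-80% figures come from the quoted text.

```python
# Back-calculate total specific energy of the wet-comminuting device from
# net comminuting energy and the shaft/seal share of total power.
def total_specific_energy(net_kwh_per_ton, shaft_seal_fraction):
    """Total kWh/ton, given that net = total * (1 - shaft_seal_fraction)."""
    return net_kwh_per_ton / (1.0 - shaft_seal_fraction)

# Pairing the extremes of the quoted ranges (an assumption on my part):
low = total_specific_energy(30, 0.50)    # 60 kWh/ton
high = total_specific_energy(80, 0.80)   # 400 kWh/ton
print(f"total specific energy: {low:.0f} to {high:.0f} kWh/ton")
```

The implied total energy draw is therefore several times the 30-80 kWh/ton net figure, which is why the shaft and seal losses matter so much to the process economics.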

Colloidal coal-water fuel has certain distinct advantages over conventional coal-water slurry for power generation in conventional diesel engines and turbines. Further research and development is needed before it can be scaled up to large-scale production, but it offers hope of improving the efficiency of existing coal-fired power plants and reducing emissions.

People in the chemical field will understand the concept of 'irreversibility'. Certain chemical reactions can go only in one direction, not in reverse; others can go in either direction, and we can manipulate such reactions to our advantage. This concept has been used successfully in designing many chemical reactions, and many innovative industrial and consumer products have emerged from it. But irreversible reactions also have irreversible consequences, because they can irreversibly damage the environment we live in, and there is no way such damage can be undone. That is why a new branch of science called 'Green Chemistry' is emerging to address some of the damage caused by irreversible chemical reactions. It also helps substitute natural products for many synthetic ones. In the past, many food colors were made from coal tar and known as coal-tar dyes; these dyes are still used in many commercial products. Most such applications were based merely on commercial attractiveness rather than health considerations. Many of these products have deleterious health effects, and a few are carcinogenic. We learned from past mistakes and moved on to new products with fewer health hazards. But the commercial world has grown into a powerful lobby that can decide the fate of a country by influencing political leaders. Today our commercial and financial world has grown so powerful that it can even decide who will be the next president of a country, rather than the people and their policies, and it can manipulate public opinion with powerful advertisements and propaganda by flexing its financial muscle.

Combustion of fossil fuel is one such example of irreversibility: once we combust coal, oil or gas, it is decomposed into oxides of carbon and nitrogen, and also of sulfur and phosphorus, depending upon the source of the fossil fuel and the purification methods used. Once these greenhouse gases are emitted into the atmosphere, we cannot recover them. Coal, once combusted, is no longer coal. This critical fact will shape our world for generations to come. Can we bring back the billions of tons of carbon we have already emitted into the atmosphere since the industrial revolution? Politicians will pretend not to hear these questions, and the financial and industrial lobbies will evade them by highlighting the 'advancements made by the industrial revolution'. People need electricity, and they have neither the time nor the resources to find an alternative on their own. The atmosphere remains open and free for all. People can be skeptical about these issues because it is 'inconvenient' for them to change. But can we sustain such a situation?

Irreversibility is not confined to chemical reactions; it applies to the environment and to sustainability as well, because all are intricately interconnected. Mining industries have scarred the earth, power plants have polluted the air with greenhouse gas emissions, and chemical industries have polluted water, and these damages are irreversible. When minerals become metals, buried coal becomes power, and water becomes toxic effluent, we leave behind an earth that will be uninhabitable for our future generations and for all the living species of the world. Is that sustainable, and can we call it progress and prosperity? Once we lose pristine Nature through our irreversible actions, that is a perfect recipe for disaster, and no science or technology can save the human species from extinction. One need not be a scientist to understand these simple facts of life. Traditional landowners such as the Aborigines of Australia, the Indians of America and the shamans of Indonesia have known them and passed that knowledge on for generations. They too are slowly becoming an extinct species in our scientific world, because of our irreversible actions. Renewability is the key to sustainability, because renewability does not cause irreversible damage to Nature.