Developments in Solar Energy Technology in Late 20th Century
Charles Fritts developed the first solar cell that turned the sun's rays into electricity in 1883. His solar cell, however, had a conversion rate of only around two percent, meaning that of the total energy striking the surface of the cell, only two percent was converted to usable electricity.
The next noteworthy event in the history of photovoltaic cells occurred in 1953, when Calvin Fuller, Gerald Pearson, and Daryl Chapin of Bell Laboratories improved on Fritts's design by using silicon, raising the efficiency to six percent. Efficiency was later raised to 11 percent, giving solar cells practical value. By 1960, Hoffman Electronics had increased the efficiency of the solar cell to 14 percent, and subsequent research improved it to 20 percent. This increase in energy efficiency made solar cells suitable for commercial use, and their first commercial application was supplying power to satellites.
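To give a concrete sense of what these efficiency figures mean, the sketch below converts each percentage into electrical output per square metre of cell. It assumes the standard test irradiance of 1000 W/m²; the one-square-metre area is a hypothetical figure chosen purely for illustration.

```python
# Sketch: how conversion efficiency translates into usable power.
# Assumes the standard test irradiance of 1000 W/m^2; the 1 m^2
# panel area is a hypothetical value for illustration.

IRRADIANCE_W_PER_M2 = 1000.0  # standard test-condition sunlight
PANEL_AREA_M2 = 1.0           # hypothetical panel area

def usable_watts(efficiency: float) -> float:
    """Electrical output of the panel at a given conversion efficiency."""
    return IRRADIANCE_W_PER_M2 * PANEL_AREA_M2 * efficiency

# Efficiencies mentioned in the text: Fritts (~2%), Bell Labs (6%,
# later 11%), Hoffman Electronics (14%), and later cells (20%).
for label, eff in [("1883 selenium cell", 0.02),
                   ("1953 silicon cell", 0.06),
                   ("improved silicon cell", 0.11),
                   ("Hoffman Electronics cell", 0.14),
                   ("later research cell", 0.20)]:
    print(f"{label}: {usable_watts(eff):.0f} W per square metre")
```

Under these assumptions, the jump from two to twenty percent efficiency is the difference between 20 W and 200 W from the same square metre of sunlight.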
A major drawback of the solar cell was its cost of $300 per watt, which put it out of reach of ordinary users. The OPEC energy crisis of the 1970s made it imperative to find alternative forms of energy. The US Department of Energy funded the Federal Photovoltaic Utilization Program, and businesses received incentives to develop solar energy technology. This led to the establishment of 150 businesses manufacturing solar cells, with annual sales of $0.8 billion.
Subsequent research and mass production have led to a dramatic drop in prices, and solar cells now cost about $20 per watt. This cost reduction opened the way for solar cells in railroads, lighthouses, offshore oil rigs, buoys, and remote homes, and even in watches, calculators, and toys.