
Latest news with #MichaelFaraday

Unlocking The Sun: How Solar Panels Really Work

India.com

11-07-2025


Unlocking The Sun: How Solar Panels Really Work

To make these solar panels, Indian companies need PV cells, the small components that convert sunlight into electricity. Since India does not yet make enough of these cells on its own, it is importing more from China to support its growing solar panel production. In simple words, as India builds more solar panels at home, it is also buying more parts from China to keep up with demand.

Two Simple Ways the World Generates Electricity: Moving Machines and Sunlight Power

There are basically two main ways to produce electricity. The first, electromagnetic induction, was discovered by Michael Faraday in 1831. It works by spinning a coil of wire near a magnet, or a magnet near a coil of wire: the movement creates electricity. The idea became commercially useful by 1890 and is still the main way we produce electricity today, in power plants, wind turbines, and hydroelectric dams, where machines spin to generate power.

The second method uses solar photovoltaic (PV) cells, made from materials like silicon, which is found in sand. It was first noticed by Edmond Becquerel in 1839, when he saw that sunlight can directly produce electricity. This is called the photovoltaic effect, and it is what solar panels use. So one method makes electricity by spinning parts inside machines, and the other makes it directly from sunlight using special materials.

Breakthroughs That Made Solar Power Possible

The first useful solar cell was made in 1954 by scientists at Bell Labs: Chapin, Fuller, and Pearson. They used doped silicon, a specially treated form of the element that produces electricity from sunlight more efficiently. This step rested on two earlier discoveries. Albert Einstein explained how light can produce electricity (the photoelectric effect), work for which he won the Nobel Prize. And Jan Czochralski, a scientist from Poland, found a way to grow single-crystal silicon, now the main material used in most solar cells.
These two breakthroughs helped make modern solar panels possible.

Simple Solar Tech Beyond Power Grids

Unlike solar panels (PVs), which feed electricity into the main power grid and are taxed and regulated, other solar technologies such as solar water heating, space heating, and solar cooling usually work on their own, without connecting to the grid. For example, solar cooling uses a method called absorption refrigeration, which can cool indoor spaces to around 19°C even when it is 40°C outside. A solar cooler uses energy from the sun to run a cooling system, much as a fridge or air conditioner does, but without mains electricity. These technologies resemble the solar panels used in remote places with no power supply, where panels are mainly used to charge batteries and give basic lighting.

Focusing Sunlight for Everyday Use

Different parts of the world receive different amounts of sunlight, a measure called solar insolation. While the sun gives us a huge amount of energy, it is spread thinly over large areas, so at any one place the sunlight is not very strong, making it hard to use directly for things like generating electricity or running machines. To solve this problem, special tools collect and focus the sunlight in one spot. These include parabolic troughs, Fresnel lenses, and other solar concentrators. Once the sunlight is focused, it becomes strong enough to be used for heating, cooking, removing salt from seawater (desalination), and producing electricity. In simple terms, these tools turn weak, diffuse sunlight into powerful heat or power.

How Silicon Behaves in Solar Cells

PV (photovoltaic) cells are made from semiconductors like silicon. A semiconductor is a material that conducts electricity better than insulators (like plastic) but not as well as metals (like copper).
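The solar concentrators described earlier trade collection area for intensity. A minimal sketch of that trade-off, where the clear-sky insolation of 1,000 W/m², the trough geometry, and the optical efficiency are all illustrative assumptions, not figures from the article:

```python
# Why concentrators matter: sunlight is diffuse (roughly 1000 W/m^2 at
# noon on a clear day), so a large mirror aperture is focused onto a
# small receiver. All numbers below are illustrative.

def concentration_ratio(aperture_area_m2, receiver_area_m2):
    """Geometric concentration ratio: collector area over receiver area."""
    return aperture_area_m2 / receiver_area_m2

def receiver_flux(insolation_w_m2, aperture_area_m2, receiver_area_m2,
                  optical_efficiency=0.75):
    """Approximate flux on the receiver (W/m^2) after optical losses."""
    collected_w = insolation_w_m2 * aperture_area_m2 * optical_efficiency
    return collected_w / receiver_area_m2

# A hypothetical parabolic trough: 5 m^2 of mirror focused onto a
# 0.05 m^2 absorber tube.
ratio = concentration_ratio(5.0, 0.05)    # 100x geometric concentration
flux = receiver_flux(1000.0, 5.0, 0.05)   # 75,000 W/m^2 at the receiver
```

Even with a quarter of the light lost in the optics, the receiver sees sunlight tens of times stronger than it would arrive naturally, which is what makes solar cooking, desalination and thermal power generation practical.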
Silicon is the most common semiconductor used in solar cells. On its own, silicon does not conduct electricity very well. But when it is heated or exposed to sunlight, or when it is treated with small amounts of other elements (a process called doping), it starts conducting electricity more effectively. This makes silicon very useful for solar panels and electronic devices.

Copper, a good conductor of electricity, becomes less efficient when it gets hot: its resistance increases, which slows the flow of current. Conductors that behave this way, following Ohm's law, are called Ohmic conductors. Silicon works the opposite way. At room temperature it does not conduct electricity well, but as the temperature goes up, silicon starts conducting better. This behaviour makes it a non-Ohmic material, and it is one reason silicon is used in solar cells to convert sunlight into electricity.

How Electrons Flow to Make Electricity

According to quantum theory, which explains how very tiny particles like electrons behave, electricity flows only when electrons have enough energy to move freely. Electrons can only sit at fixed energy levels, just as people can only stand on the steps of a staircase, not in between. To help carry electricity, electrons need to reach a higher energy level called the conduction band, where they can move around freely, like water flowing in a river. Electrons at a lower energy level, called the valence band, stay close to their atoms and cannot move around, so they do not help produce electricity.

To jump from the valence band to the conduction band, an electron needs extra energy. This energy can come from heat, when atoms vibrate more at high temperature, or from light, such as sunlight hitting a solar cell. Once the electron absorbs this energy, it makes the jump and starts flowing, helping generate electric current. If it does not get enough energy, it stays in place and no electricity is produced.
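The staircase picture above can be made concrete with a small sketch. Crystalline silicon's band gap of about 1.12 eV is a standard textbook figure (not from the article), and the example wavelengths are illustrative:

```python
# Checking whether a photon can lift an electron across silicon's band
# gap (~1.12 eV). Constants are standard physical values; the example
# wavelengths are illustrative.

H = 6.626e-34          # Planck's constant, J*s
C = 2.998e8            # speed of light, m/s
EV = 1.602e-19         # joules per electron volt
SI_BAND_GAP_EV = 1.12  # band gap of crystalline silicon

def photon_energy_ev(wavelength_nm):
    """Photon energy E = h*c/lambda, converted to electron volts."""
    return H * C / (wavelength_nm * 1e-9) / EV

def classify(wavelength_nm, band_gap_ev=SI_BAND_GAP_EV):
    """Too little energy: no jump. Excess energy: wasted as heat."""
    e = photon_energy_ev(wavelength_nm)
    if e < band_gap_ev:
        return "not absorbed"  # below the band gap, the electron stays put
    excess = e - band_gap_ev
    return f"absorbed, {excess:.2f} eV lost as heat"

classify(1500)  # infrared photon, ~0.83 eV: below the gap
classify(500)   # green photon, ~2.48 eV: jumps, excess becomes heat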
How Light Helps Electrons Move

Light is a form of energy, and it can act like a wave or like tiny particles called photons, depending on how we observe it. Each photon carries a small amount of energy. When sunlight hits a solar panel, these photons strike the electrons in the valence band (the lower energy level). If a photon has enough energy, it can give that energy to an electron, helping it jump to the conduction band, where the electron can move freely and create electricity. In simple words, light gives electrons the push they need to start flowing and produce power.

When Light Has Just the Right Energy

For an electron to jump from the valence band to the conduction band, the photon (light particle) must have just the right amount of energy. This rule was first explained by Albert Einstein in his theory of the photoelectric effect. The needed energy is called the band gap: it is the energy difference between the two bands, measured in units called electron volts. If the photon's energy is less than the band gap, the electron will not move. If the photon has more energy than needed, the extra energy is wasted as heat, and this can even cause some electrons to be lost. So, for solar panels to work well, the light must match the material's band gap, giving just enough energy to move the electrons without much waste.

Why Some Sunlight Can't Be Used

To produce electricity from sunlight, two conditions must be met: the photon must have the right amount of energy (the energy criterion), and the electron's transition must match certain patterns (the symmetry criterion, which is less important here). Because of these rules, about 50% of the sunlight that reaches Earth cannot be used by regular solar cells made of crystalline silicon. Around 20% of the sunlight has too little energy, so it cannot move the electrons. About 30% has too much energy, and the extra energy turns into heat, which is wasted.
Some other materials, such as gallium arsenide, cadmium telluride, and copper indium selenide, can absorb different parts of sunlight more effectively. But they are scarce, tricky to handle, or harmful to the environment, which makes them difficult to use widely. That is why crystalline silicon remains the most common material in solar panels, even though it cannot use all the sunlight.

How Boron and Phosphorus Make Solar Cells Work

In silicon-based solar cells, small amounts of two elements, phosphorus and boron, are added to pure silicon to change how it behaves. When phosphorus is added, it gives the silicon extra electrons; this side is called the n-type (negative) region. When boron is added, it creates 'holes', meaning there are fewer electrons; this side is called the p-type (positive) region. Where the p-type and n-type silicon meet, a special area forms called a p-n junction. At this junction, an electric field is created, like a built-in push that wants to move electrons in one direction. When sunlight hits the solar cell, it gives energy to the electrons, which jump across the p-n junction and start flowing. This flow of electrons is what we call electricity, just like in a battery. So, by carefully combining boron and phosphorus with silicon, scientists create a material that converts sunlight into electric power in a simple and clean way.

How Electricity Flows and Why Some Energy Is Lost

When we connect a wire or device (called a load) to a solar cell, electrons flow from the negative side (with more electrons) to the positive side (with fewer electrons). This movement of electrons through the load completes the circuit and creates electricity we can use. As long as sunlight is available, the flow can keep going without stopping. But even from the 49.6% of sunlight that solar cells can use, some energy is still lost. Solar panels get hot: they can become 30 to 40°C hotter than the air around them.
This heat is released back into the air and causes about 7% energy loss. Another 10% is lost to a problem called the saturation effect: electrons and holes (positive charges) do not move at the same speed, which weakens the electric push (voltage) in the solar cell over time. So, while solar cells are very useful, not all sunlight is turned into electricity, and some energy is always lost as heat or through the way charges move inside the cell.

Why Solar Cells Can't Use All Sunlight

Even in the best conditions, a single-junction silicon solar cell can only turn about 33.7% of sunlight into electricity. This limit is called the Shockley-Queisser limit, and it is based on how solar energy and materials work at the atomic level. In theory, then, roughly two-thirds of the sunlight's energy is always lost, no matter how good the solar cell is. In real life, solar panels lose even more energy because of practical issues: some parts of the panel get more sunlight than others (uneven lighting), and small differences in how each cell is made during production lead to mismatched voltages across the panel. Together, these factors keep actual efficiency below the theoretical limit.

How Much Sunlight Solar Panels Really Use

In real-world use, solar panels lose further energy in other steps, such as converting the electricity from DC (direct current) to AC (alternating current) so it can be used in homes, and adjusting the panel to operate at its maximum power point (MPP) throughout the day. Because of these extra losses, the actual efficiency of solar panels is lower. In the best lab conditions, silicon solar panels can reach about 25% efficiency; in the real world, even the best commercial panels usually reach only about 20%. To understand how good this is: natural photosynthesis (how plants use sunlight to grow) only captures about 3% to 6% of sunlight.
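The chain of losses described above can be tallied in a few lines. The percentages are the article's (rounded) figures, which is why the running total lands near, but not exactly on, the 33.7% Shockley-Queisser limit:

```python
# Tallying the article's loss figures, in percentage points of the
# incoming sunlight. Each number is as quoted (and rounded) in the text.

losses = {
    "photons with too little energy": 20.0,   # below silicon's band gap
    "excess photon energy lost as heat": 30.0,
    "heat radiated by the warm cell": 7.0,
    "saturation effect (charge mobility imbalance)": 10.0,
}

theoretical_remaining = 100.0 - sum(losses.values())
# ~33 points remain, close to the 33.7% Shockley-Queisser limit.

# Real-world figures quoted above, for comparison:
best_lab_efficiency = 25.0         # silicon, ideal lab conditions
best_commercial_efficiency = 20.0  # typical best commercial panels
photosynthesis_range = (3.0, 6.0)  # plants, for scale
```

The gap between the ~33% theoretical ceiling and the ~20% commercial figure is exactly the budget for the practical losses listed above: uneven lighting, cell mismatch, DC-to-AC conversion and tracking the maximum power point.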
So, even with losses, solar panels are much better at using sunlight than plants.

Making Solar Cells from Shiny Silicon

Natural silicon is very shiny, so it reflects a lot of sunlight. To stop this and help the silicon absorb more light, a special anti-reflection coating is added, usually made of tin oxide or silicon nitride. This coating also gives solar panels their blue colour. Unlike plants, which build their energy systems naturally and at normal temperatures, making solar panels takes a lot of energy. The process starts by purifying silicon to about 99% purity using the Czochralski process: silicon is melted, then slowly cooled and drawn into a single large crystal, called an ingot. These crystals are later cut up to make solar cells. So, while solar power is clean, making solar cells requires careful, energy-intensive steps.

Cutting and Cost-Saving in Solar Cell Making

After purifying silicon into large crystals (ingots), these are sliced into thin wafers to make solar cells. The slicing loses about 20% of the silicon as dust, which makes the process expensive. To reduce this waste and cost, scientists have developed new methods, like ribbon technology, which makes thin silicon strips without cutting big crystals. This saves material and money. Another cheaper option is amorphous silicon, which has no regular crystal structure. Though it has natural defects, these can be mended by adding a small amount of hydrogen, which improves its performance. As Dr. Arunangshu Das of IIT Delhi's Centre for Atmospheric Sciences explains, these new techniques help lower the cost of making solar cells, making solar power more affordable.

New Types of Solar Cells for Better Efficiency

Some solar panels are now made using multijunction amorphous cells, which are designed to capture more parts of sunlight.
These can reach a theoretical efficiency of up to 42%, though in real-life use they usually reach around 24%. According to Dr. Arunangshu Das, these advanced designs are helping to improve how much electricity we can get from sunlight. Today's solar panel technologies are grouped into three generations. First-generation panels use thick silicon wafers, about 200 micrometres thick; these are the traditional and most common type. Second-generation panels use thin silicon layers, only 1 to 10 micrometres thick, which are cheaper to make and use less material. Third-generation technologies include multijunction cells, tandem cells, and quantum dots; these can produce more electricity from each photon and, in some cases, even exceed the normal efficiency limit (the Shockley-Queisser limit). These improvements are helping make solar energy more efficient and powerful, using the same sunlight more wisely.

Why Solar Power Is Getting Cheaper

The cost of solar electricity is falling fast. Back in 2010, it cost around $4 to $5 for every watt of DC power. By 2023, the cost had dropped to about $2.80, and for large utility solar systems it went down even further, to $1.27 per watt. This drop tracks the U.S. government's SunShot goal of bringing full solar systems to $1 per watt. Here is where the money goes in a solar setup: 38% is spent on the solar panels (modules); 8% goes to power electronics, mostly the inverter that changes DC to AC; 22% covers wiring and mounting (how the panels are fixed in place); and the remaining 33% is for the hardware balance, which includes labour, permits, company overheads, and profits. Now that single-crystal solar cells are already close to their maximum power output, the best way to reduce costs further is to save money in that hardware-balance part, for example by making installations easier, faster, and cheaper.

What Affects Solar Panel Performance Over Time

Solar panels slowly lose efficiency over the years, about 0.5% per year.
But most panels still work well for 20 to 25 years. Many people think hot, sunny places like deserts and tropical regions are best for solar panels. While these areas get more sunlight, solar panels actually work better in cooler, clear-weather conditions, because heat reduces their efficiency. This makes it harder for low- and middle-income countries, especially those in tropical or equatorial regions, to fully benefit from solar energy: they may face high temperatures, lack of infrastructure, and less efficient panel performance. Air pollution can also block sunlight and reduce the energy produced by about 2 to 11%, and dust and dirt on the panels (called soiling) can cause another 3 to 4% loss each year. So, while solar power is a clean and powerful energy source, climate, pollution, and maintenance all affect how well it works in different places.

Challenges of Using Solar Panels in Cities

Cleaning solar panels regularly is important, but it can be risky and difficult. When the sun is shining, the panels are electrically active, which means touching them with water or tools can be dangerous. Frequent cleaning also uses a lot of water, which can be a problem in dry areas. In crowded cities, solar panels can trap heat, making the area around them hotter and contributing to the urban heat island effect, in which cities become warmer than nearby rural areas. Other solar technologies, like solar water heaters or solar cookers, can support solar panels, but they cannot fully replace them. Whether solar power alone can fully replace fossil fuels and help achieve a carbon-free future is still being studied and debated by scientists.

Why India Depends on China for PV Cells

India is growing fast in solar power, but it still depends heavily on China for solar photovoltaic (PV) cells. Here's why.

China Makes Them Cheaper

China has a well-established, large-scale manufacturing system for PV cells.
It produces them in huge quantities, which makes the cost much lower than what Indian companies can offer right now.

Lack of Raw Material Processing in India

PV cells need high-purity silicon and other special materials. China controls most of the global supply chains for these materials and has better technology for purifying and processing them.

Advanced Technology and Machinery

Chinese factories use the latest machines and production methods, which make PV cells more efficient and cheaper. India is still building this kind of advanced manufacturing base.

Government Support in China

The Chinese government gives strong support through subsidies, cheap land, and loans, which helps its companies sell at lower prices globally. Indian manufacturers struggle to compete with these advantages.

Slow Growth of Local Industry

Although India has plans like PLI (Production Linked Incentive) schemes to boost local solar manufacturing, it will take time to build full supply chains and reduce import dependence.

In short, India depends on China for PV cells today because China is cheaper, faster, and better equipped. But India is working towards becoming self-reliant in solar manufacturing in the coming years.

Solar energy systems: What is the science behind clean energy generation?

Indian Express

01-07-2025


Solar energy systems: What is the science behind clean energy generation?

— Arunangshu Das

India's imports of solar photovoltaic (PV) cells from China jumped 141 per cent, seemingly driven by the increase in domestic solar PV module manufacturing capacity. But what are solar photovoltaic (PV) cells, and how are they used in solar panels?

There are fundamentally two ways to generate electricity. The first is based on electromagnetic induction, discovered by Michael Faraday in 1831; it became commercially viable by 1890 and remains the backbone of global electricity production today. The second method uses photovoltaic (PV) cells, which are made from semiconductors like elemental silicon. It was first observed by Edmond Becquerel in 1839 as the photovoltaic effect. But it wasn't until 1954 that researchers at Bell Labs (Chapin, Fuller, and Pearson) created the first practical solar cell using doped silicon. This progress was built on two key breakthroughs: Albert Einstein's explanation of the photoelectric effect, for which he won the Nobel Prize, and Polish scientist Jan Czochralski's development of single-crystal silicon, which is the standard material in solar cell manufacturing today.

Unlike PVs, which feed tradeable, regulated and taxed electricity into the grid, technologies like solar heating, water heating and even solar cooling are primarily standalone systems. For instance, solar cooling operates through an absorption refrigeration mechanism that can achieve indoor temperatures as low as 19°C even when outdoor temperatures reach 40°C. These technologies are similar to PV panels installed in remote areas far from grids, which are mainly used for battery charging and basic lighting.

Worldwide solar insolation (the amount of solar radiation received) varies tremendously across regions. Although solar energy is abundant, it is diffuse, spread thinly across large areas.
That is why different technologies (parabolic troughs, Fresnel lenses, and other concentrators) are used to focus sunlight for various purposes, including heating, cooking, desalination, and even generating electricity.

PV cells are made from semiconductors like elemental silicon. Unlike metallic conductors such as copper, known as Ohmic conductors, whose resistance to the flow of current increases with temperature, silicon behaves differently. It is a poor conductor at room temperature, but its conductivity increases as the temperature rises, making it a non-Ohmic material.

According to quantum theory, electrical conduction requires the presence of electrons in a higher-energy quantum state called the conduction band, where they flow much like water in the seas. In contrast, electrons in the lower-energy valence band are localised and cannot contribute to electric current. For the transition from the valence band to the conduction band, energy must be supplied. It can come from the thermal motion of atoms, manifested in bulk as the temperature of the system, or from some other input such as light.

Light, as a form of energy, is observed either as a wave or as particles behaving like discrete packets of energy called photons, depending on the nature of the experiment. When photons strike electrons in the valence band, they can transfer their energy to those electrons, allowing them to jump to the conduction band. But this transition happens only if certain conditions, first explained by Einstein in his photoelectric effect theory, are met: the energy of the photon must equal the difference in energy between the two bands, called the band gap, which is measured in electron volts. Photons with higher energy transfer the excess as heat, leading to loss of electrons. Apart from this energy criterion, there is also a symmetry criterion, which is less relevant in this case.
These two conditions immediately render approximately 50.4 per cent of the total solar spectrum unusable for electricity generation by PV cells made of crystalline silicon: 20.2 per cent of photons have lower energy than the band gap, while 30.2 per cent have higher energy that is wasted as heat. Other materials, such as gallium arsenide, cadmium telluride, and copper indium selenide, can capture different portions of the solar spectrum. However, issues like scarce natural abundance, handling difficulty, and environmental toxicity limit their widespread use.

In silicon-based PV cells, small amounts of phosphorus and boron are deliberately added to create regions with an excess of electrons and others with a deficit, known as 'holes'. This difference in charge between the two regions creates what is called a p-n junction. When sunlight strikes the material, the junction generates a driving force (an electric potential), creating a cell much like a battery. If an external load is connected, electrons flow from the electron-rich negative region through the load to the positive region, completing the circuit. This can go on indefinitely as long as light is available.

Even from the usable 49.6 per cent of the solar spectrum, additional losses occur. PV cells can heat up to 30-40°C above ambient temperature, and the heat they radiate accounts for about 7 per cent energy loss. Another 10 per cent loss comes from the imbalance in charge mobility, known as the saturation effect, which weakens the electric potential over time. These losses lead to a final theoretical efficiency of 33.7 per cent for single-junction silicon solar cells, called the Shockley-Queisser limit. Further inefficiencies stem from real-world situations, such as differential illumination of cells and variations during production that leave cells with different open-circuit potentials.
Considering real-world losses from further downstream processes, such as DC-to-AC conversion and maintaining the maximum power point (MPP), the average efficiency of crystalline PV cells stands at 25 per cent under the best laboratory conditions, while the best commercial cells achieve 20 per cent. For comparison, photosynthesis captures only 3-6 per cent of the total available solar radiation energy.

Since natural silicon is quite reflective, an anti-reflection transparent coat of tin oxide or silicon nitride is applied, which gives the cells and modules their blue colour. Compared to the completely renewable, ambient-temperature assembly of proteins in biological photosystems, PV systems require significant energy input. The production of PV cells begins with the purification of elemental silicon to 99 per cent by the Czochralski process, which involves melting silicon and slowly crystallising it into single-crystal ingots. Slicing the purified ingots into wafers causes about 20 per cent material loss as silicon dust. The high cost of single-crystal technology has thus driven the development of alternative methods, like ribbon technology, that largely bypass the sawing of ingots. Amorphous silicon cells are also cheaper, and their inherent crystal defects are mended by alloying with hydrogen. Multijunction amorphous cells have been designed to capture a larger part of the solar spectrum and achieve a theoretical efficiency of up to 42 per cent, though under practical conditions 24 per cent has been realised.

PV technologies are currently categorised into three generations: first-generation thick crystalline wafers (~200 µm), second-generation thin wafers (1-10 µm), and third-generation multijunction, tandem and quantum-dot cells, which can generate more charge separation per photon; some of them can therefore exceed the Shockley-Queisser efficiency limit.
The price of PV electricity, measured in dollars per watt of direct current (DC) peak power, fell from $4-5 in 2010 to $2.80 in 2023 (and $1.27 for utility-scale systems), approaching the US Department of Energy's SunShot programme target of $1 per watt for installed systems. Broken down by category, system costs are: 38 per cent for the modules, 8 per cent for power electronics (mostly the inverter), 22 per cent for wiring and mounting, and 33 per cent for the hardware balance of system, which includes labour, permits, overhead costs and profit. With single-crystal PV cells approaching their theoretical maximum output, the greatest scope for cost reduction lies in these balance-of-system categories.

Annual efficiency loss is around 0.5 per cent, with most modules remaining effective for 20-25 years of operation. Contrary to common belief, while tropical and desert regions receive more sunlight, PV modules operate more efficiently in cold, clear conditions due to lower thermal losses. Thus the dominance of PV as a renewable energy source in low- and middle-income countries, many of which are in tropical or equatorial regions, remains challenged by climatic and infrastructural constraints. In addition, rising air pollution can reduce solar insolation by roughly 2-11 per cent, while soiling contributes a further 3-4 per cent annual loss in output. Regular cleaning of panels is a hazardous task, as the cells are electrically active under sunlight, and it can also be water-intensive. In heavily populated areas, PVs can trap large amounts of heat, triggering the urban heat island phenomenon. While other solar technologies can partially complement PVs, their role in completely carbon-neutral energy generation is an ongoing scientific debate.

What are solar photovoltaic (PV) cells and how are they used in solar panels?

Why is regular cleaning of photovoltaic (PV) panels considered both hazardous and resource-intensive?
What is the urban heat island effect, and how might photovoltaic (PV) installations contribute to it in densely populated areas?

In what ways do infrastructural and climatic constraints limit the effectiveness of photovoltaic (PV) systems in tropical and low-income regions?

(Dr. Arunangshu Das is the Principal Project Scientist at the Centre for Atmospheric Sciences, Indian Institute of Technology, Delhi.)
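The cost and lifetime figures quoted in this article can be chained into a rough back-of-envelope model. A sketch only: the $4.5/W midpoint for the 2010 price and the 1,000 kWh/year example system are assumptions, not article figures.

```python
# Back-of-envelope model of the article's cost and lifetime figures.
# Assumed: the 2010 price midpoint ($4.5/W) and the example system output.

def annual_decline(start_price, end_price, years):
    """Compound annual rate at which the $/W price fell."""
    return 1.0 - (end_price / start_price) ** (1.0 / years)

def cost_per_watt(share_percent, system_price=2.80):
    """Dollar cost of one category, from its share of the 2023 $/W price."""
    return system_price * share_percent / 100.0

def output_after_years(initial_kwh, years, degradation=0.005):
    """Yearly energy output after ~0.5%/year degradation compounds."""
    return initial_kwh * (1.0 - degradation) ** years

annual_decline(4.5, 2.80, 13)    # overall price: ~3.6% cheaper per year
annual_decline(4.5, 1.27, 13)    # utility scale: ~9.3% cheaper per year
cost_per_watt(38)                # modules: ~$1.06 of every installed watt
output_after_years(1000.0, 25)   # hypothetical 1,000 kWh/yr system: ~882 kWh
```

The asymmetry in the two decline rates is the article's point in miniature: utility-scale systems, where balance-of-system costs are spread over many megawatts, have fallen much faster than the overall average.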

Moving suspended gold particles to Royal Society is a delicate job

Times

30-06-2025


Moving suspended gold particles to Royal Society is a delicate job

It is not far from the Royal Institution to the Royal Society. Less than a mile. But this week, as Charlotte New travelled between the two, holding some bottles of pinkish liquid containing a little sprinkling of gold, every foot was planned. The taxi driver knew the preferred route, knew not to brake suddenly, and knew that New, head of heritage at the Royal Institution, would be very upset by jolts. 'We do not want jiggling,' she said before the transportation. 'There is to be no jiggling.' She would not tell The Times when she was leaving, though; for insurance purposes it had to remain secret.

Only once in the 170 years since Michael Faraday accidentally made an odd suspension of gold particles have these, his colloids, left the Royal Institution, the scientific organisation famed for its Christmas lectures. Then, it was because of the Blitz. The country's most precious treasures were moved to the slate mines of Wales, and these odd bottles, which glow ruby in the light, spent several years in the dark alongside old masters, the Magna Carta and first folios of Shakespeare.

This time, they are being moved for an exhibition at the Royal Society, Britain's national scientific academy. There, they will appear alongside other colloids as researchers investigate how a phenomenon discovered by mistake by Faraday, long considered an optical curiosity, might have practical value.

The colloids were created during attempts by the Victorian polymath, most famous for his work on electricity, to make ever-thinner sheets of gold. He was investigating, among other things, the optical properties of the metal. To make gold leaf as thin as possible, he washed it in acid. But then he noticed that this run-off was itself interesting: when he shone a light through it, the light scattered with a ruby glow. He realised it was hitting tiny particles of gold suspended in the liquid.
The particles are small enough that, over almost two centuries, in some of the bottles they have stayed suspended, held aloft by the movement of water molecules. Others have settled. No one is sure what shaking will do. 'When we clean them, we use paint brushes,' New said. 'We dust around them, and not very often.'

The Royal Institution is taking the risk of moving them, along with Faraday's notebook, as part of the 200th anniversary celebrations of its Christmas lectures and of Faraday's discovery of benzene. At the Royal Society's summer exhibition, open from July 1 to 6, they will not merely be there as scientific heritage. They will be exhibited alongside some modern colloids with potentially important applications.

Dr Aliaksandra Rakovich, of King's College London, said: 'Faraday cared about colour. He was curious about that, and that's what he investigated.' But today his work is seen as a landmark in nanotechnology, among the first investigations into the properties of very small particles. Dr Simon Freakley, of the University of Bath, said: 'Gold is perceived as being inert. When you make these very, very small particles of gold, they actually become incredibly reactive.' In particular, if you shine a laser at them, they get extremely hot. This can be a low-energy way to facilitate difficult reactions. Freakley and Rakovich are looking at ways to harness the reactive properties of colloids in industrial processes, and for tasks such as removing air pollution.

Faraday's colloids will, hopefully, not be reacting. We do know, now, that they made the journey unharmed: by shining a laser, the Royal Institution confirmed the properties were unchanged during the second journey of their lives. Now New just has to get them back.

Ask Fuzzy: How does an induction cooker work?

The Advertiser

14-05-2025

  • Science
  • The Advertiser


You can usually tell whether a device is inefficient by the amount of wasted heat. An obvious example is the internal combustion engine, which burns more than half its fuel doing nothing more than getting hot. The best that most cars can manage is about 20-40 per cent efficiency. That means 60-80 per cent is wasted. Great if you want to cook sausages, but it doesn't get you anywhere. Televisions, computers and power chargers all get warm to varying degrees and, in each case, that means wasted energy.

Then there are kitchen stoves, such as gas and those with old-style heater elements. They certainly get hot but, as with cars, much of that heat goes into the stove itself and the air around it without doing any useful work. A good indicator that induction cooktops are highly efficient (about 84 per cent) is that the "hot plates" are often cool enough to touch (carefully) shortly after they finish cooking.

The history of electromagnetic induction goes back to 1820, when Danish physicist Hans Christian Oersted discovered that an electric current generates a magnetic field. Then, in 1821, English physicist Michael Faraday made a primitive electric motor by placing a magnet near a piece of wire. When he fed an electric current into the wire, it generated a magnetic field, pushing itself away from the permanent magnet. In 1831, he flipped the idea around by rotating a coil of wire through a magnetic field to induce an electric current, thus inventing the electricity generator.

Now we see induction used in electric toothbrush cradles and wireless phone chargers. As the name implies, induction stoves work on the same principle. An alternating current running through the tightly wound metal coil inside a cooking zone induces a high-frequency alternating magnetic field. That produces whirling electrical currents (eddy currents) inside the pan. Those currents, together with the repeated magnetising and demagnetising of the pan's base (magnetic hysteresis), turn it into a heater.
The beauty of this is that it heats the pan directly, instead of an element and the air around it. If there's no pan on the cooking zone, the cooking zone stays cold. Although your home power supply alternates at 50Hz, an induction cooktop operates at 20-40kHz, which is 400 to 800 times faster. That offers a couple of advantages. One is that, being above the range of hearing, it stops any annoying buzzing. The other is that it prevents your pots from dancing around on the cooktop.

The Fuzzy Logic Science Show is at 11am Sundays on 2xx 98.3FM. Send your questions to AskFuzzy@ Podcast:
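Faraday's law also explains why cooktops run at such high frequency: for a sinusoidal magnetic field through a coil (or a pan base), the induced EMF, and hence the heating drive, scales linearly with frequency. A minimal sketch in Python, with made-up coil and field numbers purely for illustration:

```python
import math

def peak_emf(frequency_hz, turns, peak_field_t, area_m2):
    """Peak EMF induced in a coil by a sinusoidal magnetic field.

    Faraday's law: EMF = -N * dPhi/dt. For B(t) = B0 * sin(2*pi*f*t)
    through a fixed area A, the peak EMF is N * 2*pi*f * B0 * A.
    """
    return turns * 2 * math.pi * frequency_hz * peak_field_t * area_m2

# Illustrative (assumed) numbers: the same field drives far more EMF
# at a cooktop frequency (24 kHz) than at mains frequency (50 Hz).
mains = peak_emf(50, turns=1, peak_field_t=0.01, area_m2=0.03)
cooktop = peak_emf(24_000, turns=1, peak_field_t=0.01, area_m2=0.03)
print(f"{cooktop / mains:.0f}x more induced EMF")  # prints "480x more induced EMF"
```

The 480x ratio is simply 24,000 Hz divided by 50 Hz; every other factor in Faraday's law cancels, which is the point: frequency is the knob an induction cooktop turns up.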

Power distribution privatisation: The why comes first, the how comes later

Time of India

06-05-2025

  • Business
  • Time of India


Michael Faraday discovered the secret of converting kinetic energy to electrical energy in 1831. By 1883, Surat became the first place in India to have electricity, followed by Kolkata, Darjeeling, Mysore, Hyderabad, Delhi, Mumbai and Jamshedpur, in that order (Ahluwalia S., 'From Lattus to Lasers'). The rest, as they say, is history. These new 'businesses' were started by private sector companies, a trend that continued till independence. After 1947, the Government assumed the primary role in shaping the electricity sector as a public service and infrastructure, comprising generation, transmission and distribution. As electricity is a concurrent subject in India's constitution, distribution has been controlled largely by state governments, and by the Central government in the case of Union Territories (UTs).

Things came full circle when some state governments started revisiting the pros and cons of private vs government ownership of Power Distribution Companies (DISCOMs). In 1993, the UP government privatised electricity distribution in Greater Noida. Like Jamshedpur, Greater Noida too was more like a greenfield venture, where the city itself was built from scratch, and so was the power distribution infrastructure. By contrast, privatisation of existing state-owned utilities with the prime objective of turnaround started later, in 1999, when the Odisha government attempted privatisation of power distribution. However, it yielded mixed results, was aborted midway and then reattempted successfully in 2020, after incorporating the learnings from the Delhi privatisation of 2002. The Delhi government privatised power distribution with the objective of turning around a lackadaisical Delhi Vidyut Board by using the private sector partner as the agent of distribution reforms. Even today, the private sector's footprint forms a minuscule portion of India's overall power distribution landscape, and a vast majority of India's states continue to be serviced by government-owned utilities (70+).
Why is that so, even when the Delhi privatisation has strongly demonstrated how, across three DISCOMs operated by two different private players (Reliance Infra and Tata), the Aggregate Technical & Commercial (AT&C) Loss has been consistently and uniformly brought down from 45-60 per cent to ~6 per cent in a couple of decades?

What went right in the case of Delhi? First and foremost, Delhi's AT&C loss reduced sharply because that was precisely the bid parameter defined by the Delhi government: AT&C loss reduction in the first five years. No wonder, then, that the winning bidders delivered successfully on the criterion they had bid for. While both Delhi (2002) and Odisha (1999) had put up for divestment exactly 51 per cent equity in their DISCOMs, the Odisha government had made the price (valuation) for that stake its bid parameter. Later, in the second privatisation initiative in 2020, the Odisha government defined the bid parameter as a combination of the AT&C loss reduction target, the price for 51 per cent equity and certain other criteria. This difference in approach between the Delhi privatisation, the 1999 Odisha privatisation and the 2020 Odisha privatisation holds the key to what makes privatisation successful or not: the 'why' comes first, the 'how' follows.

Since in the case of Delhi the purpose (the why) behind privatisation was distribution reforms (as reflected by AT&C loss reduction), all aspects of the privatisation process reflected that theme. In the case of over-achievement of the loss reduction target by a privatised DISCOM, monetary gains from that over-achievement were shared with and retained by the DISCOM (in part or in full, depending on the range of performance) as an additional incentive. Similarly, there was an additional incentive for collecting past arrears. All of that added up. Yet this operational turnaround (AT&C loss reduction) has not automatically resulted in financial turnaround.
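AT&C loss, the metric at the centre of the Delhi bids, combines billing and collection efficiency; a commonly used formulation in Indian power-sector reporting is AT&C loss = 1 − (billing efficiency × collection efficiency). A small illustrative sketch (the DISCOM figures below are invented, not Delhi's actual data):

```python
def atc_loss_pct(energy_input_mu, energy_billed_mu,
                 amount_billed, amount_collected):
    """Aggregate Technical & Commercial (AT&C) loss, per the standard
    formulation: 1 - (billing efficiency * collection efficiency),
    expressed as a percentage.
    """
    billing_eff = energy_billed_mu / energy_input_mu      # energy billed / energy input
    collection_eff = amount_collected / amount_billed     # revenue collected / revenue billed
    return 100 * (1 - billing_eff * collection_eff)

# Invented figures: a DISCOM that bills 60% of input energy and
# collects 90% of what it bills loses ~46% on an AT&C basis.
print(round(atc_loss_pct(100, 60, 600, 540), 1))  # prints 46.0
```

With 60 per cent of input energy billed and 90 per cent of billed revenue collected, the loss works out to 46 per cent, in the same ballpark as the pre-privatisation Delhi figures cited above, which is why the metric captures both technical (energy) and commercial (collection) leakage in one number.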
The Delhi Electricity Regulatory Commission (DERC) has been reluctant to raise tariffs, and has disallowed significant expenditure when passing tariff orders. Between FY 2009-10 and FY 2021-22, DERC, on average, disallowed 15-18 per cent of the Aggregate Revenue Requirement (ARR) projected by the DISCOMs (Chitnis, Nair and Singh, CSEP). This leads to a vicious cycle of litigation: both private players each have about a dozen tariff order disputes still sub judice.

Now let us look at privatisation efforts by the Central government: the UTs of Daman, Diu, Dadra and Nagar Haveli (DD-DNH) in 2022, and Chandigarh in 2025. Chandigarh is the first and only case where 100 per cent of the DISCOM's shares have been divested up front to a private player (it was 51 per cent equity in all other cases, with a provision to extend from 51 per cent to 74 per cent in the case of DD-DNH based on milestone achievements that are yet to unfold). The reserve price was based on net fixed assets, and the winning bidder (CESC), outbidding six others, ended up paying a whopping five times that amount. And then comes the twist in the tale. The bid was floated in the market on 10th November 2020, and the private player could take over the government-owned DISCOM only by 1st February 2025, after a delay of over four years, all because of litigation by staff of the erstwhile utility protesting against privatisation.

The question staring at states like UP, which have recently initiated the DISCOM privatisation process, is: why do they want to privatise (no, it cannot be 'all of the above')? The why should then determine how to go about it: the bidding process, the labour unions, the works. The how (e.g., bid parameter, bidder eligibility, opening balance sheet, post-takeover treatment of various financial and operational aspects, etc.) will follow once there is absolute clarity on the why.

(Shalabh Srivastava is Visiting Senior Fellow at the Centre for Social and Economic Progress (CSEP). Views are personal.)
