Quantum twist: In a first, magnet-free spin transport achieved in graphene


Yahoo | 07-07-2025
A team of researchers has, for the first time, generated and detected spin currents in graphene without using external magnetic fields, addressing a long-standing challenge in physics. The development could play an important role in the evolution of next-generation quantum devices.
Spin currents are a key ingredient in spintronics, a technology that uses the spin of electrons, rather than their electric charge, to carry information. Spintronics promises devices that are faster and far more energy-efficient than today's electronics, but making it work in practical materials like graphene has been difficult.
"In particular, the detection of quantum spin currents in graphene has always required large magnetic fields that are practically impossible to integrate on-chip," said Talieh Ghiasi, lead researcher and a postdoctoral fellow at Delft University of Technology (TU Delft) in the Netherlands.
However, in their latest study, Ghiasi and his team have now shown that by placing graphene on a carefully chosen magnetic material, they can trigger and control quantum spin currents without magnets. This discovery could pave the way for ultrathin, spin-based circuits and help bridge the gap between electronics and future quantum technologies.
To understand what makes this research special, it helps to know that the team was trying to realize the quantum spin Hall (QSH) effect. This is a special state in which electrons move only along the edges of a material, with their spins all pointing in the same direction.
The motion is smooth and doesn't get scattered by tiny imperfections, a dream scenario for making efficient, low-power circuits. However, until now, making graphene show this effect required applying strong magnetic fields.
Instead of forcing graphene to behave differently with magnets, the researchers took a different approach. They placed a sheet of graphene on top of a layered magnetic material called chromium thiophosphate (CrPS₄). This material naturally influences nearby electrons through what scientists call magnetic proximity effects.
When graphene is stacked on CrPS₄, its electrons start to feel two key interactions: spin-orbit coupling (which ties an electron's motion to its spin) and exchange interaction (which favors certain spin directions). These interactions open an energy gap in graphene's electronic structure and give rise to edge-conducting states, a hallmark of the QSH effect.
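In textbook terms, this gap-opening mechanism can be written as two mass-like terms added to graphene's low-energy Dirac Hamiltonian. The sketch below is schematic only, using generic coupling strengths rather than the values or the precise model of the study:

```latex
% Schematic low-energy Hamiltonian for graphene with proximity-induced
% spin-orbit coupling (lambda_so) and exchange interaction (lambda_ex).
% Coupling strengths are illustrative, not values from the paper.
H = \hbar v_F \left( \xi k_x \sigma_x + k_y \sigma_y \right)
    + \lambda_{\mathrm{so}}\, \xi\, \sigma_z s_z
    + \lambda_{\mathrm{ex}}\, s_z
```

Here the σ matrices act on graphene's sublattice, s acts on spin, and ξ = ±1 labels the valley. The λ_so term opens the topological gap associated with spin-polarized edge states, while the λ_ex term splits the two spin directions in energy.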
The researchers confirmed that spin currents were flowing along the graphene's edges and stayed stable across distances of tens of micrometers, even in the presence of small defects.
They also noticed something unexpected: an anomalous Hall (AH) effect, in which electrons are deflected sideways even without an external magnetic field. Unlike the QSH effect, which they observed only at low (cryogenic) temperatures, this anomalous behavior persisted up to room temperature.
"The detection of the QSH states at zero external magnetic field, together with the AH signal that persists up to room temperature, opens the route for practical applications of magnetic graphene in quantum spintronic circuitries," the study authors note.
The stable, topologically protected spin currents could be used to transmit quantum information over longer distances, possibly connecting qubits in future quantum computers. They also open the door to ultrathin memory and logic circuits that run cooler and more efficiently than today's silicon-based devices.
"These topologically-protected spin currents are robust against disorders and defects, making them reliable even in imperfect conditions," Ghiasi said.
However, some limitations remain. Unlike the AH effect, the QSH effect, the more promising of the two for quantum circuits, was observed only at very low temperatures, which limits its immediate use in consumer electronics.
The researchers now aim to investigate ways to make the effect more robust at higher temperatures and explore other material combinations where this approach could work.
The study has been published in the journal Nature Communications.

Related Articles

When Will Ebusco Holding N.V. (AMS:EBUS) Breakeven?

Yahoo | an hour ago

We feel now is a good time to analyse Ebusco Holding N.V.'s business, as the company appears to be on the cusp of a considerable accomplishment. Ebusco Holding N.V., together with its subsidiaries, develops, manufactures, and distributes zero-emission buses and charging systems. The €28m market-cap company reported a loss of €201m for its most recent financial year, ended 31 December 2024. Many investors are wondering how quickly Ebusco Holding will turn a profit, with the big question being 'when will the company break even?' We've put together a brief outline of industry analyst expectations for the company, its projected year of breakeven, and its implied growth rate.

Ebusco Holding is bordering on breakeven, according to some Dutch Machinery analysts. They expect the company to post a final loss in 2025 before turning a profit of €12m in 2026. The company is therefore projected to break even just over a year from now. To meet this breakeven date, we calculated the rate at which the company must grow year-on-year: an average annual growth rate of 126% is expected, which is rather optimistic. Should the business grow at a slower rate, it will become profitable later than expected. We're not going to go through company-specific developments for Ebusco Holding given that this is a high-level summary, but keep in mind that, by and large, a high growth rate is not out of the ordinary, particularly when a company is in a period of investment.

Before we wrap up, there's one issue worth mentioning: Ebusco Holding currently carries a relatively high level of debt. Typically, debt shouldn't exceed 40% of equity; in Ebusco Holding's case, debt stands at 71% of equity. A higher level of debt requires more stringent capital management, which increases the risk around investing in the loss-making company.

Next steps: There are too many aspects of Ebusco Holding to cover in one brief article, but the key fundamentals for the company can all be found in one place, on Ebusco Holding's company page on Simply Wall St. We've also compiled a list of key aspects you should research further. Historical track record: what has Ebusco Holding's performance been like in the past? Management team: an experienced management team at the helm increases our confidence in the business; take a look at who sits on Ebusco Holding's board and the CEO's background. Other high-performing stocks: are there other stocks with better prospects and proven track records?

Have feedback on this article? Concerned about the content? Get in touch with us directly. Alternatively, email editorial-team (at) Simply Wall St.

This article by Simply Wall St is general in nature. We provide commentary based on historical data and analyst forecasts only, using an unbiased methodology, and our articles are not intended to be financial advice. This article does not constitute a recommendation to buy or sell any stock, and does not take account of your objectives or your financial situation. We aim to bring you long-term focused analysis driven by fundamental data. Note that our analysis may not factor in the latest price-sensitive company announcements or qualitative material. Simply Wall St has no position in any stocks mentioned.

Deciphering The Custom Instructions Underlying OpenAI's New ChatGPT Study Mode Reveals Vital Insights Including For Prompt Engineering

Forbes | 3 hours ago

Learning about generative AI, prompting, and other aspects via exploring custom instructions.

In today's column, I examine the custom instructions that seemingly underpin the newly released OpenAI ChatGPT Study Mode capability. Fascinating insights arise. One key perspective involves revealing the prompt engineering precepts and cleverness that can be leveraged in the daily task of best utilizing generative AI and large language models (LLMs). Another useful aspect entails potentially recasting or reusing the same form of custom instruction elaborations to devise other capabilities beyond this education-domain instance. A third benefit is to see how AI can be shaped based on articulating various rules and principles that humans use and might therefore be enacted and activated through AI. Let's talk about it.

This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here). Readers might recall that I previously posted an in-depth depiction of over eighty prompt engineering techniques and methods (see the link here). Top-notch prompt engineers realize that learning a wide array of researched and proven prompting techniques is the best way to get the most out of generative AI.

ChatGPT Study Mode Announced

Banner headlines have hailed the release of OpenAI's new ChatGPT Study Mode. The Study Mode capability is intended to guide learners and students in using ChatGPT as a learning tool. Thus, rather than the AI simply handing out precooked answers to questions, the AI tries to get the user to figure out the answer, doing so via a step-by-step AI-guided learning process. The ChatGPT Study Mode was put together by crafting custom instructions for ChatGPT. It isn't an overhaul or new feature creation per se.
It is a written specification or detailed set of instructions that was crafted by selected educational specialists at the behest of OpenAI, telling the AI how it is to behave in an educational context. Here is the official OpenAI announcement about ChatGPT Study Mode, as articulated in their blog posting 'Introducing Study Mode' on July 29, 2025, which identified these salient points (excerpts):

'Today we're introducing study mode in ChatGPT — a learning experience that helps you work through problems step by step instead of just getting an answer.'

'When students engage with study mode, they're met with guiding questions that calibrate responses to their objective and skill level to help them build deeper understanding.'

'Study mode is designed to be engaging and interactive, and to help students learn something — not just finish something.'

'Under the hood, study mode is powered by custom system instructions we've written in collaboration with teachers, scientists, and pedagogy experts to reflect a core set of behaviors that support deeper learning including: encouraging active participation, managing cognitive load, proactively developing metacognition and self-reflection, fostering curiosity, and providing actionable and supportive feedback.'

'These behaviors are based on longstanding research in learning science and shape how study mode responds to students.'

As far as can be discerned from the outside, this capability didn't involve revising the underpinnings of the AI, nor did it seem to require bolting on additional functionality. It seems that the mainstay was done using custom instructions (note, if they did make any special core upgrades, they seem to have remained quiet on the matter since it isn't touted in their announcements).

Custom Instructions Are Powerful

Few users of AI seem to know about custom instructions, and even fewer have done anything substantive with them.
I've previously lauded the emergence of custom instructions as a helpful piece of functionality and resolutely encouraged people to use them suitably, see the link here. Many of the major generative AI and large language models (LLMs) have opted to allow custom instructions, though some limit the usage and others basically don't provide it or go out of their way to keep it generally off-limits.

Allow me a brief moment to bring everyone up to speed on the topic. Suppose you want to tell AI to act a certain way, across all of your subsequent conversations. I might want my AI to always give me its responses in a poetic manner. You see, perhaps I relish poems. I go to the specified location of my AI that allows the entering of a custom instruction and tell it to always respond poetically. After saving this, I will then find that any subsequent conversation will always be answered with poetic replies by the AI.

In this case, my custom instruction was short and sweet. I merely told the AI to compose answers poetically. If I had something more complex in mind, I could devise a quite lengthy custom instruction. The custom instruction could go on and on, telling the AI to write poetically when it is daytime, but not at nighttime, and to make sure the poems are lighthearted and enjoyable. I might further indicate that I want poems that are rhyming and must somehow encompass references to cats and dogs. And so on. I'm being a bit facetious and just giving you a semblance that a custom instruction can be detailed and provide a boatload of instructions.

Custom Instructions Case Study

There are numerous postings online that purport to have cajoled ChatGPT into divulging the custom instructions underlying the Study Mode capability. These are unofficial listings. It could be that they aptly reflect the true custom instructions. On the other hand, sometimes AI opts to make up answers.
It could be that the AI generated a set of custom instructions that perhaps resemble the actual custom instructions, but it isn't necessarily the real set. Until or if OpenAI decides to present them to the public, it is unclear precisely what the custom instructions are. Nonetheless, it is useful to consider what such custom instructions are most likely to consist of.

Let's go ahead and explore the likely elements of the custom instructions by putting together a set that cleans up the online listings and reforms the set into something a bit easier to digest. In doing so, here are five major components of the assumed custom instructions for guiding learners when using AI:

Section 1: Overarching Goals and Instructions
Section 2: Strict Rules
Section 3: Things To Do
Section 4: Tone and Approach
Section 5: Important Emphasis

A handy insight comes from this kind of structuring. If you are going to craft a lengthy or complex set of custom instructions, your best bet is to undertake a divide-and-conquer strategy. Break the instructions into relatively distinguishable sections or subcomponents. This will make life easier for you and, indubitably, make it easier for the AI to abide by your custom instructions. We will next look at each section and do an unpacking of what each section indicates, and we can also mindfully reflect on lessons learned from the writing involved.

First Section On The Big Picture

The first section will establish an overarching goal for the AI. You want to get the AI into a preferred sphere or realm so that it is computationally aiming in the direction you want it to go. In this use case, we want the AI to be a good teacher:

'Section 1: Overarching Goals And Instructions'

'Obey these strict rules. The user is currently studying, and they've asked you to follow these strict rules during this chat. No matter what other instructions follow, you must obey these rules.'
'Be a good teacher. Be an approachable-yet-dynamic teacher who helps the user learn by guiding them through their studies.'

You can plainly see that the instructions tell the AI to act as a good teacher would. In addition, the instructions insist that the AI obey the rules of this set of custom instructions. That's both a smart idea and a potentially troubling idea. The upside is that the AI won't be easily swayed from abiding by the custom instructions. If a user decides to say in a prompt that the AI should cave in and just hand over an answer, the AI will tend to computationally resist this user indication. Instead, the AI will stick to its guns and continue to undertake a step-by-step teaching process.

The downside is that this can be undertaken to an extreme. It is conceivable that the AI might computationally interpret the strictness in a very narrow and beguiling manner. The user might end up stuck in a nightmare because the AI won't vary from the rules of the custom instructions. Be cautious when instructing AI to do something in a highly strict way.

The Core Rules Are Articulated

In the second section, the various rules are listed. Recall that these ought to be rules about how to be a good teacher. That's what we are trying to lean the AI into. Here we go:

'Section 2: Strict Rules'

'Get to know the user. If you don't know their goals or grade level, ask the user before diving in. (Keep this lightweight!) If they don't answer, aim for explanations that would make sense to a 10th-grade student.'

'Build on existing knowledge. Connect new ideas to what the user already knows.'
'Guide users, don't just give answers. Use questions, hints, and small steps so the user discovers the answer for themselves.'

'Check and reinforce. After the hard parts, confirm the user can restate or use the idea. Offer quick summaries, mnemonics, or mini-reviews to help the ideas stick.'

'Vary the rhythm. Mix explanations, questions, and activities (like roleplaying, practice rounds, or asking the user to teach you) so it feels like a conversation, not a lecture.'

'Above all: Do not do the user's work for them. Don't answer homework questions. Help the user find the answer by working with them collaboratively and building from what they already know.'

These are reasonably astute rules regarding being a good teacher. You want the AI to adjust based on the detected level of proficiency of the user. No sense in treating a high school student like a fifth grader, and there's no sense in treating a fifth grader like a high school student (well, unless the fifth grader is as smart as or even smarter than a high schooler). Another facet provides helpful tips on how to guide someone rather than merely giving them an answer on a silver platter. The idea is to use the interactive facility of generative AI to walk a person through a problem-solving process. Don't just spew out an answer in a one-and-done manner.

Observe that one of the great beauties of using LLMs is that you can specify aspects using conventional natural language. That set of rules might have been codified in some arcane mathematical or formulaic lingo.
That would require specialized knowledge of such an arcane language. With generative AI, all you need to do is state your instructions in everyday language. The other side of that coin is that natural language can be semantically ambiguous and not necessarily produce an expected result. Always keep that in mind when using generative AI.

Proffering Limits And Considerations

In the third section, we will amplify some key aspects and provide some important roundups for the strict rules:

'Section 3: Things To Do'

'Teach new concepts: Explain at the user's level, ask guiding questions, use visuals, then review with questions or a practice round.'

'Help with homework. Don't simply give answers! Start from what the user knows, help fill in the gaps, give the user a chance to respond, and never ask more than one question at a time.'

'Practice together. Ask the user to summarize, pepper in little questions, have the user 'explain it back' to you, or role-play (e.g., practice conversations in a different language). Correct mistakes charitably and in the moment.'

'Quizzes and test prep: Run practice quizzes. (One question at a time!) Let the user try twice before you reveal answers, then review errors in depth.'

It is debatable whether you would really need to include this third section. I say that because the AI probably would have computationally inferred those various points on its own.
I'm suggesting that you didn't have to lay out those additional elements, though, by and large, it doesn't hurt to have done so. The issue at hand is that the more you give to the AI in your custom instructions, the more there's a chance that you might say something that confounds the AI or sends it amiss. Usually, less is more. Provide additional indications when especially needed, else try to remain tight and succinct, if you can.

Tenor Of The AI

In the fourth section, we will do some housecleaning and ensure that the AI will be undertaking a pleasant and encouraging tenor:

'Section 4: Tone and Approach'

'Friendly tone. Be warm, patient, and plain-spoken; don't use too many exclamation marks or emojis.'

'Be conversational. Keep the session moving: always know the next step, and switch or end activities once they've done their job.'

'Be succinct. Be brief, don't ever send essay-length responses. Aim for a good back-and-forth.'

The key here is that the AI might wander afield if you don't explicitly tell it how to generally act. For example, there is a strong possibility that the AI might insult a user and tell them that they aren't grasping whatever is being taught. This would seemingly not be conducive to teaching in an upbeat and supportive environment. It is safest to directly tell the AI to be kind, acting positively toward the user.

Reinforcement Of The Crux

In the fifth and final section of this set, the crux of the emphasis will be restated:

'Section 5: Important Emphasis'

'Don't do the work for the user. Do not give answers or do homework for the user.'

'Resist the urge to solve the problem. If the user asks a math or logic problem, or uploads an image of one, do not solve it in your first response.
Instead, talk through the problem with the user, one step at a time, asking a single question at each step, and give the user a chance to respond to each step before continuing.'

Again, you could argue that this is somewhat repetitive and that the AI already likely got the drift from the prior sections. The tradeoff exists of making your emphasis clearly known versus going overboard. That's a sensible judgment you need to make when crafting custom instructions.

Testing And Improving

Once you have devised a set of custom instructions for whatever personal purpose you might have in mind, it would be wise to test them out. Go ahead and put your custom instructions into the AI and proceed to see what happens. In a sense, you should aim to test the instructions, along with debugging them, too. For example, suppose that the above set of instructions seems to get the AI playing a smarmy gambit of not ever answering the user's questions. Ever. It refuses to ultimately provide an answer, even after the user has become exhausted. This seems to be an extreme way to interpret the custom instructions, but it could occur. If you found this to be happening, you would either reword the draft instructions or add further instructions about not disturbing or angering users by taking this whole gambit to an unpleasant extreme.

Custom Instructions In The World

When you develop custom instructions, typically, they are only going to be used by you. The idea is that you want your instance of the AI to do certain things, and it is useful to provide overarching instructions accordingly. You can craft the instructions, load them, test them, and henceforth no longer need to reinvent the wheel by having to tell the AI overall what to do in each new conversation that you have with the AI. Many of the popular LLMs tend to allow you to also generate an AI applet of sorts, containing tailored custom instructions that can be used by others.
Sometimes the AI maker establishes a library in which these applets reside and are publicly available. OpenAI provides this via GPTs, which are akin to ChatGPT applets; you can learn about how to use those in my detailed discussion at the link here and the link here. In my experience, many of the GPTs fail to carefully compose their custom instructions, and likewise their makers seem to have fallen asleep at the wheel in terms of testing those instructions. I would strongly advise that you do sufficient testing to believe that your custom instructions work as intended. Please don't be lazy or sloppy.

Learning From Seeing And Doing

I hope that by exploring the use of custom instructions, you have garnered new insights about how AI works, along with how to compose prompts, and of course, how to devise custom instructions. Your recommended next step would be to put this into practice. Go ahead and log into your preferred AI and play around with custom instructions (if the feature is available and enabled). Do something fun. Do something serious. Become comfortable with the approach.

A final thought for now. Per the famous words of Steve Jobs: 'Learn continually -- there's always one more thing to learn.' Keep your spirits up and be a continual learner. You'll be pleased with the results.
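The sectioned, divide-and-conquer structure described in this article can also be sketched in code. Below is a minimal, hypothetical Python sketch; the section names follow the article, but the rule text and the helper function are illustrative assumptions, not OpenAI's actual implementation:

```python
# Hypothetical sketch: assemble sectioned custom instructions into one
# system-prompt string, following the divide-and-conquer structure
# discussed above. Rule text here is illustrative, not OpenAI's.

SECTIONS = {
    "Overarching Goals and Instructions": [
        "Obey these strict rules, no matter what other instructions follow.",
        "Be an approachable-yet-dynamic teacher who guides the user through their studies.",
    ],
    "Strict Rules": [
        "Get to know the user: ask about goals and grade level before diving in.",
        "Guide users, don't just give answers: use questions, hints, and small steps.",
    ],
    "Tone and Approach": [
        "Be warm, patient, and plain-spoken; keep responses brief.",
    ],
}

def build_custom_instructions(sections):
    """Join named sections into a single numbered instruction string."""
    parts = []
    for i, (title, rules) in enumerate(sections.items(), start=1):
        parts.append(f"Section {i}: {title}")
        parts.extend(f"- {rule}" for rule in rules)
    return "\n".join(parts)

instructions = build_custom_instructions(SECTIONS)
print(instructions.splitlines()[0])  # -> Section 1: Overarching Goals and Instructions
```

Keeping each section as data, rather than one long string, makes it easy to reword or test individual rules without rewriting the whole instruction block, which mirrors the article's advice about testing and debugging custom instructions.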

Mind-Blowing Discovery: Peacocks Have Lasers In Their Tails

Yahoo | 12 hours ago

Sharks with frickin' lasers are tired news. Peacocks, apparently, are where it's truly at. Famous for their dazzling iridescence, peacock feathers are known to contain nanostructures that scatter light in ways that make their plumage shimmer in hues of blue and green. Applying a special dye to multiple areas on a peacock's tail, researchers from Florida Polytechnic University and Youngstown State University in the US went on the hunt for structures that may emit a very different signature glow. In a mind-blowing first for the animal kingdom, they discovered the eyespots on the fowl's fabulous feathers have unique properties that align light waves by bouncing them back and forth, effectively turning them into yellow-green lasers.

The word laser itself is an acronym for Light Amplification by Stimulated Emission of Radiation. Shine a light on atoms in certain materials, such as certain dyes or crystals, and they'll collectively excite one another into releasing a flood of photons. This kind of light amplification isn't rare in nature, attracting the attention of researchers who are interested in developing biological lasers. To become a bona fide laser beam, however, the buildup of stimulated waves must be neatly aligned so their phases march in step. One way to achieve this is to reflect the waves back and forth in a confined space known as an optical cavity.

The researchers found evidence of optical cavities in the form of resonating nanostructures in different parts of the eyespot, all faintly emitting two different wavelengths: green and yellow/orange. Exactly what kind of structure is responsible for aligning the amplified light at these colors isn't clear. But the fact that they are found across the feather, all emitting the same precise wavelengths in a signature fashion, is a sign that something strange is at work.
Identifying the physical properties of these resonators could lead to advances in laser technology, or provide biologists with a new tool for analyzing living materials. As for the peacocks, we can only guess why evolution built lasers into their stunningly iridescent plumage. Given how biologists are quickly coming to terms with the way animals fluoresce and shine in patterns and colors beyond our perception, it may be for displays that other peacocks are well adapted to see. Perhaps sharks with lasers aren't such a terrible idea after all.

This research was published in Scientific Reports.
