
Snowcap Compute raises $23 million for superconducting AI chips
SAN FRANCISCO: Snowcap Compute, a startup building artificial intelligence computing chips with superconducting technology, said on Monday it had raised $23 million and that Intel's former CEO will join its board.
Snowcap aims to build computers that could one day beat today's best artificial intelligence systems, while using a fraction of the electricity. To do that, Snowcap plans to use a new kind of chip made with superconductors, which are materials that allow current to flow without electrical resistance.
Scientists understand superconductors well and have theorized about making computer chips with them since at least the 1990s, but have faced a major challenge: to work, the chips need to be kept very cold in cryogenic coolers, which themselves consume a lot of electricity.
For decades that made superconductor chips a nonstarter. The calculus changed when AI chatbots ignited huge demand for computing power just as conventional chips hit the limits of how much performance they can wring from each watt and began taxing electricity grids.
Nvidia's forthcoming "Rubin Ultra" AI data center server, due in 2027, is expected to consume about 600 kilowatts of power. Operating that single server at full capacity for one hour would therefore use about two thirds of the electricity that an average U.S. house consumes in a month.
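The comparison above can be checked with quick arithmetic. This is a sketch: the roughly 900 kWh/month average for a U.S. household is an assumed round figure (close to government estimates), not a number from the article.

```python
# Sanity-check of the server-vs-household comparison.
server_power_kw = 600                       # Rubin Ultra rack power, per the article
energy_for_one_hour_kwh = server_power_kw * 1   # kW x hours = kWh

household_kwh_per_month = 900               # assumed average monthly household use

fraction = energy_for_one_hour_kwh / household_kwh_per_month
print(round(fraction, 2))                   # about 0.67, i.e. roughly two thirds
```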
In that kind of changed world, dedicating a portion of a data center's power needs to cryogenic coolers makes sense if the performance gains are good enough, said Michael Lafferty, Snowcap's CEO, who formerly oversaw work on futuristic chips at Cadence Design Systems. Snowcap believes that even after accounting for energy used in cooling, its chips will be about 25 times better than today's best chips in terms of performance per watt.
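Snowcap's trade-off can be illustrated with a toy calculation: if the raw speedup is large enough, performance per total watt stays ahead even after the cryocooler's overhead is charged against the chip. All numbers below are invented for the sketch; Snowcap has not published these figures.

```python
def perf_per_watt(perf, chip_watts, cooling_watts=0.0):
    """Performance divided by total power draw, cryogenic cooling included."""
    return perf / (chip_watts + cooling_watts)

# Baseline: a conventional chip delivering 1 unit of work per watt.
conventional = perf_per_watt(perf=1.0, chip_watts=1.0)

# Hypothetical superconducting chip: 100x the work per chip watt, but the
# cryocooler burns an extra 3 W for every 1 W dissipated at low temperature.
superconducting = perf_per_watt(perf=100.0, chip_watts=1.0, cooling_watts=3.0)

print(superconducting / conventional)       # 25.0 with these made-up numbers
```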
"Power (efficiency) is nice, but performance sells," Lafferty said. "So we're pushing the performance level way up and pulling the power down at the same time."
Snowcap's founding team includes two scientists - Anna Herr and Quentin Herr - who have done extensive work on superconducting chips at chip industry research firm Imec and defense firm Northrop Grumman, as well as former chip executives from Nvidia and Alphabet's Google.
While the chips can be made in a standard factory, they will require an exotic metal called niobium titanium nitride, whose key ingredients, Lafferty said, come from Brazil and Canada. Snowcap plans to complete its first basic chip by the end of 2026, but full systems will not come until later.
Despite the long development timeline, Pat Gelsinger, Intel's former CEO who led the investment for venture firm Playground Global and is joining Snowcap's board, said the computing industry needs a sharp break from its current trajectory of consuming ever more electricity.
"A lot of data centers today are just being limited by power availability," Gelsinger said.

Related Articles


International Business Times
5 hours ago
Tengr.ai: A privacy-by-design generative AI platform
Generative AI (GenAI) is rapidly reshaping industries from media to medicine, but concerns around privacy, transparency, data integrity and ethics are rising with it. A recent Deloitte report pointed to heightened skepticism, with over 78% of users finding it challenging to control the data collected about them. AI image generators like Midjourney, DALL-E or Stable Diffusion raise serious privacy concerns, from using personal photos without consent in training data to unintentionally recreating real faces. They have also been used to create fake identities in online scams. As the technology evolves, experts warn that safeguards, especially for vulnerable groups like children, are lagging behind.

Hungary-based Tengr.ai aims to tackle that with its privacy-by-design creative GenAI platform, which is used by over 500,000 users worldwide.

What is Tengr.ai

Tengr.ai is an ethical image generation infrastructure that lets users create without censorship issues or data harvesting. It is designed for creators, businesses, educators and more, with a strong emphasis on user privacy and creative freedom. The company employs its proprietary Hyperalign™ technology to balance uncensored creative expression with safety, allowing the generation of diverse content while preventing misuse such as deepfakes or harmful imagery. "Users retain full ownership of the images they create, enabling them to use their creations for commercial purposes without restrictions," says Péter W. Szabó, CEO and co-founder of Tengr.ai.

How Tengr.ai works

Unlike competitors that harvest personal data or impose restrictive licenses, Tengr.ai is designed with privacy at its core. It does not collect or store any personal information, and users maintain full commercial rights to all images they create. Its Hyperalign™ technology quietly converts risky prompts into safe, compliant results, avoiding the constant battle of traditional filters while maintaining seamless creative freedom.
Tengr.ai also recently announced Quantum 3.0, an upgraded image generation engine that sets a benchmark for prompt fidelity, rendering speed and photorealism, all while retaining the existing infrastructure. "The Quantum 3.0 Engine uses advanced diffusion-transformer technology to accurately interpret complex prompts, reducing image revisions by 38% and enhancing fine details like hair and typography," says Szabó. The Detailer Upscaler 3.0 claims to boost images up to 8x resolution with lifelike textures, offering "Details Only" and combined upscale modes for crisp prints. Its One-Click Background Swap, powered by ScenaNova, claims to isolate subjects and create custom backdrops.

Why privacy and personal data matter

"AI image generators are raising serious privacy concerns," says Szabó. From models unintentionally recreating real people's faces to fake profiles used in scams, these tools can misuse personal data in harmful ways. Lawsuits like Getty Images vs. Stability AI highlight the unauthorised use of private photos in training data. Protecting personal data is not just about compliance; it is about respecting individual rights, preventing real-world harm and acting ethically in an increasingly digital world.

Tengr.ai's introduction into Web3

Brands like Jack Wolfskin and Tesa SE are already using Tengr.ai for product visualisation, while the company's architectural partner Zindak AI uses the platform to turn sketches and CAD renders into photorealistic imagery. Tengr.ai is also introducing its native $TENGR utility token into its platform to enhance user engagement and expand its ecosystem. Earlier this year, the company completed an equity funding round aimed at developing and launching the $TENGR utility token, integrating blockchain technology into its platform.
Through Web3 initiatives and a utility token, the platform aims to empower and monetise its community in a more collaborative way, while ensuring that no personal data is collected or stored and that users retain full commercial rights to every image they generate.

Straits Times
5 hours ago
Is AI cheating on the rise? Few cases reported by S'pore universities, but experts warn of risks
SINGAPORE - The number of students caught plagiarising and passing off content generated by artificial intelligence as their own work remains low, said the public universities, following a recent case at Nanyang Technological University (NTU). But professors here are watching closely for signs of misuse, warning that over-reliance on AI could undermine learning. Some are calling for more creative forms of assessment.

Their comments follow NTU's decision to award three students zero marks for an assignment after discovering they had used gen AI tools in their work. The move drew attention after one of the students posted about it on Reddit, sparking debate about the growing role of AI in education and its impact on academic integrity.

All six universities here generally allow students to use generative AI to varying degrees, depending on the module or coursework. Students are required to declare when and how they use such tools, to uphold academic integrity.

In the past three years, Singapore Management University (SMU) recorded 'less than a handful' of cases of AI-related academic misconduct, it said, without giving specific numbers. Similarly, the Singapore University of Technology and Design (SUTD) has encountered a 'handful of academic integrity cases, primarily involving plagiarism' during the same period. At the Singapore University of Social Sciences (SUSS), confirmed cases of academic dishonesty involving generative AI remain low, but it has seen a 'slight uptick' in such reports, partly due to heightened faculty vigilance and the use of detection tools.
The other universities - the National University of Singapore (NUS), the Singapore Institute of Technology (SIT) and NTU - did not respond to queries about whether more students have been caught flouting the rules using AI.

Recognising that AI technologies are here to stay, universities said they are exploring better ways to integrate such tools meaningfully and critically into learning. Gen AI refers to technologies that can produce human-like text, images or other content based on prompts. Educational institutions worldwide have been grappling with balancing its challenges and opportunities while maintaining academic integrity.

Faculty members here have the flexibility to decide how AI can be used in their courses, as long as their decisions align with university-wide policies. NUS allows AI use for take-home assignments if properly attributed, although instructors have to design complex tasks to prevent over-reliance. For modules focused on core skills, assessments may be done in person or designed to go beyond AI's capabilities. At SMU, instructors inform students which AI tools are allowed and guide them on their use, typically for idea generation or research-heavy projects outside exams. SIT has reviewed assessments and trained staff to manage AI use, encouraging it in advanced courses like coding but restricting it in foundational ones, while SUTD has integrated gen AI into its design thinking curriculum to foster higher-order thinking. The idea is to teach students when AI should be a tool, a partner, or avoided altogether. Universities said that students must ensure originality and credibility in their work.

The allure of gen AI

Students interviewed by ST, who requested to remain anonymous, said AI usage is widespread among their peers. 'Unfortunately, I think that (using generative AI) is the norm nowadays. It has become so rare to see people think on their own first before sending their assignments into ChatGPT,' said a 21-year-old fourth-year law student from SUSS.
Still, most students said they have a sense of when it is appropriate to use AI and when it is not. Several said they use it mainly for brainstorming, collating research and sometimes while writing. A 20-year-old Year 4 economics student from NTU said he does not see AI as anything more than a 'really smart study buddy' that helps him clarify difficult concepts, similar to how one would consult a professor. A third-year SMU political science student, 22, said she uses AI to fix her grammar before submitting her essays, but draws the line at copying essays wholesale from ChatGPT.

But some students said they would turn to AI to quickly complete general modules outside their specialisations that they feel are not worth their personal effort. AI may improve efficiency, but there is a 'level of wisdom that needs to come with that usage', said a third-year public policy and global affairs student from NTU. The 21-year-old said she would not use ChatGPT for tasks that require her personal opinion, but would use it 'judiciously' to complete administrative matters. Other students said they avoid relying too much on AI, as they take pride in their work. A 23-year-old Year 3 computer science student from SUTD said he wants to remain 'self-disciplined' in his use of AI because he realised he needed to learn from his mistakes in order to improve academically.

More creativity needed in testing

Academics say universities must bring AI use into the open and rethink assessments to stay ahead. SMU Associate Professor of Marketing Education Seshan Ramaswami embraces AI tools, but with caveats. In recent terms, he has encouraged students to use AI, provided they submit a full account of how the tools were used and critique their outputs. He also uses AI tools to create practice quizzes, and a chatbot that allows students to ask questions about his class materials. But he tells them not to 'blindly trust' its responses.
The real danger lies in uncritical AI use, he added, which can weaken students' judgment, clarity in writing or personal integrity. Dr Ramaswami said he is 'going to have to be even more thoughtful about the design of course assessments and pedagogy'. He may explore methods like 'hyper-local' assignments based on Singapore-specific contexts, oral examinations to test depth of understanding, and in-class discussions where devices are put away and ideas are exchanged in real time. Even long-standing assessment formats like individual essays may need to be reconsidered, he said.

Dr Thijs Willems, a research fellow at the Lee Kuan Yew Centre for Innovative Cities at SUTD, said that while essays, presentations and prototypes still matter, these are no longer the sole markers of achievement. More attention needs to be paid to the originality of ideas, the sophistication with which AI is prompted and questioned, and the human judgment used to reshape machine output into something unexpected, he said. These qualities 'surface most clearly in reflective journals, prompt logs, design diaries, spontaneous oral critiques, and peer feedback sessions', he added.

SUSS Associate Professor Wang Yue, head of the Doctor of Business Administration Programme, said undergraduates should already have basic cognitive skills and foundational knowledge. 'AI frees us to focus on higher-order thinking like developing insights and exercising wisdom,' she said, adding that restricting AI would be counterproductive to preparing students for the workplace.

Critical thinking needed more than ever

The same speed that makes AI exciting is also its potential hazard, said Dr Willems, warning that learners who treat it as a 'one-click answer engine' risk accepting mediocre work and weakening their own understanding. The key is to focus on the quality of human-AI interaction, he said.
'Once learners adopt the stance of investigators of their own practice, their critical engagement with both technology and subject matter deepens.'

Dr Jean Liu, director at the Centre for Evidence and Implementation and adjunct assistant professor at the NUS Yong Loo Lin School of Medicine, said that while AI offers major advantages for learning, universities must clearly define the line between acceptable use and academic dishonesty. 'AI can act as a tutor who provides personalised explanations and feedback… or function as an experienced mentor or thought partner for projects,' she said. But the line is drawn when students let AI do the work wholesale. 'In an earlier generation, a student might pay a ghost writer to complete an essay,' Dr Liu said. 'Submitting a ChatGPT essay falls into the same category and should be banned.'

'In general, it's best practice to come to an AI platform with ideas on the table, not to have AI do all the work. Helping students find this balance should be a key goal of educators.' Universities must be upfront about what kinds of AI use are acceptable for students, and provide clearer guidance, she added.

Dr Jason Tan, associate professor for policy, curriculum and leadership at the National Institute of Education, said the rise of AI is testing students' integrity and sense of responsibility. Over-reliance on AI tools could also erode critical thinking, he added. 'Students have to decide for themselves what they want to get out of their university education,' he said.


CNA
6 hours ago
US Senate Republicans aim to push ahead on Trump's sweeping tax-cut, spending Bill
WASHINGTON: US Senate Republicans will seek to push President Donald Trump's sweeping tax-cut and spending Bill forward on Saturday (Jun 28) with a procedural vote that could kick off a marathon weekend session.

The Bill would extend the 2017 tax cuts that were Trump's main first-term legislative achievement, cut other taxes and boost spending on the military and border security. Nonpartisan analysts estimate that a version passed by the House of Representatives last month would add about US$3 trillion to the nation's US$36.2 trillion government debt.

Senate Republicans have been deeply divided over plans to partly offset the Bill's heavy hit to the deficit, including by cutting the Medicaid health insurance program for low-income Americans. Republicans are using a legislative manoeuvre to bypass the Senate's 60-vote threshold for advancing most legislation in the 100-member chamber. Their narrow margins in the Senate and House mean they can afford no more than three Republican no votes to advance a Bill that Democrats are united in opposing, saying it takes a heavy toll on low- and middle-income Americans to benefit the wealthy.

Trump has pushed for Congress to pass the Bill by the Jul 4 Independence Day holiday. The White House said early this month that the legislation, which Trump calls the "One Big Beautiful Bill", would reduce the annual deficit by US$1.4 trillion.

While a handful of Republicans in both chambers have voiced opposition to some of the Bill's elements, this Congress has so far not rejected any of the president's legislative priorities. A successful vote to open debate would kick off a lengthy process that could run into Sunday, as Democrats unveil a series of amendments that are unlikely to pass in a chamber Republicans control 53-47.
TAX BREAKS, SPENDING CUTS

Democrats will focus their firepower on amendments aimed at reversing Republican spending cuts to programs that provide government-backed healthcare to the elderly, poor and disabled, as well as food aid to low-income families. Senate Democratic Leader Chuck Schumer summarised his party's opposition to the Bill at a Friday press conference, saying "it has the biggest cuts to food funding ever" and that it could result in more than 2 million people losing their jobs. He also highlighted the Republican rollback of clean energy initiatives ushered in by the Biden administration.

Republican Senate Majority Leader John Thune stressed the tax-cut components in a Friday speech to the Senate. "The centrepiece of our Bill is permanent tax relief for the American people," he said as he showcased legislation that contains a new tax break for senior citizens and other taxpayers. The measure, Thune said, will "help get our economy firing on all cylinders again". It would also raise the Treasury Department's statutory borrowing limit by trillions of dollars to stave off a first default on its debt in the coming months.

If the Senate manages to pass Trump's top legislative goal by early next week, the House would be poised to quickly apply the final stamp of approval, sending it to Trump for signing into law. But with Senate Republicans struggling to find enough spending cuts to win the support of the party's far right, Trump on Friday loosened the leash a bit, saying his Jul 4 deadline for wrapping it all up was "important" but "it's not the end-all".

Among the most difficult disagreements Senate Republicans struggled to resolve late on Friday were the size of a cap on deductions for state and local taxes and a Medicaid cost-saving measure that could hobble rural hospitals.