Polkadot Joins BitBasel's 'Art for Impact' Program to Send Artwork to the Moon

Global call for artists: Submit work inspired by the 17 United Nations Sustainable Development Goals to be preserved on the lunar surface
Miami, FL – April 29, 2025 – Polkadot has joined BitBasel's Art for Impact Space Program, a visionary initiative inviting artists to create work inspired by the 17 United Nations Sustainable Development Goals (SDGs) for inclusion in a permanent art archive headed to the surface of the Moon.
This one-of-a-kind program is part of the GLPH (Galactic Library Preserve Humanity) archival payload, set to launch on the upcoming Astrobotic Griffin-1 lunar mission under NASA's Commercial Lunar Payload Services (CLPS) program. Using Nanofiche, a space-grade technology designed to endure the extreme conditions of space, selected artworks will be preserved for billions of years on the Moon as a testament to human creativity, purpose, and legacy.
Polkadot, one of Web3's most powerful and sustainable blockchain ecosystems, joins BitBasel in rallying creators around the world to contribute visual works that reflect on one or more of the 17 SDGs, including climate action, quality education, gender equality, clean energy, and sustainable innovation.
Submissions must be representable as still imagery and may include digital art, photography, illustration, painting, or mixed media. Artists are encouraged to create pieces that not only showcase their talent but also spark global awareness and inspire future generations.
A curation committee composed of representatives from BitBasel, Polkadot, GLPH, and the University of Florida Blockchain Lab will evaluate submissions and select the final group of artworks to be included in the Moon-bound archive. Selected artists will be formally recognized as part of this historic cultural and space mission.
'This is a once-in-a-lifetime opportunity for artists to etch their values into history—literally,' said Scott Spiegel, Co-founder & CEO of BitBasel. 'With Polkadot's support, we're expanding the reach of this mission across the Web3 ecosystem and beyond. The Moon is the canvas, and together we're sending a message that art can drive awareness, action, and legacy.'
The submission deadline is May 9, 2025, and the selected works will launch aboard a SpaceX rocket later this year.
To learn more and submit your artwork, visit form.typeform.com/to/EiS1sHtG
For media inquiries, please contact Jonathan Duran at Jonathan(at)Distractive(dot)xyz
###
About BitBasel
BitBasel is a pioneering platform at the intersection of art, blockchain, and emerging technology. Founded in Miami in 2020, BitBasel empowers artists, collectors, and technologists through immersive experiences, curated exhibitions, and digital marketplaces. With a mission to 'Pioneer the Future of the Arts,' BitBasel has launched historic initiatives—from lunar art missions to global Web3 education programs.
About Polkadot
Polkadot is the powerful, secure core of Web3, providing a shared foundation that unites some of the world's most transformative apps and blockchains. Polkadot offers an advanced modular architecture that lets developers easily design and build their own specialized blockchain projects; pooled security that ensures the same high standard of secure block production across all connected chains and apps; and robust governance that gives everyone a say in shaping the blockchain ecosystem for growth and sustainability. With Polkadot, users are not just participants; they're co-creators with the power to shape its future.
About GLPH
The Galactic Library Preserve Humanity (GLPH) is a groundbreaking archival initiative using Nanofiche technology to preserve humanity's cultural legacy on the Moon for billions of years.
About University of Florida Blockchain Lab
The University of Florida Blockchain Lab is an academic research center advancing the development and application of blockchain technology through education, innovation, and interdisciplinary collaboration.

Related Articles

Bybit Megadrop Phase 7 Project COA Breaks Record with 50 Million USDT Products Sold Out Within 4 Hours
Business Insider • 20 hours ago

Dubai, United Arab Emirates, July 19th, 2025, Chainwire – Bybit, the world's second-largest cryptocurrency exchange by trading volume, announced that its Megadrop Phase 7 project featuring COA tokens has achieved record-breaking participation, with users staking over 100 million USDT within the first 14 hours of launch. After COA (Alliance Games) went live on Bybit Megadrop at 12PM UTC on July 18, the associated 30-day USDT Earn product sold out completely within four hours, while the 14-day pool sold out in the next 10 hours. The total staked amount was valued at 105 million USDT.

COA is the native token of the decentralized network Alliance Games, which integrates AI-driven game creation, blockchain-integrated multiplayer networks, and a distributed work node system. The COA token powers the entire ecosystem: holders who are developers can use it to access infrastructure, node operators can earn rewards, and users can stake, govern, and unlock advanced features.

Bybit Megadrop provides a structured and educational approach with dual rewards. Participants not only receive their regular APR returns from savings plans but also gain shares of new token airdrops proportionate to their subscription amount and based on completion of optional educational tasks.

Key Features
• Airdrops Made Easy: Simplified process that makes earning new tokens accessible to users of all experience levels
• Maximizing Rewards: Participants can dramatically increase their airdrop allocations through simple engagement tasks
• Risk-free participation: Users leverage existing assets through Fixed-Term Savings plans rather than purchasing new tokens
• Dual reward structure: Earn both regular APR and free token airdrops simultaneously
• Pre-market access: Obtain tokens before they're listed on Bybit Spot

#Bybit / #TheCryptoArk

About Bybit
Bybit is the world's second-largest cryptocurrency exchange by trading volume, serving a global community of over 70 million users. Founded in 2018, Bybit is redefining openness in the decentralized world by creating a simpler, open, and equal ecosystem for everyone. With a strong focus on Web3, Bybit partners strategically with leading blockchain protocols to provide robust infrastructure and drive on-chain innovation. Renowned for its secure custody, diverse marketplaces, intuitive user experience, and advanced blockchain tools, Bybit bridges the gap between TradFi and DeFi, empowering builders, creators, and enthusiasts to unlock the full potential of Web3. Discover the future of decentralized finance at

Contact
Tony Au
Bybit
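As a rough illustration of the dual-reward structure described above, the sketch below computes a participant's fixed-term interest plus a pro-rata airdrop share. All figures (APR, pool size, airdrop supply) are hypothetical placeholders rather than Bybit's actual terms, and the pro-rata formula is an assumption about how 'proportionate to their subscription amount' could work.

```python
# Illustrative arithmetic for a dual-reward structure like the one described
# above: fixed-term savings interest plus an airdrop share proportional to
# the participant's subscription. All numbers are hypothetical, not Bybit's.

def dual_reward(stake_usdt, apr, term_days, pool_total_usdt, airdrop_tokens):
    """Return (interest in USDT, airdrop share in tokens)."""
    interest = stake_usdt * apr * term_days / 365             # simple interest
    airdrop = airdrop_tokens * stake_usdt / pool_total_usdt   # pro-rata share
    return interest, airdrop

interest, airdrop = dual_reward(
    stake_usdt=1_000, apr=0.05, term_days=30,
    pool_total_usdt=105_000_000, airdrop_tokens=10_000_000)
print(f"{interest:.2f} USDT interest, {airdrop:.2f} tokens airdropped")
```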

AI's Achilles Heel—Puzzles Humans Solve in Seconds Often Defy Machines
Scientific American • 2 days ago

There are many ways to test the intelligence of an artificial intelligence: conversational fluidity, reading comprehension, or mind-bendingly difficult physics. But some of the tests most likely to stump AIs are ones that humans find relatively easy, even entertaining.

Though AIs increasingly excel at tasks that require high levels of human expertise, this does not mean they are close to attaining artificial general intelligence, or AGI. AGI requires that an AI can take a very small amount of information and use it to generalize and adapt to highly novel situations. This ability, which is the basis for human learning, remains challenging for AIs.

One test designed to evaluate an AI's ability to generalize is the Abstraction and Reasoning Corpus, or ARC: a collection of tiny, colored-grid puzzles that ask a solver to deduce a hidden rule and then apply it to a new grid. Developed by AI researcher François Chollet in 2019, it became the basis of the ARC Prize Foundation, a nonprofit that administers the test, now an industry benchmark used by all major AI models. The organization also develops new tests and has been routinely using two (ARC-AGI-1 and its more challenging successor, ARC-AGI-2). This week the foundation is launching ARC-AGI-3, which is specifically designed for testing AI agents and is based on making them play video games.

Scientific American spoke to ARC Prize Foundation president, AI researcher and entrepreneur Greg Kamradt to understand how these tests evaluate AIs, what they tell us about the potential for AGI, and why they are often challenging for deep-learning models even though many humans tend to find them relatively easy. Links to try the tests are at the end of the article.

[An edited transcript of the interview follows.]

What definition of intelligence is measured by ARC-AGI-1?

Our definition of intelligence is your ability to learn new things. We already know that AI can win at chess. We know they can beat Go. But those models cannot generalize to new domains; they can't go and learn English. So what François Chollet made was a benchmark called ARC-AGI. It teaches you a mini skill in the question, and then it asks you to demonstrate that mini skill. We're basically teaching something and asking you to repeat the skill that you just learned. So the test measures a model's ability to learn within a narrow domain. But our claim is that it does not measure AGI because it's still in a scoped domain [in which learning applies to only a limited area]. It measures that an AI can generalize, but we do not claim this is AGI.

How are you defining AGI here?

There are two ways I look at it. The first is more tech-forward, which is 'Can an artificial system match the learning efficiency of a human?' Now what I mean by that is after humans are born, they learn a lot outside their training data. In fact, they don't really have training data, other than a few evolutionary priors. We learn how to speak English, we learn how to drive a car, and we learn how to ride a bike, all of these things outside our training data. That's called generalization. When you can do things outside of what you've been trained on, we define that as intelligence.

Now, an alternative definition of AGI that we use is when we can no longer come up with problems that humans can do and AI cannot; that's when we have AGI. That's an observational definition. The flip side is also true: as long as the ARC Prize or humanity in general can still find problems that humans can do but AI cannot, then we do not have AGI. One of the key factors about François Chollet's benchmark ... is that we test humans on them, and the average human can do these tasks and these problems, but AI still has a really hard time with it. The reason that's so interesting is that some advanced AIs, such as Grok, can pass any graduate-level exam or do all these crazy things, but that's spiky intelligence. It still doesn't have the generalization power of a human. And that's what this benchmark shows.

How do your benchmarks differ from those used by other organizations?

One of the things that differentiates us is that we require our benchmark to be solvable by humans. That's in opposition to other benchmarks, where they do 'Ph.D.-plus-plus' problems. I don't need to be told that AI is smarter than me; I already know that OpenAI's o3 can do a lot of things better than me, but it doesn't have a human's power to generalize. That's what we measure on, so we need to test humans. We actually tested 400 people on ARC-AGI-2. We got them in a room, we gave them computers, we did demographic screening, and then gave them the test. The average person scored 66 percent on ARC-AGI-2. Collectively, though, the aggregated responses of five to 10 people will contain the correct answers to all the questions on ARC-AGI-2.

What makes this test hard for AI and relatively easy for humans?

There are two things. Humans are incredibly sample-efficient with their learning, meaning they can look at a problem and, with maybe one or two examples, pick up the mini skill or transformation and go and do it. The algorithm that's running in a human's head is orders of magnitude better and more efficient than what we're seeing with AI right now.

What is the difference between ARC-AGI-1 and ARC-AGI-2?

So ARC-AGI-1, François Chollet made that himself. It was about 1,000 tasks. That was in 2019. He basically did the minimum viable version in order to measure generalization, and it held for five years because deep learning couldn't touch it at all. It wasn't even getting close. Then reasoning models that came out in 2024, by OpenAI, started making progress on it, which showed a step-level change in what AI could do. Then, when we went to ARC-AGI-2, we went a little bit further down the rabbit hole in regard to what humans can do and AI cannot. It requires a little bit more planning for each task. So instead of getting solved within five seconds, humans may be able to do it in a minute or two. There are more complicated rules, and the grids are larger, so you have to be more precise with your answer, but it's the same concept, more or less.... We are now launching a developer preview for ARC-AGI-3, and that's completely departing from this format. The new format will actually be interactive. So think of it more as an agent benchmark.

How will ARC-AGI-3 test agents differently compared with previous tests?

If you think about everyday life, it's rare that we have a stateless decision. When I say stateless, I mean just a question and an answer. Right now all benchmarks are more or less stateless benchmarks. If you ask a language model a question, it gives you a single answer.

There's a lot that you cannot test with a stateless benchmark. You cannot test planning. You cannot test exploration. You cannot test intuiting about your environment or the goals that come with that. So we're making 100 novel video games that we will use to test humans on, to make sure that humans can do them, because that's the basis for our benchmark. Then we're going to drop AIs into these video games and see if they can understand an environment they've never seen before. To date, with our internal testing, we haven't had a single AI beat even one level of one of the games.

Can you describe the video games here?

Each 'environment,' or video game, is a two-dimensional, pixel-based puzzle. These games are structured as distinct levels, each designed to teach a specific mini skill to the player (human or AI). To successfully complete a level, the player must demonstrate mastery of that skill by executing planned sequences of actions.

How is using video games to test for AGI different from the ways that video games have previously been used to test AI systems?

Video games have long been used as benchmarks in AI research, with Atari games being a popular example. But traditional video game benchmarks face several limitations: popular games have extensive training data publicly available, they lack standardized performance evaluation metrics, and they permit brute-force methods involving billions of simulations. Additionally, the developers building AI agents typically have prior knowledge of these games, unintentionally embedding their own insights into the solutions.
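The grid-puzzle format Kamradt describes is published openly. Below is a minimal sketch of how an ARC-style task is represented and checked, assuming the public JSON layout from the original ARC repository (github.com/fchollet/ARC): each task has 'train' and 'test' example pairs, and each grid is a list of rows of integers 0 through 9 denoting colors. The toy task and the mirror rule are invented for illustration, not drawn from the benchmark itself.

```python
# A minimal sketch of an ARC-style task in the public JSON layout
# (assumed from github.com/fchollet/ARC). The toy task and the solver
# rule below are illustrative, not taken from the actual benchmark.
import json

toy_task = json.loads("""
{
  "train": [
    {"input": [[1, 0], [0, 0]], "output": [[0, 1], [0, 0]]},
    {"input": [[2, 0], [0, 0]], "output": [[0, 2], [0, 0]]}
  ],
  "test": [
    {"input": [[5, 0], [0, 0]], "output": [[0, 5], [0, 0]]}
  ]
}
""")

def candidate_rule(grid):
    """Hypothetical rule induced from the train pairs: mirror each row."""
    return [row[::-1] for row in grid]

# A solver must infer the hidden rule from "train" alone, then apply it
# to the "test" input; here we check the induced rule on both splits.
for pair in toy_task["train"]:
    assert candidate_rule(pair["input"]) == pair["output"]

test = toy_task["test"][0]
print(candidate_rule(test["input"]) == test["output"])  # True
```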

Hyperbridge Expands Polkadot DeFi Access with Uniswap V4 Integration
Associated Press • 2 days ago

The next phase of the Polkadot DeFi Singularity unlocks DOT utility across major EVM chains

[Zurich, Switzerland – July 17, 2025] – Following the successful launch of the Polkadot DeFi Singularity initiative, Hyperbridge is advancing its mission to deepen DOT's presence across leading DeFi ecosystems through a new partnership with Uniswap V4.

As Polkadot's native bridge, Hyperbridge has now launched DOT/ETH liquidity incentives on Uniswap V4 using Bunni, a specialized frontend for managing concentrated liquidity. Incentives for the ETH/DOT pool are currently live and will run for three months, until September 21, 2025. These incentives are aimed at rewarding liquidity providers and increasing DOT's utility across the Ethereum, Arbitrum, Base, and BNB ecosystems.

As part of the broader DeFi Singularity roadmap, incentives for the vDOT/ETH pool, powered by Bifrost, will follow shortly, with a scheduled launch on July 24, 2025. This expansion will further strengthen DOT's presence on Uniswap V4 by supporting both native and liquid-staked forms of DOT.

The partnership supports the broader goals of the DeFi Singularity campaign, a joint initiative by Hyperbridge and Bifrost, which secured 795,000 DOT in funding from the Polkadot Treasury to boost DOT accessibility on EVM-compatible chains.

'This integration with Uniswap V4 is a key step in making DOT a more versatile multichain asset,' said Seun Lanlege, Co-founder of Hyperbridge. 'We're using Uniswap V4's advanced infrastructure to bring DOT into deeper liquidity environments, and we're just getting started.'

For full details on active pools, pairings, and APRs, visit the official Hyperbridge blog.

For media inquiries, please contact Jonathan Duran at Jonathan(at)Distractive(dot)xyz

###

About Hyperbridge
Hyperbridge is a cryptoeconomic coprocessor for secure, verifiable interoperability powered by consensus and storage proofs. It acts as the HTTPS of blockchain interoperability, providing developers with onchain and offchain SDKs for securely sending cross-chain messages (POST requests) and reading on-chain storage (GET requests).

About Polkadot
Polkadot is the powerful, secure core of Web3, providing a shared foundation that unites some of the world's most transformative apps and blockchains. Polkadot offers an advanced modular architecture that lets developers easily design and build their own specialized blockchain projects; pooled security that ensures the same high standard of secure block production across all connected chains and apps; and robust governance that gives everyone a say in shaping the blockchain ecosystem for growth and sustainability. With Polkadot, users are not just participants; they're co-creators with the power to shape its future.
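To make the POST/GET analogy in the About Hyperbridge section concrete, here is a minimal, hypothetical sketch of what a cross-chain request model along those lines could look like. The types and field names below are invented for exposition and are not Hyperbridge's actual SDK API.

```python
# Hypothetical sketch of the cross-chain request model described above.
# These dataclasses and names are illustrative assumptions only; they are
# not Hyperbridge's actual SDK types or API.
from dataclasses import dataclass

@dataclass
class PostRequest:
    """A cross-chain message: deliver `body` from one chain to another."""
    source_chain: str
    dest_chain: str
    to: str        # destination contract/module address
    body: bytes    # opaque payload interpreted by the destination
    timeout: int   # block or timestamp after which the request can expire

@dataclass
class GetRequest:
    """A cross-chain read: fetch storage values from a counterparty chain."""
    source_chain: str
    dest_chain: str
    keys: list[bytes]  # storage keys to read on the destination chain
    height: int        # block height at which to read

# In a proof-based bridge, a relayer carries such requests together with
# consensus and storage proofs, which the destination verifies before
# acting on the message or returning the requested values.
msg = PostRequest("polkadot", "ethereum", "0xPool", b"hello", timeout=1_000)
query = GetRequest("ethereum", "polkadot", [b"account:alice"], height=12_345)
print(msg.dest_chain, len(query.keys))
```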
