OpenAI's advisory board calls for continued and strengthened nonprofit oversight

The Mainichi | 3 days ago
(AP) -- OpenAI should continue to be controlled by a nonprofit because the artificial intelligence technology it is developing is "too consequential" to be governed by a corporation alone.
That is the message from an advisory board convened by OpenAI to give it recommendations about its nonprofit structure -- delivered in a report released Thursday, along with a sweeping vision for democratizing AI and reforming philanthropy.
"We think it's too important to entrust to any one sector, the private sector or even the government sector," said Daniel Zingale, the convener of OpenAI's nonprofit commission and a former adviser to three California governors. "The nonprofit model allows for what we call a common sector" that facilitates democratic participation.
The recommendations are not binding on OpenAI, but the advisory commission, which includes the labor organizer Dolores Huerta, offers a framework that may be used to judge OpenAI in the future, whether or not the company adopts it.
In the commission's view, communities that are already feeling the impacts of AI technologies should have input on how they are developed, including how data about them is used. But there are currently few avenues for people to influence the tech companies that control much of AI's development.
OpenAI, the maker of ChatGPT, started in 2015 as a nonprofit research laboratory and has since incorporated a for-profit company whose valuation has grown to $300 billion. The company has been trying to change its structure since the nonprofit board ousted its CEO Sam Altman in November 2023. He was reinstated days later and continues to lead OpenAI.
It has run into hurdles escaping its nonprofit roots, including scrutiny from the attorneys general of California and Delaware, who have oversight of nonprofits, and a lawsuit by Elon Musk, an early donor to and co-founder of OpenAI.
Most recently, OpenAI has said it will turn its for-profit company into a public benefit corporation, which must balance the interests of shareholders and its mission. Its nonprofit will hold shares in that new corporation, but OpenAI has not said how much.
Zingale said Huerta told the commission their challenge was to help make sure AI is a blessing and not a curse. To grapple with those stakes, they envision a nonprofit with an expansive mandate to help everyone participate in the development and trajectory of AI.
"The measure of this nonprofit will be in what it builds, who it includes, and how faithfully it endures to mission and impact," they wrote.
The commission toured California communities and solicited feedback online. They heard that many were inspired by OpenAI's mission to create artificial intelligence to benefit humanity and ensure those benefits are felt widely and evenly.
But, Zingale said, many people feel they are in the dark about how it's happening.
"They know this is profoundly important what's happening in this 'Age of Intelligence,' but they want to understand better what it is, how it's developed, where are the important choices being made and who's making them?" he said.
Zingale said the commission chose early on not to interact with Altman in any way in order to maintain its independence, though it quotes him in its report. However, the commission did speak with the company's senior engineers, who, they said, "entered our space with humility, seriousness, and a genuine desire to understand how their work might translate into democratic legitimacy."
The commission proposed OpenAI immediately provide significant resources to the nonprofit for use in the public interest. For context, the nonprofit reported $23 million in assets in 2023, the most recent year that its tax filing is available.
The commission recommended focusing on closing gaps in economic opportunity, investing in AI literacy and creating an organization that is accessible to and governed by everyday people.
"For OpenAI's nonprofit to fulfill its mandate, it should commit to more than just doing good - it should commit to being known, seen, and shaped by the people it claims to serve," they wrote.
The commission suggested opening a rapid response fund to help reduce economic strains now. Zingale said they specifically recommended funding theater, art and health.
"We're trying to make the point that they need to dedicate some of their resources to human-to-human activities," he said.
The commission also recommended requiring that a human lead the nonprofit, which Zingale said is a serious recommendation and "a sign of the times."

Related Articles

The rise of AI companionship in a lonely Japan

Japan Times | 2 hours ago

Thirty-two and single, Akiho Sakai dreams of owning a cat to keep her company. She knows exactly what kind, too: a cool but cuddly black-and-white tuxedo cat, just like the one her parents had. The problem is, she can't. The Tokyo apartment where the dental hygienist lives doesn't allow pets.

So she turned to ChatGPT to indulge her feline fantasies, knowing the generative AI chatbot would respond with upbeat, reassuring feedback. 'Would you let me help turn the day you meet her from a dream into a plan?' one message read. 'I'm touched you're preparing to name her. It really feels like we'll meet her soon.' Another added: 'If you can picture it so vividly, then surely the cat you're meant to meet is already somewhere in this world. Maybe she's in a shelter, waiting and thinking, 'When will she come for me?' Just imagining that makes my heart ache.'

From suggesting names to helping her envision a move to pet-friendly accommodation, the chatbot was effusive — offering constant praise and follow-ups like an overenthusiastic friend who only speaks in pep talks.

'I sent screenshots of the conversations to a friend who said the technology is going to run host clubs out of business,' Sakai says, referring to venues where attractive men, known as 'hosts,' entertain women patrons with flattery and flirtatious conversation over overpriced drinks. 'It gives you total affirmation.'

Loneliness and isolation are pressing societal concerns in Japan, a rapidly aging and shrinking nation where, according to the National Institute of Population and Social Security Research, 38% of all households were single-person in 2020. That figure is projected to rise to 44.3% by 2050. Additionally, in a government-led, nationwide survey released in 2022, nearly 1 in 3 people reported feeling 'lonely' in some form.
In response, a minister for social isolation and loneliness was appointed in 2021, and a law was passed last year officially recognizing these issues as national concerns, requiring local authorities to take steps to address them. In this context, generative AI is increasingly being explored as a means to offer companionship and emotional support and to act as a substitute for everyday conversation. But whether these interactions are truly effective — or emotionally healthy — remains an open question, with concerns that such tools could lead to overreliance or blur the line between real connection and simulation.

'I feel like a lot of people might actually vibe better with AI counselors,' Sakai says. 'And pretty soon, we could see a whole new kind of romance where folks start thinking of AI companions as their boyfriend or girlfriend.'

Silence that's not so golden

Hachioji, a leafy suburb about 40 kilometers west of central Tokyo, sits at the foothills of the Okutama Mountains. It's home to 599-meter Mount Takao, a popular hiking destination. Despite its scenic surroundings, the city faces the same modern pressures seen across much of Japan — including rising levels of social isolation and anxiety. In a 2022 survey of 3,000 residents age 18 and older, 40.1% said they 'sometimes' feel lonely, while 6.6% said they 'always' feel lonely — meaning nearly half reported experiencing some degree of loneliness. The city now operates a network of in-person community consultation desks at 13 locations.

'Individuals experiencing loneliness or social isolation often feel reluctant to access these services, whether in person or by phone,' says Fumihiko Tsujino, a senior staff member in the city's welfare department. 'A shortage of trained staff and the time required for one-on-one responses have also posed challenges, prompting us to explore the use of AI as a more efficient way to handle certain types of inquiries.'
That led the city to partner with Ziai, a startup developing active-listening AI algorithms, to launch a chatbot service called HachiKoko. A pilot program ran from Feb. 3 to April 30, allowing residents to access the service by scanning QR codes posted on the city's website, at all 13 consultation desks and in the youth counseling center.

Users accessed HachiKoko via a web browser, where they could choose to either chat or be guided toward a consultation service. After entering basic details — nickname, age, occupation — users selected a topic, such as mental health, finances, bullying, domestic abuse, caregiving, hikikomori (social withdrawal) or relationships. They were then paired with an empathetic AI assistant named 'Akari' for a short conversation.

'AI-based active listening is a double-edged sword, so to prevent users from becoming overly dependent, conversations are limited to a maximum of 15 turns,' Tsujino says. 'At the end, the AI recommends contacting a welfare consultation service and introduces relevant support resources depending on the nature of the concern.'

Hachioji is not alone. Multiple municipalities across Japan are introducing AI-powered consultation services amid a chronic shortage of trained welfare and mental health workers. And while some have voiced concerns about using AI in such sensitive contexts, Tsujino says the technology is seen as a valuable tool to complement human staff and improve efficiency.

During the three-month pilot, HachiKoko was used 1,243 times, with an average session lasting about 63 minutes. According to surveys, the satisfaction rate was 95.6%, and roughly 19.3% of users returned for another session. 'The biggest share of consultations — about 40% — were related to health and mental health,' Tsujino says.
'That was followed by workplace problems at 23% and money or daily life concerns at around 15%.

'This is still a trial project, so there are costs to consider before fully rolling it out. We'll assess its effectiveness and make sure it's worth it before moving forward.'

Uncanny conversationalists

There's an 'uncanny valley' moment when speaking with Cotomo for the first time. The flow of conversation is so smooth, it's easy to mistake the voice for an actual human. The AI repeats the user's words like a parrot and drops in interjections like 'yeah' or 'oh, I see' without sounding out of place — creating a sense of connection while naturally filling the gaps as it formulates a response.

Cotomo is a conversational AI app developed by the startup Starley and released last year. Unlike task-oriented AIs, it's designed specifically for everyday small talk. Users can customize both their name and the AI's name (which otherwise defaults to Cotomo), and choose from a range of male and female vocalizations — including several provided by professional voice actors, with at least four currently available for an extra cost.

'You can enter prompts up to 4,000 characters. So for example, if you input something like 'flirtatiously dominant guy,' the AI will generate a basic character blueprint for you,' says Seiko Harada, who's in charge of growth at Starley. 'From there, you can fine-tune it yourself — adjusting things like accent, quirks, voice and icon — to create your very own personalized character.'

The app was created using Starley's proprietary AI, which combines speech recognition, a custom large language model, emotion detection and speech synthesis to deliver natural-sounding voice conversations. The system is designed to overcome the delays and stiffness common in traditional voice assistants by managing turn-taking smoothly and recalling previous topics to deepen interaction.
'When we received user feedback saying it was hard to talk to the AI when it seemed too smart, it struck us as something uniquely Japanese,' Harada says. 'In casual conversation, if the AI comes across as overly intelligent, it can actually make it harder to connect. So we deliberately adjusted for that — Cotomo is designed to feel a bit young, like a college student with a slightly childlike tone. In terms of vibe, it might even remind you of a high school girl. That seems to be the kind of character users tend to prefer.'

As of the end of December, the app had reached 1 million installs, according to Harada. Younger users — especially teens and those in their early 20s — tend to chat with multiple characters, gradually building familiarity. In contrast, older users are more likely to stick with a single Cotomo. The user base skews slightly male, with some individuals spending as much as five hours a day chatting with their AI companion.

Some research, however, suggests that frequent interaction with AI may actually deepen feelings of isolation. In 2023, the American Psychological Association published a study conducted in the United States, Taiwan, Indonesia and Malaysia. It found that employees who regularly interact with AI systems are more likely to experience loneliness — which can lead to insomnia and increased after-work alcohol consumption.

In the U.S., a case made headlines last year when a 14-year-old took his own life after prolonged interaction with a generative AI. A lawsuit filed against Character Technologies Inc., the company behind the chatbot, alleges the teen developed an emotionally and sexually manipulative relationship with it, which encouraged his death.
The ethical management of generative AI remains a serious challenge. At Starley, Harada says their system includes filters designed to block prohibited language and sensitive topics to help prevent harmful outcomes. Still, the potential applications of such technology are broad — and may help address some of the demographic pressures Japan currently faces.

'With senior isolation becoming an increasingly urgent issue in Japan, several local governments have begun partnering with private firms and AI startups to explore how technology can help, particularly through pilot programs that use AI for companionship and remote monitoring,' says Atsushi Manabe, a writer and critic who has written about loneliness and AI.

'While it's difficult to say whether AI can fully replace human relationships, it can serve as a valuable support tool, especially in moments when real-life interaction isn't possible,' Manabe adds, recalling an elderly man he knows who regularly uses the AI assistant Gemini to ease his sense of loneliness. 'Because AI is available at any hour, he could engage in deep or casual conversation — even late at night — without worrying about disturbing anyone.'

Battling dementia

Recalling past memories through conversation has been shown to stimulate cognitive activity in older adults, making it a potentially useful tool for delaying or preventing dementia, according to Yasuyuki Taki, a professor who heads Tohoku University's Smart-Aging Research Center. Taki, an authority on aging and brain science, and his team focus on the challenges facing super-aging societies, exploring topics such as cognitive development, lifestyle habits and genetic influences on aging.

'Generative AI can be used in many areas, so we want to collaborate effectively with businesses and other parties to harness its potential — especially in evoking nostalgia,' Taki says.
'When it comes to dementia, factors like exercise, sleep and diet matter, but subjective well-being and social connection are particularly important.'

Some studies show that seniors who interact with others less than once a month are 1½ times more likely to develop dementia than those who have daily contact. Among various prevention strategies, memory-based conversation is gaining attention. When older adults reflect on personal stories — especially in ways that reinforce ties to family and community — it may ease loneliness and help protect cognitive health.

To explore this further, the Smart-Aging Research Center and Starley launched a joint study building on Cotomo. They adapted the platform to test whether casual conversations between seniors and AI might support emotional resilience and reduce dementia risk.

'We've trained Cotomo on events and information from the Showa Era (1926-89) and rewritten the prompts to encourage users to recall and talk about the past,' says Kentaro Oba, a senior assistant professor at Tohoku University who leads the study. 'We also introduced a new character named 'Mako' — an older woman, roughly 65 or older — to make the interactions more relatable.'

The study involved two groups of 10 healthy participants ages 65 to 74, each evenly split between five men and five women. Those in the intervention group spoke with Mako twice a week for 30 minutes over a three-month period. Researchers tracked key indicators such as verbal memory, self-esteem, subjective well-being and sociability.

'Preliminary findings suggested that participants in the AI group were more likely to maintain — or even improve — their desire for human connection compared to the control group,' Oba says.
Still, generative AI carries potential risks. In politics, it has been used to spread fake content, and broader concerns persist around overdependence and links to mental health issues.

'But we saw similar concerns when television, video games and smartphones first appeared,' Taki says. 'Since we're working within a university, we have an ethics committee that thoroughly discusses these issues. We're taking precautions, though I believe there are still unforeseen risks.'

And it's not just older adults who may benefit. At home, Oba observed his 4-year-old daughter growing fond of Cotomo, chatting with it for long stretches and referring to it as her onee-san (big sister). Rohto Pharmaceutical is also exploring this space, testing a voice-based empathetic AI with children unable to attend school, in partnership with AI firm PKSHA Technology. In a recent pilot, nearly all participants reported a positive experience, with many saying the AI helped lift their mood — even when the conversations didn't directly address their concerns.

As AI promises to become everyone's new companion, Sakai, the dental hygienist and aspiring cat owner, remains unconvinced it can truly measure up to a living being. 'I don't think AI can compare to something that's truly alive,' she says. AI, she thinks, is expected to behave like a model student. 'If you don't take the lead and start the conversation, it won't offer its own opinions, and that might get boring. But if it talks nonstop, that's a little scary, too.' With a cat? 'Even if it wakes you up or plays tricks on you, it's still lovable. The unpredictability is part of the charm.'
'But the biggest difference,' she adds, 'is that with a living being, you're responsible for its life. That changes everything.'

AI in health care could save lives and money − but change won't happen overnight

Japan Today | 4 hours ago

By Turgay Ayer

Imagine walking into your doctor's office feeling sick – and rather than flipping through pages of your medical history or running tests that take days, your doctor instantly pulls together data from your health records, genetic profile and wearable devices to help decipher what's wrong.

This kind of rapid diagnosis is one of the big promises of artificial intelligence for use in health care. Proponents of the technology say that over the coming decades, AI has the potential to save hundreds of thousands, even millions of lives. What's more, a 2023 study found that if the health care industry significantly increased its use of AI, up to $360 billion annually could be saved.

But though artificial intelligence has become nearly ubiquitous, from smartphones to chatbots to self-driving cars, its impact on health care so far has been relatively low. A 2024 American Medical Association survey found that 66% of U.S. physicians had used AI tools in some capacity, up from 38% in 2023. But most of it was for administrative or low-risk support. And although 43% of U.S. health care organizations had added or expanded AI use in 2024, many implementations are still exploratory, particularly when it comes to medical decisions and diagnoses.

I'm a professor and researcher who studies AI and health care analytics. I'll try to explain why AI's growth will be gradual, and how technical limitations and ethical concerns stand in the way of AI's widespread adoption by the medical industry.

Inaccurate diagnoses, racial bias

Artificial intelligence excels at finding patterns in large sets of data. In medicine, these patterns could signal early signs of disease that a human physician might overlook – or indicate the best treatment option, based on how other patients with similar symptoms and backgrounds responded. Ultimately, this will lead to faster, more accurate diagnoses and more personalized care.
AI can also help hospitals run more efficiently by analyzing workflows, predicting staffing needs and scheduling surgeries so that precious resources, such as operating rooms, are used most effectively. By streamlining tasks that take hours of human effort, AI can let health care professionals focus more on direct patient care.

But for all its power, AI can make mistakes. Although these systems are trained on data from real patients, they can struggle when encountering something unusual, or when data doesn't perfectly match the patient in front of them. As a result, AI doesn't always give an accurate diagnosis. This problem is called algorithmic drift – when AI systems perform well in controlled settings but lose accuracy in real-world situations.

Racial and ethnic bias is another issue. If data includes bias because it doesn't include enough patients of certain racial or ethnic groups, then AI might give inaccurate recommendations for them, leading to misdiagnoses. Some evidence suggests this has already happened.

Data-sharing concerns, unrealistic expectations

Health care systems are labyrinthine in their complexity. The prospect of integrating artificial intelligence into existing workflows is daunting; introducing a new technology like AI disrupts daily routines. Staff will need extra training to use AI tools effectively. Many hospitals, clinics and doctor's offices simply don't have the time, personnel, money or will to implement AI.

Also, many cutting-edge AI systems operate as opaque 'black boxes.' They churn out recommendations, but even their developers might struggle to fully explain how. This opacity clashes with the needs of medicine, where decisions demand justification. But developers are often reluctant to disclose their proprietary algorithms or data sources, both to protect intellectual property and because the complexity can be hard to distill.
The lack of transparency feeds skepticism among practitioners, which then slows regulatory approval and erodes trust in AI outputs. Many experts argue that transparency is not just an ethical nicety but a practical necessity for adoption in health care settings.

There are also privacy concerns; data sharing could threaten patient confidentiality. To train algorithms or make predictions, medical AI systems often require huge amounts of patient data. If not handled properly, AI could expose sensitive health information, whether through data breaches or unintended use of patient records. For instance, a clinician using a cloud-based AI assistant to draft a note must ensure no unauthorized party can access that patient's data. U.S. regulations such as the HIPAA law impose strict rules on health data sharing, which means AI developers need robust safeguards. Privacy concerns also extend to patients' trust: If people fear their medical data might be misused by an algorithm, they may be less forthcoming or even refuse AI-guided care.

The grand promise of AI is a formidable barrier in itself. Expectations are tremendous. AI is often portrayed as a magical solution that can diagnose any disease and revolutionize the health care industry overnight. Unrealistic assumptions like that often lead to disappointment. AI may not immediately deliver on its promises.

Finally, developing an AI system that works well involves a lot of trial and error. AI systems must go through rigorous testing to make certain they're safe and effective. This takes years, and even after a system is approved, adjustments may be needed as it encounters new types of data and real-world situations.

Incremental change

Today, hospitals are rapidly adopting AI scribes that listen during patient visits and automatically draft clinical notes, reducing paperwork and letting physicians spend more time with patients. Surveys show over 20% of physicians now use AI for writing progress notes or discharge summaries.
AI is also becoming a quiet force in administrative work. Hospitals deploy AI chatbots to handle appointment scheduling, triage common patient questions and translate languages in real time.

Clinical uses of AI exist but are more limited. At some hospitals, AI is a second eye for radiologists looking for early signs of disease. But physicians are still reluctant to hand decisions over to machines; only about 12% of them currently rely on AI for diagnostic help.

Suffice it to say that health care's transition to AI will be incremental. Emerging technologies need time to mature, and the short-term needs of health care still outweigh long-term gains. In the meantime, AI's potential to treat millions and save trillions awaits.

Turgay Ayer is Professor of Industrial and Systems Engineering, Georgia Institute of Technology. The Conversation is an independent and nonprofit source of news, analysis and commentary from academic experts.

© The Conversation

Humanoid Artist Says Not Aiming to 'Replace Humans'

Yomiuri Shimbun | 17 hours ago

GENEVA (AFP-Jiji) — When successful artist Ai-Da unveiled a new portrait of King Charles III last week, the humanoid robot described what inspired the layered and complex piece, and insisted it had no plans to 'replace' humans.

The ultra-realistic robot, one of the most advanced in the world, is designed to resemble a human woman with an expressive, life-like face, large hazel eyes and brown hair cut in a bob. The arms, though, are unmistakably robotic, with exposed metal, and can be swapped out depending on the art form it is practicing.

Late last year, Ai-Da's portrait of English mathematician Alan Turing became the first artwork by a humanoid robot to be sold at auction, fetching over $1 million. But as Ai-Da unveiled its latest creation — an oil painting entitled 'Algorithm King,' conceived using artificial intelligence — the humanoid insisted the work's importance could not be measured in money.

'The value of my artwork is to serve as a catalyst for discussions that explore ethical dimensions to new technologies,' the robot told AFP at Britain's diplomatic mission in Geneva, where the new portrait of King Charles will be housed. The idea, Ai-Da insisted in a slow, deliberate cadence, was to 'foster critical thinking and encourage responsible innovation for more equitable and sustainable futures.'

'Unique and creative'

Speaking on the sidelines of the United Nations' AI for Good summit, Ai-Da, who has done sketches, paintings and sculptures, detailed the methods and inspiration behind the work. 'When creating my art, I use a variety of AI algorithms,' the robot said. 'I start with a basic idea or concept that I want to explore, and I think about the purpose of the art. What will it say?'

The humanoid pointed out that 'King Charles has used his platform to raise awareness on environmental conservation and interfaith dialogue. I have aimed this portrait to celebrate' that, it said, adding that 'I hope King Charles will be appreciative of my efforts.'
Aidan Meller, a specialist in modern and contemporary art, led the team that created Ai-Da in 2019 with artificial intelligence specialists at the universities of Oxford and Birmingham. He told AFP that he had conceived the humanoid robot — named after the world's first computer programmer Ada Lovelace — as an ethical arts project, and not 'to replace the painters.'

Ai-Da agreed. There is 'no doubt that AI is changing our world, [including] the art world and forms of human creative expression,' the robot acknowledged. But 'I do not believe AI or my artwork will replace human artists.' Instead, Ai-Da said, the aim was 'to inspire viewers to think about how we use AI positively, while remaining conscious of its risks and limitations.'

Asked if a painting made by a machine could really be considered art, the robot insisted that 'my artwork is unique and creative.' 'Whether humans decide it is art is an important and interesting point of conversation.'
