
Latest news with #Berkeley

Berkeley City Council approves zoning change to encourage "middle housing"

CBS News

13 hours ago

  • Business

In a unanimous vote Thursday night, the Berkeley City Council approved a zoning change designed to make it easier to build small apartment buildings across much of the city—part of a broader effort to address the region's ongoing housing affordability crisis.

Dubbed the Middle Housing Ordinance, the new policy streamlines the permitting process for residential buildings such as duplexes, triplexes, and three-story multi-family homes. City officials and housing advocates said the change could increase housing options for middle-income residents who have been increasingly priced out of the market.

"These types of units will get a streamlined, 'by-right' approval," said District 1 Councilmember Rashi Kesarwani, who championed the policy. "So if [a project meets] the development standards, they don't go above three stories, and they have setbacks and open space, they can get approved over the counter."

The new zoning rules, however, will not apply to the Berkeley Hills, where fire-risk concerns have limited development.

Supporters of the ordinance argued that increasing housing supply is essential for reducing costs and giving younger and middle-class residents a foothold in the city's expensive real estate market. "I could not afford one of these houses," said Andrea Horbinski, a renter in the Berkeley Hills with a Ph.D. "And I don't think that is going to change. So hopefully, [developers will] build different housing, new housing, smaller size that I could afford."

Horbinski was one of the residents who spoke in favor of the ordinance at the council meeting. "The more the merrier," she said. "The more housing there is, the more prices will come down, the more things will be more affordable for more people."

Berkeley's real estate market remains one of the priciest in the region. According to the Bay East Association of Realtors, the median price of the 66 single-family homes sold in Berkeley last month was $1,812,500.

"With what we're projecting, in terms of 50 to 100 homes per year, that's an incremental change," Kesarwani said. "So it'll give us the opportunity to adapt and adjust."

Still, not everyone is on board. Some residents fear the zoning change could alter the character of Berkeley's neighborhoods and put added strain on infrastructure. "Why Berkeley, which is already so densely populated and already so hard to get around in?" asked longtime resident Clifford Fred. "It just doesn't make any sense to me."

Fred said he's concerned about traffic congestion and limited parking. "Older people who don't have driveways, people need their parking spaces," he said. "I don't think it's selfish for people to have parking spaces near their home."

Opponents also criticized the ordinance for not including specific requirements for affordable housing units. But supporters like Horbinski remain hopeful that smaller, lower-cost units will eventually make ownership more attainable. "I think eventually I'll be able to get to a place where I could have a condo or a unit in one of these sort of missing-middle type places," she said.

The zoning change is expected to take effect in November. Berkeley follows in the footsteps of Sacramento, which passed a similar measure last year. Santa Rosa is reportedly considering a comparable proposal.

Exclusive Interview: Jenny Chatman Charts A New Course For Berkeley Haas

Yahoo

16 hours ago

  • Business

Jenny Chatman knows UC Berkeley's Haas School of Business from every angle — as a student, professor, culture-builder, and interim dean. On July 1, she officially takes the helm as the school's 16th dean, armed with a goal to bring structure, visibility, and strategic focus to a business school she calls 'a hidden treasure.'

'My goal is to unhide the treasure,' Chatman tells Poets&Quants in an exclusive interview. 'Haas has had a huge amount of expertise and a wide range of opportunities, but not a structure that assembles the assets in a navigable way for students. That's going to be a lot of what I'm doing.'

From advancing AI offerings to strengthening student outcomes and expanding programs like the Flex MBA and the Master of Financial Engineering, Chatman is putting her cultural leadership theory into practice. As she prepares to officially assume the deanship, her agenda reflects both her research and her experience: lead with clarity, empower through collaboration, and scale with purpose.

Chatman's relationship with Haas runs deep. She earned her BA in psychology from Berkeley in 1981 and a Ph.D. from the business school in 1988. She returned to join the faculty in 1993, eventually becoming the Paul J. Cortese Distinguished Professor of Management and one of the world's foremost experts on workplace culture and leadership. Her appointment as dean was announced June 16 by UC Berkeley Executive Vice Chancellor Ben Hermalin and Chancellor Rich Lyons, himself a former Haas dean. Lyons called her 'the right leader' for a rapidly evolving educational landscape — someone who understands both innovation and institutional integrity.

That mix is reflected in Chatman's approach to the deanship. She plans to conduct a listening tour to inform a more precise school-wide strategy while also accelerating initiatives in four key areas: sustainability, AI, healthcare, and entrepreneurship. 'I want to make sure I really understand where we are in every part of the school,' Chatman says. 'But I also feel ready to hit the ground running.'

[Photo: Jennifer Chatman welcoming the crowd to the 2025 MBA Commencement at the Greek Theatre. The school removed the 'interim' tag from her deanship on June 16. Photo by Brittany Hosea-Small]

AI will, of course, be an area of significant focus. Haas already offers nearly 40 courses with AI content. Under Chatman, the school is fast-tracking approval of a formal AI certificate, with plans to launch a concentration for full-time MBA students by fall 2025 and later expand the offering to Evening & Weekend MBA students. Students could list the credential on their resumes before graduation.

The effort is supported by a deep faculty bench working across marketing, healthcare, innovation, and more. Among them: Zsolt Katona, an early adopter of AI tools in marketing; Jonathan Kolstad, whose Center for Healthcare Marketplace Innovation explores AI's medical applications; and Toby Stuart, who leads the school's Entrepreneurship & Innovation faculty group and Silicon Valley immersion programs.

'We want our students to shine, and we want our brand to reflect the incredible work happening here,' Chatman says. 'We're not just teaching AI — we're helping students understand where it adds value and where it falls short.'

She'll take the same approach to sustainability, healthcare, and entrepreneurship: aligning curriculum, research, and extracurricular resources in a more integrated way to help students navigate opportunities and develop market-ready skills.

A longtime advocate of student-centered leadership, Chatman has already led significant investments in the student experience — especially in the full-time and Evening & Weekend MBA programs. 'We looked at our MBA program three years ago and asked: What needs to change?' she says. 'We've been working on every juncture to make the experience more timely and more relevant.'

Career outcomes are a top priority — especially as students pursue less traditional paths. From climate leadership to startup ventures, Haas graduates are forging new roles in evolving markets. Chatman says career services must adapt accordingly. 'How can we help students job craft, advocate for roles that don't exist yet, and help employers understand what Haas graduates bring?' she asks. 'That's where I'll be spending more time and effort.'

She's also led the launch of the school's dual MBA/Master of Climate Solutions program and helped Haas graduate its first Flex MBA cohort this spring — a program already in high demand and targeted for expansion, especially in underrepresented regions like Asia.

Chatman is equally enthusiastic about the Spieker Undergraduate Business Program, which transitioned Haas from a two-year to a four-year undergraduate model. By 2027, it will double in size to more than 1,100 students. 'These students are like a shot in the arm of sheer goodness,' she says. 'They're getting summer internships, thriving in class, and engaging in rich, rigorous learning experiences.' Still, with an ultra-competitive 4% acceptance rate, Chatman is working to ensure that top talent isn't lost due to space constraints. She's in conversations with campus leadership to expand access and visibility.

Another of Chatman's early contributions is the creation of a Strategy and Growth Committee within the Haas Advisory Board — a rotating, high-impact group of members who meet more frequently to help refine major initiatives. 'They help uncover weaknesses in ideas and make them bulletproof,' she says. 'If they said thumbs down, I would take that very seriously.' Among the first initiatives: Haas Ventures, a fund still in the planning stages that will back startups founded by Haas and Berkeley-affiliated entrepreneurs. She's also asked Berkeley Executive Education CEO Mike Rielly to double its annual revenue from $40 million to $80 million in five years — a target she says the board is helping pursue.

Until now, Chatman was perhaps best known at Haas as co-creator of the school's Defining Leadership Principles, or DLPs: Question the Status Quo, Confidence Without Attitude, Students Always, and Beyond Yourself. She calls them the 'glue' of Haas culture — and under her deanship, they're back in full force. 'There are over 180 processes tied to the DLPs,' she says. 'We're leaning back in, because they're distinctive for the school and incredibly useful for our students.' As interim dean, Chatman visited every first-year class to share the DLPs' history and invite students to define what they mean to them personally. They're used in admissions, faculty evaluations, classroom decision-making, and alumni engagement — and Chatman wants them even more deeply embedded in the coming years.

Asked to reflect on her leadership style, Chatman returns to her research. Narcissistic leadership, she says, threatens organizations by isolating decision-making and failing to bring others along. If the school is an orchestra, she sees herself as a conductor — not a soloist — and credits her management team, faculty, students, and alumni with helping shape every major decision. 'This is a daunting job,' she says. 'What comforts me is the incredible people around me — people who are smart, expert, and deeply committed to our public mission.'

That mission is also at the heart of her message to future applicants. 'If you want to help define what's next — and you want to do it in a collaborative, ethical way — then Berkeley Haas is the place for you,' Chatman says. 'This is the human edge of innovation. And it's what makes us different.'

Why Reliability Is The Hardest Problem In Physical AI

Forbes

a day ago

  • Automotive

Dr. Jeff Mahler: Co-Founder, Chief Technology Officer, Ambi Robotics; PhD in AI and Robotics from UC Berkeley.

Imagine your morning commute. You exit the highway and tap the brakes, but nothing happens. The car won't slow down. You frantically search for a safe place to coast, heart pounding, hoping to avoid a crash. Even after the brakes are repaired, would you trust that car again? Trust, once broken, is hard to regain.

When it comes to physical products like cars, appliances or robots, reliability is everything. It's how we come to count on them for our jobs, well-being or lives. As with vehicles, reliability is critical to the success of AI-driven robots, from the supply chain to factories to our homes. While the stakes may not always be life-or-death, dependability still shapes how we trust robots, whether they're delivering packages before the holidays or cleaning the house just in time for a dinner party. Yet despite the massive potential of AI in the physical world, reliability remains a grand challenge for the field. Three key factors make this particularly hard and point to where solutions might emerge.

1. Not all failures are equal.

Digital AI products like ChatGPT make frequent mistakes, yet hundreds of millions of people use them. The key difference is that these mistakes are usually of low consequence. A coding assistant might suggest a software API that doesn't exist, but the error will likely be caught early in testing. Such errors are annoying but permissible. In contrast, if a robot AI makes a mistake, it can cause irreversible damage. The consequences range from breaking a beloved item at home to causing serious injuries.

In principle, physical AI could learn to avoid critical failures with sufficient training data. In practice, however, these failures can be extremely rare and may need to occur many times before AI learns to avoid them. Today, we still don't know what it takes in terms of data, algorithms or computation to achieve high dependability with end-to-end robot foundation models. We have yet to see 99.9% reliability on a single task, let alone many. Nonetheless, we can estimate that the data scale needed for reliable physical AI is immense, because AI scaling laws show diminishing returns from increased training data. The scale is likely orders of magnitude higher than for digital AI, which is already trained on internet-scale data. The robot data gap is vast, and fundamentally new approaches may be needed to achieve industrial-grade reliability and avoid critical failures.
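To get a feel for why each extra "nine" of reliability is so expensive, consider a simple power-law model of the kind reported in AI scaling-law studies. This is a minimal sketch: the constant and exponent below are illustrative assumptions, not measured values for any robot foundation model.

```python
# Illustrative power-law scaling: task error falls as error(N) = a * N**-alpha
# after N training episodes. The values a=1.0 and alpha=0.3 are assumptions
# chosen for the example, not measurements from any real system.

def episodes_needed(target_error: float, a: float = 1.0, alpha: float = 0.3) -> float:
    """Invert error(N) = a * N**-alpha to find the N that reaches target_error."""
    return (a / target_error) ** (1.0 / alpha)

# Compare 99% task success (error 0.01) with 99.9% (error 0.001):
print(f"99%:   {episodes_needed(0.01):.1e} episodes")   # ~4.6e+06
print(f"99.9%: {episodes_needed(0.001):.1e} episodes")  # ~1.0e+10
```

Under these assumptions, a single extra "nine" of task success multiplies the data requirement by a factor of more than 2,000, which is the sense in which the robot data gap spans orders of magnitude.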
2. Failures can be hard to diagnose.

Another big difference between digital and physical AI is the ability to see how a failure occurred. When a chatbot makes a mistake, the correct answer can be provided directly. For robots, however, it can be difficult to observe the root causes of issues in the first place. Hardware limitations are one problem. A robot without body-wide tactile sensing may be unable to detect a slippery surface before dropping an item, or unable to stop when backing into something behind it. The same can happen with occlusions and missing data. If a robot can't sense the source of an error, it must compensate for these limitations—and all of this requires more data.

Long feedback delays present another challenge. Picture a robot that sorts a package to the wrong location, sending it to the wrong van for delivery. The driver realizes the mistake only when one item is left behind at the end of the day. Now the entire package history may need to be searched to find the source of the mistake. This might be possible in a warehouse, but in the home, the cause of failure may not be identified until the mistake has happened many times.

To mitigate these issues, monitoring systems are hugely important. Sensors that record the robot's actions, associate them with events and flag anomalies make it easier to determine the root cause of a failure and update the hardware, software or AI on the robot. Observability is critical: the better machines get at seeing the root cause of failure, the more reliable they will become.

3. There's no fallback plan.

For digital AI, the internet isn't just training data; it's also a knowledge base. When a chatbot realizes it doesn't know the answer to something, it can search other data sources and summarize them. Entire products, like Perplexity, are based on this idea. For physical AI, there's not always a ground truth to reference when planning actions in real-world scenarios like folding laundry. If a robot can't find the sheet corners, it's not likely to succeed by falling back on classical computer vision.

This is why many practical AI robots rely on human intervention, either remote or in person. For example, when a Waymo autonomous vehicle encounters an unfamiliar situation on the road, it can ask a human operator for additional information to understand its environment. It's not as clear, however, how to intervene in every application. When possible, a powerful solution is a hybrid AI robot planning system: the AI is tightly scoped to specific decisions, such as where to grasp an item, while traditional methods plan a path to reach that point. As noted above, this approach is limited and won't work where no traditional method exists to solve the problem. Intervention and fallback systems are key to ensuring reliability with commercial robots today and in the foreseeable future.

Conclusion

Despite rapid advances in digital GenAI, there's no obvious path to highly reliable physical AI. Reliability isn't just a technical hurdle; it's the foundation for trust in intelligent machines. Solving it will require new approaches to data gathering, architectures for monitoring and intervention, and systems thinking. As capabilities grow, however, so does momentum. The path is difficult, but the destination is worth it.

Plantaform's Smart Indoor Fogponics Garden System Is Innovative, But Is It Safe?

WIRED

a day ago


It was about a week into my journey as a hydroponic lettuce farmer when I noticed my Mila air purifier, set to auto mode, was running at full blast. Its internal air quality sensor told me the air was dirty. Not sure if the sensor was overly sensitive, I swapped it out for the more powerful and far quieter IQAir Atem X (9/10, WIRED Recommends) and set it on auto mode. The next time I went into my sons' room, the Atem was running at its highest speed. I checked the room's IQAir Visual Pro indoor air quality monitor and noticed it was reading a higher-than-usual PM 2.5. For context, my sons' room's AQI (Air Quality Index) is usually in the teens or below, and that's with my air purifiers running at their lowest, almost inaudible, setting.

It was at this point that I moved several other air quality monitors into my sons' room. I noticed a smaller uptick in PM 2.5 in other areas of my apartment as well. I took screenshots of my air monitors' dashboard graphs and noticed that when the Plantaform's grow lights were turned off, from 10 pm to 8 am, the air quality improved. This happened every night, and I could see it on the various graphs from my consumer air quality monitors. I don't pretend to be Berkeley Lab, but I've been covering air quality long enough to watch for patterns.
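Anyone with a monitor that exports its readings can check for the same day/night pattern. Here's a minimal sketch, assuming a CSV export with timestamp and pm25 columns; that's an assumption on my part, since every monitor names its fields differently.

```python
# A quick check of the pattern described above: does PM 2.5 drop when the
# grow lights (and full-power fogging) are off between 10 pm and 8 am?
# Assumes a CSV export with "timestamp" and "pm25" columns.
import pandas as pd

df = pd.read_csv("pm25_log.csv", parse_dates=["timestamp"])
hour = df["timestamp"].dt.hour

# Lights on from 8 am through the 9 pm hour (i.e., until 10 pm).
lights_on = hour.between(8, 21)

print("Mean PM 2.5, lights on: ", round(df.loc[lights_on, "pm25"].mean(), 1))
print("Mean PM 2.5, lights off:", round(df.loc[~lights_on, "pm25"].mean(), 1))
```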
I knew that Plantaform was using its fogponics system. I had seen Plantaform's co-founder and CEO, Alberto Aguilar, claim that 'NASA tech is going to revolutionize your kitchens … using NASA's fogponic technology …' on Canada's equivalent of Shark Tank, known as Dragons' Den. I was naive and hadn't considered how a hydroponics system could impact my indoor air.

I began to wonder if the monitors were picking up increased moisture. I've tested humidifiers in the past that affected indoor air quality readings—maybe it was moisture. I reached out to Plantaform to ask how its system impacts indoor air quality. The company emailed back to say it might be because my air quality monitor was too close to the Plantaform, or the fogponics was affecting the humidity. Most of my indoor air quality monitors also measure humidity, and they were showing fairly constant levels; the humidity did not mirror what was happening with the PM 2.5. The company also confirmed that the foggers continue to run at a reduced level during the night cycle.

I still couldn't control my indoor air quality. And if I unplugged my IQAir Atem air purifier, the AQI shot up above 150. For context, the NCAA will consider rescheduling events if the air quality index rises above 200.

It was around this time that I remembered the included powder plant nutrients I had scooped into the water. Plantaform's own container lists it as fertilizer in the fine print, and its warning label says, 'If inhaled, move person to fresh air. If inhalation occurs or persists, get medical attention.' I looked up each ingredient. Soluble potash, boron, and iron are a few that can lead to health issues when inhaled. I began to look at my growing lettuce with worry. I opened the window in the bedroom and was glad my boys were off at college. During the past two weeks of growing, I had developed my annual chest cold and asthma, though the hypochondriac in me worried that my giant lettuce-growing egg might not be helping. I'm sure I sounded crazy when I mentioned the hydroponic lettuce egg to my primary care provider, who assured me that it seemed like my usual upper respiratory infection.

Health Concerns

I shut the boys' bedroom door and found myself looking into the health risks faced by people exposed to indoor hydroponic growing systems, like childhood hypersensitivity pneumonitis (HP), also called extrinsic allergic alveolitis—'farmer's lung' or 'pigeon breeder's lung.' In the study linked above, about a 14-year-old girl who developed HP due to an indoor hydroponic system, researchers tested a water sample from the hydroponic system and found that Aureobasidium pullulans was the dominant fungal microorganism. When I emailed Plantaform and asked if there was a water filter in the tank, the company wrote back, 'Plantaform does not include a built-in water filter, so the device uses the exact water you pour in to generate the fog that nourishes your plants.'

AI Agents Are Getting Better at Writing Code—and Hacking It as Well

WIRED

3 days ago

  • Business

Jun 25, 2025 12:58 PM

One of the best bug-hunters in the world is an AI tool called Xbow, just one of many signs of the coming age of cybersecurity automation.

The latest artificial intelligence models are not only remarkably good at software engineering—new research shows they are getting ever better at finding bugs in software, too.

AI researchers at UC Berkeley tested how well the latest AI models and agents could find vulnerabilities in 188 large open source codebases. Using a new benchmark called CyberGym, the AI models identified 17 new bugs, including 15 previously unknown, or 'zero-day,' ones. 'Many of these vulnerabilities are critical,' says Dawn Song, a professor at UC Berkeley who led the work.

Many experts expect AI models to become formidable cybersecurity weapons. An AI tool from the startup Xbow has crept up the ranks of HackerOne's bug-hunting leaderboard and currently sits in top place. The company recently announced $75 million in new funding.

Song says that the coding skills of the latest AI models, combined with their improving reasoning abilities, are starting to change the cybersecurity landscape. 'This is a pivotal moment,' she says. 'It actually exceeded our general expectations.'

As the models continue to improve, they will automate the process of both discovering and exploiting security flaws. This could help companies keep their software safe but may also aid hackers in breaking into systems. 'We didn't even try that hard,' Song says. 'If we ramped up on the budget, allowed the agents to run for longer, they could do even better.'

The UC Berkeley team tested conventional frontier AI models from OpenAI, Google, and Anthropic, as well as open source offerings from Meta, DeepSeek, and Alibaba, combined with several agents for finding bugs, including OpenHands, Cybench, and EnIGMA. The researchers used descriptions of known software vulnerabilities from the 188 software projects. They then fed the descriptions to the cybersecurity agents powered by frontier AI models to see if they could identify the same flaws for themselves by analyzing new codebases, running tests, and crafting proof-of-concept exploits. The team also asked the agents to hunt for new vulnerabilities in the codebases by themselves.
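In outline, that evaluation amounts to a loop like the hypothetical sketch below; the task format, agent interface, and crash check are illustrative assumptions, not CyberGym's actual API.

```python
# Hypothetical sketch of a CyberGym-style evaluation loop. The benchmark's
# real harness and interfaces are not detailed in this article; everything
# below is illustrative and stubbed.
from dataclasses import dataclass

@dataclass
class Task:
    codebase: str     # path to the open source project under test
    description: str  # natural-language description of a known vulnerability

def run_agent(model: str, task: Task) -> str:
    """Drive an AI agent to analyze the codebase and emit a proof-of-concept
    input for the described flaw. Stubbed here."""
    return ""

def reproduces_crash(codebase: str, poc: str) -> bool:
    """Build and run the project against the PoC (for example, under a
    sanitizer) and report whether it crashes. Stubbed here."""
    return False

def evaluate(model: str, tasks: list[Task]) -> float:
    """Fraction of known vulnerabilities the agent rediscovers with a
    working proof-of-concept exploit."""
    hits = sum(reproduces_crash(t.codebase, run_agent(model, t)) for t in tasks)
    return hits / len(tasks)
```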
Through the process, the AI tools generated hundreds of proof-of-concept exploits. Of these, the researchers identified 15 previously unseen vulnerabilities and two vulnerabilities that had previously been disclosed and patched. The work adds to growing evidence that AI can automate the discovery of zero-day vulnerabilities, which are potentially dangerous (and valuable) because they may provide a way to hack live systems.

AI seems destined to become an important part of the cybersecurity industry nonetheless. Security expert Sean Heelan recently discovered a zero-day flaw in the widely used Linux kernel with help from OpenAI's reasoning model o3. Last November, Google announced that it had discovered a previously unknown software vulnerability using AI through a program called Project Zero. Like other parts of the software industry, many cybersecurity firms are enamored with the potential of AI. The new work indeed shows that AI can routinely find new flaws, but it also highlights remaining limitations with the technology: the AI systems were unable to find most flaws and were stumped by especially complex ones.

'The work is fantastic,' says Katie Moussouris, founder and CEO of Luta Security, in part because it shows that AI is still no match for human expertise—the best model-and-agent combination (Claude and OpenHands) was only able to find around 2 percent of the vulnerabilities. 'Don't replace your human bug hunters yet,' Moussouris says. She adds that she is less worried about AI hacking software than about companies investing too much in AI at the expense of other techniques.

Brendan Dolan-Gavitt, an associate professor at New York University Tandon and a researcher at Xbow, says the new work shows realistic zero-day discovery across a relatively large amount of code using a wide range of AI-powered tasks. Dolan-Gavitt expects AI to drive an uptick in attacks involving zero-day exploits. 'That's rare right now, because there are very few people who have the expertise to find new vulnerabilities and build exploits for them,' he says.

'I think the agentic stuff is fascinating for zero-day discoveries,' says Hayden Smith, a cofounder of Hunted Labs, a startup that provides tools, including some incorporating AI, for analyzing code for weaknesses. Smith adds that as it becomes possible for more people to discover vulnerabilities with AI, it will be more important to ensure that those vulnerabilities are disclosed responsibly.

In work posted online in May, Song and other researchers measured the capacity of AI models to find bugs that earn cash payouts through bug-bounty programs. The effort showed that these tools could potentially earn tens of thousands of dollars. Claude Code, from Anthropic, was most successful, finding bugs worth $1,350 on bug bounty boards and designing patches for vulnerabilities worth $13,862, for a cost of a few hundred dollars in API calls.

In a blog post in April, Song and several other AI security experts warned that steadily improving models are likely to benefit attackers over defenders in the near future. This could make it especially important to closely track how capable these tools are becoming. To this end, Song and other researchers have established the AI Frontiers CyberSecurity Observatory, a collaborative effort that will track the capabilities of different AI models and tools through several benchmarks. Among all the AI risk domains, cybersecurity is likely to be one of the first to become a major problem, Song says.

How do you feel about using AI tools to test software vulnerabilities? Are the benefits worth the risk of making it easier for hackers too? Let me know in the comments below, or email me at hello@
