Table of Experts — Artificial intelligence in enterprise: Houston Tech leaders on what's real, what's next, and what's at risk

On June 4, the Houston Business Journal gathered a panel of Houston-area CIOs and digital strategy experts for a roundtable discussion on artificial intelligence. This group of industry leaders explored how AI is already being applied across business verticals, the risks and limitations they're navigating, and what leaders should be doing now to prepare for the exponential pace of change.
Heather Orrico, Vice President, Comcast Business: What's one way your organization is using AI that sets it apart, but that also keeps you cautious as you scale?
Atif Riaz, CIO/CTO, Murphy Oil Corporation: Besides general productivity use cases, one novel use case for my team was utilizing AI to pick high performing teams for different projects. We already had the personality assessment data, and AI helped us figure out who might be best suited for certain roles on a project. Of course you always have to watch out for hallucinations. AI has huge potential for building better teams, but you have to set boundaries.
Randy Volkin, CIO, Perry Homes: Not every step forward has to be an innovation. There are a lot of commoditized AI tools and practices we're adopting that can make us better. There's a sense of shared exploration across industries right now, especially at the corporate level. If you can tap into that, you can make a real difference without taking on a high degree of risk.
Ashok Kurian, AVP of Data and AI Innovation, Texas Children's Hospital: HIPAA has very rigid rules about data, but we're further along than many other healthcare institutions. We have many examples of AI uses that improve care, reduce administrative overhead and ultimately allow our world-renowned clinicians to spend more time with patients and patient families. That's why people come to Texas Children's Hospital: we explore every avenue to ensure our patients are treated with the best possible quality of care.
Jeff Green, CIO, Strike, LLC: We use AI to speed up risk assessments on bids. That process used to be a six-week, five-figure engagement. Now we feed the info into our AI model and get what we need in a fraction of the time. That frees up time and employee resources to go fix problems rather than just diagnose them, and it has reduced our legal cost. Everyone's happy about those benefits.
Keith Tomshe, Manager of Digital Video, KHOU: We're experimenting with AI for news production, automatically generating versions of a story for broadcast, digital, and social. It's helped in places like SEO tagging and repackaging content. But adoption is slow unless it's built into existing tools. If you tell people 'go try this on your own,' it rarely happens.
Viet Dang, Director of Data Services, Houston-Galveston Area Council: We're still in the early stages of our AI journey. We created a safe space for staff to explore the potential and possibilities of three large language models—Claude, ChatGPT, and Gemini. As a public-serving organization, our adoption has been thoughtful and cautious. We're especially mindful of protecting sensitive information and ensuring that nothing is shared inappropriately. With most of our funding coming from the federal government, we see AI as an opportunity to improve efficiency, and we're exploring how these tools can complement and streamline the way we serve the region.
Traci Pelter, President & Publisher, Houston Business Journal: What's a tech decision you've made recently that felt like a real turning point?
Ashok Kurian: About five years ago we decided to move more of our workload onto the cloud. It's given us the scale and flexibility we need for AI. The GPU access alone is game-changing for our algorithms.
Jeff Green: We're working on making our unstructured data more usable. Right now, we have documents scattered across our environments. This DLP project will allow us to get Microsoft Copilot deployed for the company, but we've got to clean up the document sprawl first.
Randy Volkin: We view it as important to continuously improve processes in addition to adopting technology to position the company for continued growth. We recently undertook a major project to transform our ERP platform and felt like we were able to achieve both. It was a very large undertaking but we couldn't be happier with the result.
Keith Tomshe: What's interesting is how some people at the station have gone full steam ahead with AI—like building their own apps to adapt content for different platforms—while others barely touch it. If it's integrated into a platform, people use it. Otherwise, it doesn't stick.
Justin Galbraith, Senior Manager Enterprise Sales, Comcast Business: I've got a customer using AI in video monitoring to prevent theft — like catching suspicious activity before it happens. Anyone else doing things like that? Permit processes, inspections, medical procedures?
Ashok Kurian: We're testing computer vision and image recognition in the field of pediatric radiology, to ensure our radiologists are focused on the most complex situations, and create efficiencies with the common ones.
Atif Riaz: Oil and gas uses cameras for remote monitoring: detecting leaks, fires, or missing safety gear. And now, with software overlays, we don't need specialized cameras. Any video feed can become intelligent through use of AI software.
Heather Orrico, Vice President, Comcast Business: How do you align with leadership on innovation and data security when there are competing priorities?
Atif Riaz: I have learned to reframe digital and security initiatives in terms of exploration, production, or operational success. Once you link IT to business goals, alignment and funding follow.
Randy Volkin: Reputation matters a lot at Perry Homes. But we also know there's no such thing as zero risk. Even the federal government gets breached. So we talk about making good, measured decisions rather than chasing everything shiny. You need to take calculated risks to progress any business but we won't make compromises with customer data or other matters that impact our reputation.
Viet Dang: Our CEO is very much in tune with emerging technology and its adoption. He recognized the value of AI early on and established—and now chairs—our internal AI Governance Committee. We're focused on organization-wide implementation, encouraging our team to explore the best uses of AI, develop effective prompts, and understand how these tools can support their work. At the same time, we're committed to educating staff on responsible use and maximizing the benefits AI can offer.
Jeff Baker, Chief Technology Officer, Socium Solutions: We always bring it back to opportunity cost. What's the ROI of this versus doing nothing? At a certain point, the next thing isn't worth the price.
Traci Pelter, President & Publisher, Houston Business Journal: I'd love to hear a challenge or win from this year. Something that went well — or didn't.
Atif Riaz: A few years ago, we made a very intentional decision to empower the business more—to encourage citizen development and give non-IT teams the tools and freedom to innovate. And it worked. More people started building solutions, bringing up ideas, and moving fast. This has been a big win for us.
The challenge with democratizing technology, though, is that not everyone wants to follow the governance process. It's a double-edged sword. You want a technically literate business, but you also need agreement that they'll stay within the guardrails.
Keith Tomshe: Copilot is useful, but it lags behind some other tools. That's part of the expectation gap. People think AI is magic.
Casey Kiesewetter, Vice President, Houston Business Journal: What ripple effects are you seeing across your organization?
Jeff Baker: We can respond to anomalies faster. Yesterday, we flagged a potential issue in real time. That kind of agility didn't exist a year ago.
Ashok Kurian: We are exploring the use of natural language processing technology, which allows our clinicians to spend less time behind the computer, and more time with the patient. This also boosts provider satisfaction, as they are spending less time having to document, and more time with their families.
Jeff Baker: Bias is a concern, but it's not unique to AI. It exists in human providers too. At least with AI, we can audit those biases and make better decisions.
Heather Orrico, Vice President, Comcast Business: Fast forward 20 years: What does the balance between humans and AI look like?
Keith Tomshe: I got a Tesla. The self-driving is amazing. I even talked to ChatGPT on the way here to prep for this conversation. It's part of my workflow now.
Ashok Kurian: Artificial intelligence has come a long way, but what's on the horizon is even more exciting. Innovations being made today will lead the world toward potential artificial general intelligence, where AI will be able to learn on its own, with reasoning capabilities. The use cases are endless, but we also need to ensure safety and protections with these new discoveries.
Atif Riaz: Some estimates put Artificial General Intelligence (AGI) just 12 months away. That's AI smarter than the smartest human alive. Whether you believe that or not, the urgency is real.
Heather Orrico, Vice President, Comcast Business: What does this mean for soft skills and people leadership in the future?
Randy Volkin: Education will be reshaped. AI can personalize learning, give every student the exact input they need in ways that teachers can't. In one of my kid's university classes they had to use AI and show their work, keeping track of each prompt they used to get to the result. Another kid at a different university banned AI entirely. It's such a wild contrast.
Jeff Baker: I teach a cybersecurity certificate course at UT. The best students aren't always the most technical. They're the ones who can communicate, think critically, and work without a prompt. I hope that skill set is still important in 20 years.
Jeff Green: I hire for critical thinking, not credentials. We once hired an art major who couldn't use Excel, but she became our best business analyst because she was great at problem solving and critical thinking. She knew how to ask the right questions and interpret those answers to produce business outcomes. That is going to set you apart in the future of the job market.
Atif Riaz: We see the younger generation losing social skills nowadays as they're glued to their devices. But if the argument is that robots will become more human-like, that means they'll want social interaction too. So relationships and social skills will still be paramount; that's how you 'get along' with AI.
Related Articles

Google CEO Sundar Pichai shrugs off Silicon Valley's raging AI talent wars

Business Insider

Google CEO Sundar Pichai brushed off concerns about the company's ability to attract and keep top AI talent during the tech giant's second-quarter earnings call, calling its retention metrics "healthy." On Wednesday, Pichai publicly addressed the latest wave of AI talent wars raging across Silicon Valley. These wars have been supercharged by Meta's announcement of a 'superintelligence' division and the poaching of some researchers with multimillion-dollar pay packages. The competition has become so intense that some analysts worry it could increase the growing costs of staying at the cutting edge of AI.

Bernstein analyst Mark Shmulik asked Pichai about recruiting top researchers in the context of "AI-related resource costs" at Google, and how the competition affects the company's ability to retain talent. In response, Pichai said that Google has been through moments like this before and touted that its core metrics remain "healthy." "We continue to look at both our retention metrics, as well as the new talent coming in, and both are healthy," Pichai said on the earnings call. "I do know individual cases can make headlines, but when we look at numbers deeply, I think we are doing well through this moment." Business Insider asked Google if it could share some of those metrics; the tech giant didn't immediately respond to a request for comment.

Several members of Meta's new superintelligence team used to work at Google. For example, Meta poached Pei Sun, a researcher who worked on improving Google's Gemini AI assistant and Waymo, its self-driving car unit. It's not just Meta poaching Googlers. Newer AI startups like OpenAI and Anthropic have also siphoned top talent from Google's DeepMind division, according to a SignalFire report. For example, the report found that a researcher is 11 times more likely to leave Google for Anthropic than the other way around.
During the earnings call, Pichai said that Google knew what it took to keep top AI researchers happy — and it wasn't all about the money. For example, Pichai said Google is investing more in access to compute, meaning the latest and greatest computer chips. Also, top researchers want to be "at the frontier driving progress, and so the mission, and how state-of-the-art the work is, matters. So that's super important to them," he said.

More teens say they're using AI for friendship. Here's why researchers are concerned

CBS News

No question is too small when Kayla Chege, a high school student in Kansas, is using artificial intelligence. The 15-year-old asks ChatGPT for guidance on back-to-school shopping, makeup colors, low-calorie choices at Smoothie King, plus ideas for her Sweet 16 and her younger sister's birthday party. The sophomore honors student makes a point not to have chatbots do her homework and tries to limit her interactions to mundane questions. But in interviews with The Associated Press and a new study, teenagers say they are increasingly interacting with AI as if it were a companion, capable of providing advice and friendship. "Everyone uses AI for everything now. It's really taking over," said Chege, who wonders how AI tools will affect her generation. "I think kids use AI to get out of thinking."

For the past couple of years, concerns about cheating at school have dominated the conversation around kids and AI. But artificial intelligence is playing a much larger role in many of their lives. AI, teens say, has become a go-to source for personal advice, emotional support, everyday decision-making and problem-solving. More than 70% of teens have used AI companions and half use them regularly, with 34% reporting daily usage or multiple times a week, according to a new study from Common Sense Media, a group that studies and advocates for using screens and digital media sensibly.

The study defines AI companions as platforms designed to serve as "digital friends," like Character.AI or Replika, which can be customized with specific traits or personalities and can offer emotional support, companionship and conversations that can feel human-like. But popular sites like ChatGPT and Claude, which mainly answer questions, are being used in the same way, the researchers say. In an interview with "CBS Evening News" on Wednesday, Common Sense founder and CEO Jim Steyer said what struck him about the study is that AI companions are "everywhere in teens' lives."
Common Sense's study also found that 11% of teens use AI companions to build up their courage and stand up for themselves, which Steyer said can be a good thing. However, he cautioned that problems arise when the technology replaces human relationships. "Younger kids really trust these AI companions to be like friends or parents or therapists," Steyer said. "They're talking about serious relationships, and these are robots. They're not human beings."

As the technology rapidly gets more sophisticated, teenagers and experts worry about AI's potential to redefine human relationships and exacerbate crises of loneliness and youth mental health. "AI is always available. It never gets bored with you. It's never judgmental," says Ganesh Nair, an 18-year-old in Arkansas. "When you're talking to AI, you are always right. You're always interesting. You are always emotionally justified." All that used to be appealing, but as Nair heads to college this fall, he wants to step back from using AI. Nair got spooked after a high school friend who relied on an "AI companion" for heart-to-heart conversations with his girlfriend later had the chatbot write the breakup text ending his two-year relationship. "That felt a little bit dystopian, that a computer generated the end to a real relationship," said Nair. "It's almost like we are allowing computers to replace our relationships with people."

In the Common Sense Media survey, 31% of teens said their conversations with AI companions were "as satisfying or more satisfying" than talking with real friends. Even though half of teens said they distrust AI's advice, 33% had discussed serious or important issues with AI instead of real people. Those findings are worrisome, says Michael Robb, the study's lead author and head researcher at Common Sense, and should send a warning to parents, teachers and policymakers.
The now-booming and largely unregulated AI industry is becoming as integrated with adolescence as smartphones and social media are. "It's eye-opening," said Robb. "When we set out to do this survey, we had no understanding of how many kids are actually using AI companions." The study polled more than 1,000 teens nationwide in April and May.

Adolescence is a critical time for developing identity, social skills and independence, Robb said, and AI companions should complement — not replace — real-world interactions. "If teens are developing social skills on AI platforms where they are constantly being validated, not being challenged, not learning to read social cues or understand somebody else's perspective, they are not going to be adequately prepared in the real world," he said. When asked whether the issue at play is with the AI technology itself or the way kids live in the modern world today, Steyer said he believes it's both. "It's a challenge with how kids live today because they spend so many hours in front of a screen, and when you substitute a machine or a robot for human interaction, you're fundamentally changing the nature of that relationship," Steyer told CBS News.

The nonprofit analyzed several popular AI companions in a "risk assessment," finding ineffective age restrictions and that the platforms can produce sexual material, give dangerous advice and offer harmful content. While Common Sense's CEO said he supports the growth and innovation of AI, the group doesn't recommend that minors use AI companions. "In terms of its impact on young people, and on families in general, [the study] is an extraordinary finding and one that I think makes us very concerned about kids under the age of 18 being exposed to these kinds of companions," Steyer said. Researchers and educators worry about the cognitive costs for youth who rely heavily on AI, especially in their creativity, critical thinking and social skills.
The potential dangers of children forming relationships with chatbots gained national attention last year when a 14-year-old Florida boy died by suicide after developing an emotional attachment to a Character.AI chatbot. "Parents really have no idea this is happening," said Eva Telzer, a psychology and neuroscience professor at the University of North Carolina at Chapel Hill. "All of us are struck by how quickly this blew up." Telzer is leading multiple studies on youth and AI, a new research area with limited data.

Telzer's research has found that children as young as 8 are using generative AI and also found that teens are using AI to explore their sexuality and for companionship. In focus groups, Telzer found that one of the top apps teens frequent is SpicyChat AI, a free role-playing app intended for adults. Many teens also say they use chatbots to write emails or messages to strike the right tone in sensitive situations. "One of the concerns that comes up is that they no longer have trust in themselves to make a decision," said Telzer. "They need feedback from AI before feeling like they can check off the box that an idea is OK or not."

Arkansas teen Bruce Perry, 17, says he relates to that and relies on AI tools to craft outlines and proofread essays for his English class. "If you tell me to plan out an essay, I would think of going to ChatGPT before getting out a pencil," Perry said. He uses AI daily and has asked chatbots for advice in social situations, to help him decide what to wear and to write emails to teachers, saying AI articulates his thoughts faster. Perry says he feels fortunate that AI companions were not around when he was younger. "I'm worried that kids could get lost in this," Perry said. "I could see a kid that grows up with AI not seeing a reason to go to the park or try to make a friend." Other teens agree, saying the issues with AI and its effect on children's mental health are different from those of social media.
"Social media complemented the need people have to be seen, to be known, to meet new people," Nair said. "I think AI complements another need that runs a lot deeper — our need for attachment and our need to feel emotions. It feeds off of that." "It's the new addiction," Nair added. "That's how I see it."

Trump administration plans to give AI developers a free hand

Boston Globe

The report signals that the Trump administration has embraced AI and the tech industry's arguments that it must be allowed to work with few guardrails for the United States to dominate a new era defined by the technology. It is a forceful repudiation of other governments, including the European Commission, that have approved regulations to govern the development of the technology.

But it also points to how the administration wants to shape the way AI tools present information. Conservatives have accused some tech companies of developing AI models with a baked-in liberal bias. Most AI models are already trained on copious amounts of data from across the web, which informs their responses, making any shift in focus difficult.

On Wednesday afternoon, Trump delivered his first major speech on AI, a technology that experts have said could upend communications, geopolitics and the economy in the coming years. The president also signed executive orders related to the technology. 'We believe we're in an AI race,' David Sacks, the White House AI and crypto czar, said on a call with reporters. 'And we want the United States to win that race.'

The changes outlined Wednesday would benefit tech giants locked in a fierce contest to produce generative AI products and persuade consumers to weave the tools into their daily lives. Since OpenAI's public release of ChatGPT in late 2022, tech companies have raced to produce their own versions of the technology, which can write humanlike texts and produce realistic images and videos. Google, Microsoft, Meta, OpenAI, and others are jockeying for access to computing power, typically from huge data centers filled with computers that can stress local communities' resources.
And the companies are facing increased competition from rivals such as Chinese startup DeepSeek, which sent shock waves around the world this year after it created a powerful AI model with far less money than many thought possible. The fight over resources in Silicon Valley has run alongside an equally charged debate in Washington over how to confront the societal transformations that AI could bring. Critics worry that if left unchecked, the technology could be a potent tool for scammers and extremists and lay waste to the economy as more jobs are automated. News outlets and artists have sued AI companies over claims that they illegally trained their technology using copyrighted works and articles.

Trump previously warned of China's potential to outpace American progress on the technology. He has said that the federal government must support AI companies with tax incentives, more foreign investment and less focus on safety regulations that could hamper progress.

President Biden took one major action on artificial intelligence: a 2023 executive order that mandated safety and security standards for the development and use of AI across the federal government. But hours after his inauguration in January, Trump rolled back that order. Days later, he signed another executive order, 'Removing Barriers to American Leadership in Artificial Intelligence,' which called for an acceleration of AI development by US tech companies and for versions of the technology that operated without ideological bias. The order included a mandate for administration officials to come up with 'an artificial intelligence action plan,' with policy guidelines to encourage the growth of the AI industry. The administration solicited comments from companies while it considered its plan.
OpenAI called for the administration to expand its list of countries eligible to import AI technologies from the United States, a list that has been limited by controls designed to stop China from gaining access to American technology. OpenAI and Google called for greater support in building AI data centers through tax breaks and fewer barriers for foreign investment. OpenAI, Google, and Meta also said they believed they had legal access to copyrighted works like books, films and art for training their AI. Meta asked the White House to issue an executive order or other action to 'clarify that the use of publicly available data to train models is unequivocally fair use.'

The plan released Wednesday did not include mentions of copyright law. But it did outline a wide range of policy shifts, divided into moves that the administration said would speed up the development of AI, make it easier to build and power data centers and promote the interests of American companies abroad.
