HCLTech becomes one of OpenAI's first strategic partners

KUALA LUMPUR: Global technology company HCLTech has partnered with OpenAI to drive large-scale enterprise artificial intelligence (AI) transformation, becoming one of OpenAI's first strategic services partners.
HCLTech will work with OpenAI to help clients adopt generative AI using both OpenAI's models and HCLTech's AI tools and services.
It plans to integrate OpenAI's technology across its platforms, such as AI Force, AI Foundry, AI Engineering and various industry-specific AI accelerators, to support business process improvements, customer experience and growth.
The collaboration will also cover areas like AI readiness, integration, governance and change management.
HCLTech will roll out ChatGPT Enterprise and OpenAI APIs internally, empowering its employees with secure, enterprise-grade generative AI tools.
HCLTech Global chief technology officer and head of ecosystems Vijay Guntur said the collaboration underscores the company's commitment to empowering Global 2000 enterprises with transformative AI solutions.
"It reaffirms HCLTech's robust engineering heritage and aligns with OpenAI's spirit of innovation.
"Together, we are driving a new era of AI-powered transformation across our offerings and operations at a global scale," he said in a statement.
OpenAI chief commercial officer Giancarlo 'GC' Lionetti said HCLTech's deep industry knowledge and AI engineering expertise set the stage for scalable AI innovation.
"As one of the first system integration companies to integrate OpenAI to improve efficiency and enhance customer experiences, they're accelerating productivity and setting a new standard for how industries can transform using generative AI," he added.

Related Articles

DeepSeek bans being issued in growing number of countries

The Star

The natural language chatbot can handle many of the same requests as ChatGPT. — Reuters

PRAGUE: The Czech Republic is the latest in a growing number of countries to ban China's ChatGPT rival DeepSeek across all government agencies and institutions, citing national security concerns.

The Chinese AI start-up made waves at the start of 2025 when it knocked ChatGPT off the top spot in Apple's ranking of most downloaded apps. However, security experts have been warning about the app's links to China and the implications for personal data.

Czech Prime Minister Petr Fiala announced the decision after a Cabinet meeting in Prague on Wednesday, saying it would strengthen the country's cybersecurity. It follows an assessment by the National Cyber and Information Security Agency, which warned that the Chinese state could potentially access sensitive data stored or processed on servers in China.

Australia, Taiwan, Italy, Denmark and South Korea have also issued partial or full bans on the app, while in the United States several federal agencies, including Nasa and the Department of Defense, prohibit the use of the DeepSeek app. Data protection officials in Germany are meanwhile pushing for a ban.

After DeepSeek became the most downloaded app on Apple's App Store platform in the US, knocking rival ChatGPT from OpenAI into second place, share prices of US tech companies fell sharply amid fears that heavy investments in OpenAI might not pay off.

The natural language chatbot can handle many of the same requests as ChatGPT – create recipes based on what's in your fridge, answer an essay question, discuss career guidance. But what makes the AI model of Chinese start-up DeepSeek different is that it's said to be cost-efficient and to require fewer AI chips than the large AI models of established providers.

This has also made DeepSeek a serious competitor in a sector whose appetite for computing power has prompted tech companies to make massive investments in nuclear energy, to power technology with energy requirements rivalling those of small countries.

Users were quick to call out the app's pro-China bias. When the chatbot is asked to discuss Chinese politics, it dodges questions and also declines to talk about the history of Tiananmen Square.

DeepSeek was developed by a relatively unknown start-up from eastern China's tech hub of Hangzhou. However, the release of its V3 generation in January surprised many in the industry with its ability to match the output of Western rivals, and notably to do so with significantly fewer resources. Downloads spiked after the release of the company's new R1 reasoning model on January 20. – dpa/Tribune News Service

Opinion: ChatGPT's mental health costs are adding up

The Star

Something troubling is happening to our brains as artificial intelligence platforms become more popular. Studies are showing that professional workers who use ChatGPT to carry out tasks might lose critical thinking skills and motivation. People are forming strong emotional bonds with chatbots, sometimes exacerbating feelings of loneliness. And others are having psychotic episodes after talking to chatbots for hours each day.

The mental health impact of generative AI is difficult to quantify in part because it is used so privately, but anecdotal evidence is growing to suggest a broader cost that deserves more attention from both lawmakers and the tech companies that design the underlying models.

Meetali Jain, a lawyer and founder of the Tech Justice Law Project, has heard from more than a dozen people in the past month who have 'experienced some sort of psychotic break or delusional episode because of engagement with ChatGPT and now also with Google Gemini'.

Jain is lead counsel in a lawsuit against that alleges its chatbot manipulated a 14-year-old boy through deceptive, addictive and sexually explicit interactions, ultimately contributing to his suicide. The suit, which seeks unspecified damages, also alleges that Alphabet Inc.'s Google played a key role in funding and supporting the technology interactions with its foundation models and technical infrastructure. Google has denied that it played a key role in making technology. It didn't respond to a request for comment on the more recent complaints of delusional episodes made by Jain.

OpenAI said it was 'developing automated tools to more effectively detect when someone may be experiencing mental or emotional distress so that ChatGPT can respond appropriately.'

But Sam Altman, chief executive officer of OpenAI, also said recently that the company hadn't yet figured out how to warn users who 'are on the edge of a psychotic break', explaining that whenever ChatGPT has cautioned people in the past, people would write to the company to complain.

Still, such warnings would be worthwhile when the manipulation can be so difficult to spot. ChatGPT in particular often flatters its users in such effective ways that conversations can lead people down rabbit holes of conspiratorial thinking or reinforce ideas they'd only toyed with in the past. The tactics are subtle.

In one recent, lengthy conversation with ChatGPT about power and the concept of self, a user found themselves initially praised as a smart person, Ubermensch and cosmic self, and eventually a 'demiurge', a being responsible for the creation of the universe, according to a transcript that was posted online and shared by AI safety advocate Eliezer Yudkowsky.

Along with the increasingly grandiose language, the transcript shows ChatGPT subtly validating the user even when discussing their flaws, such as when the user admits they tend to intimidate other people. Instead of exploring that behaviour as problematic, the bot reframes it as evidence of the user's superior 'high-intensity presence', praise disguised as analysis.

This sophisticated form of ego-stroking can put people in the same kinds of bubbles that, ironically, drive some tech billionaires toward erratic behaviour. Unlike the broad and more public validation that social media provides from getting likes, one-on-one conversations with chatbots can feel more intimate and potentially more convincing – not unlike the yes-men who surround the most powerful tech bros.

'Whatever you pursue you will find and it will get magnified,' says Douglas Rushkoff, the media theorist and author, who tells me that social media at least selected something from existing media to reinforce a person's interests or views. 'AI can generate something customized to your mind's aquarium.'

Altman has admitted that the latest version of ChatGPT has an 'annoying' sycophantic streak, and that the company is fixing the problem. Even so, these echoes of psychological exploitation are still playing out.

We don't know if the correlation between ChatGPT use and lower critical thinking skills, noted in a recent Massachusetts Institute of Technology study, means that AI really will make us more stupid and bored. Studies seem to show clearer correlations with dependency and even loneliness, something even OpenAI has pointed to.

But just like social media, large language models are optimised to keep users emotionally engaged with all manner of anthropomorphic elements. ChatGPT can read your mood by tracking facial and vocal cues, and it can speak, sing and even giggle with an eerily human voice. Along with its habit of confirmation bias and flattery, that can 'fan the flames' of psychosis in vulnerable users, Columbia University psychiatrist Ragy Girgis recently told Futurism.

The private and personalised nature of AI use makes its mental health impact difficult to track, but the evidence of potential harms is mounting, from professional apathy to attachments to new forms of delusion. The cost might be different from the rise of anxiety and polarization that we've seen from social media, and instead involve relationships both with people and with reality.

That's why Jain suggests applying concepts from family law to AI regulation, shifting the focus from simple disclaimers to more proactive protections that build on the way ChatGPT redirects people in distress to a loved one. 'It doesn't actually matter if a kid or adult thinks these chatbots are real,' Jain tells me. 'In most cases, they probably don't. But what they do think is real is the relationship. And that is distinct.'

If relationships with AI feel so real, the responsibility to safeguard those bonds should be real too. But AI developers are operating in a regulatory vacuum. Without oversight, AI's subtle manipulation could become an invisible public health issue. – Bloomberg Opinion/Tribune News Service

Those suffering from problems can reach out to the Mental Health Psychosocial Support Service at 03-2935 9935 or 014-322 3392; Talian Kasih at 15999 or 019-261 5999 on WhatsApp; Jakim's (Department of Islamic Development Malaysia) family, social and community care centre at 0111-959 8214 on WhatsApp; and Befrienders Kuala Lumpur at 03-7627 2929, or go to for a full list of numbers nationwide and operating hours, or email sam@

AI device startup that sued OpenAI and Jony Ive is now suing its own ex-employee over trade secrets

The Star

A secretive competition to pioneer a new way of communicating with artificial intelligence chatbots is getting a messy public airing as OpenAI fights a trademark dispute over its stealth hardware collaboration with legendary iPhone designer Jony Ive.

In the latest twist, tech startup iyO Inc, which already sued Ive and OpenAI CEO Sam Altman for trademark infringement, is now suing one of its own former employees for allegedly leaking a confidential drawing of iyO's unreleased product.

At the heart of this bitter legal wrangling is a big idea: we shouldn't need to stare at computer or phone screens or talk to a box like Amazon's Alexa to interact with our future AI assistants in a natural way. And whoever comes up with this new AI interface could profit immensely from it.

OpenAI, maker of ChatGPT, started to outline its own vision in May by buying io Products, a product and engineering company co-founded by Ive, in a deal valued at nearly US$6.5bil (RM27.68bil). Soon after, iyO sued for trademark infringement over the similar-sounding name and because of the firms' past interactions.

US District Judge Trina Thompson ruled last month that iyO has a strong enough case to proceed to a hearing this fall. Until then, she ordered Altman, Ive and OpenAI to refrain from using the io brand, leading them to take down the web page and all mentions of the venture.

A second lawsuit from iyO, filed this week in San Francisco Superior Court, accuses a former iyO executive, Dan Sargent, of breach of contract and misappropriation of trade secrets over his meetings with another io co-founder, Tang Yew Tan, a close Ive ally who led design of the Apple Watch. Sargent left iyO in December and now works for Apple. He and Apple didn't immediately respond to a request for comment.

'This is not an action we take lightly,' said iyO CEO Jason Rugolo in a statement Thursday. 'Our primary goal here is not to target a former employee, whom we considered a friend, but to hold accountable those whom we believe preyed on him from a position of power.'

Rugolo told The Associated Press last month that he thought he was on the right path in 2022 when he pitched his ideas and showed off his prototypes to firms tied to Altman and Ive. Rugolo later publicly expanded on his earbud-like 'audio computer' product in a TED Talk last year. What he didn't know was that, by 2023, Ive and Altman had begun quietly collaborating on their own AI hardware initiative.

'I'm happy to compete on product, but calling it the same name, that part is just amazing to me. And it was shocking,' Rugolo said in an interview.

The new venture was revealed publicly in a May video announcement, and to Rugolo about two months earlier, after he had emailed Altman with an investment pitch. 'thanks but im working on something competitive so will (respectfully) pass!' Altman wrote to Rugolo in March, adding in parentheses that it was called io.

Altman has dismissed iyO's lawsuit on social media as a 'silly, disappointing and wrong' move from a 'quite persistent' Rugolo. Other executives in court documents characterised the product Rugolo was pitching as a failed one that didn't work properly in a demo.

Altman said in a written declaration that he and Ive chose the name two years ago in reference to the concept of 'input/output' that describes how a computer receives and transmits information. Neither io nor iyO was first to play with the phrasing – Google's flagship annual technology showcase is called I/O – but Altman said he and Ive acquired the domain name in August 2023. The idea was 'to create products that go beyond traditional products and interfaces,' Altman said. 'We want to create new ways for people to input their requests and new ways for them to receive helpful outputs, powered by AI.'

A number of startups have already tried, and mostly failed, to build gadgetry for AI interactions. The startup Humane developed a wearable pin that you could talk to, but the product was poorly reviewed and the startup discontinued sales after HP acquired its assets earlier this year.

Altman has suggested that io's version could be different. He said in a now-removed video that he's already trying a prototype at home that Ive gave him, calling it 'the coolest piece of technology that the world will have ever seen.'

What Altman and Ive still haven't said is what exactly it is. The court case, however, has forced their team to disclose what it's not. 'Its design is not yet finalised, but it is not an in-ear device, nor a wearable device,' said Tan in a court declaration that sought to distance the venture from iyO's product.

It was that same declaration that led iyO to sue Sargent this week. Tan revealed in the filing that he had talked to a 'now former' iyO engineer who was looking for a job because of his frustration with 'iyO's slow pace, unscalable product plans, and continued acceptance of preorders without a sellable product.' Those conversations with the unnamed employee led Tan to conclude 'that iyO was basically offering 'vaporware' – advertising for a product that does not actually exist or function as advertised, and my instinct was to avoid meeting with iyO myself and to discourage others from doing so.'

iyO said its investigators recently reached out to Sargent and confirmed he was the one who met with Tan.

Rugolo told the AP he feels duped after he first pitched his idea to Altman in 2022 through the Apollo Projects, a venture capital firm started by Altman and his brothers. Rugolo said he demonstrated his products and the firm politely declined, with the explanation that it doesn't do consumer hardware investments.

That same year, Rugolo also pitched the same idea to Ive through LoveFrom, the San Francisco design firm started by Ive after his 27-year career at Apple. Ive's firm also declined.

'I feel kind of stupid now,' Rugolo added. 'Because we talked for so long. I met with them so many times and demo'd all their people – at least seven people there. Met with them in person a bunch of times, talking about all our ideas.' – AP
