
WorkJam & Google Cloud expand AI tools for frontline staff
WorkJam's AI agent, built on Google Cloud and leveraging Google's Gemini models, is being tailored for industries with complex frontline environments, such as retail, healthcare, and logistics. The collaboration aims to simplify the way frontline teams access knowledge, receive training, and execute tasks in real time.
WorkJam's CEO, Steven Kramer, described the AI initiative as "a redefinition of labour utilisation," adding that it will ensure "the right people are in the right place, at the right time, with intelligent support to balance workloads and drive productivity."
Central to the rollout is WorkJam's AI roadmap, developed on Google Cloud's infrastructure, which provides the security, compliance, and scalability required by sectors including manufacturing. The platform is intended to address operational demands by delivering consistent global performance and meeting stringent enterprise requirements.
The AI features are powered by Google's Gemini models, which offer what the companies describe as multimodal intelligence. This approach means the AI agent can interpret and respond to a variety of input types—text, voice, and visual—enabling natural and intuitive interactions for users on the frontline.
The companies emphasise that this multimodal capability is designed to automate day-to-day tasks, retrieve information, and provide on-the-spot assistance, thereby enhancing productivity and reducing obstacles for employees during their shifts.
WorkJam said its vision is to integrate such AI-driven intelligence and automation directly into daily workflows. The aim is to give employees rapid access to necessary information, adaptable training resources, and straightforward task management tools regardless of the user's language or role. The company said this will help streamline operations and allow managers to devote more attention to higher-value responsibilities.
The solution's AI-driven approach is also focused on labour deployment, seeking to ensure that staffing is more precisely aligned with demand, while increasing employee engagement and improving service to customers.
The expanded collaboration allows WorkJam to work closely with Google Cloud, benefiting from early access to AI technology and accelerating the delivery of advanced features to its customers across the APAC region.
Steven Kramer, CEO, WorkJam said, "By integrating the powerful reasoning capabilities of Google's Gemini models into the WorkJam platform, we're redefining labour utilisation for frontline teams across the APAC region. This isn't just about smarter scheduling—it's about giving managers and employees real-time support to make better decisions, balance workloads, and drive productivity. With Gemini models, WorkJam ensures the right people are in the right place at the right time, improving operational efficiency while creating a more engaging and productive work environment."

Related Articles

RNZ News
15 hours ago
NZ's new AI strategy is long on 'economic opportunity' but short on managing ethical and social risk
By Andrew Lensen*
Photo: Supplied / Callaghan Innovation

The government's newly unveiled National AI Strategy is all about what its title says: "Investing with Confidence". It tells businesses that Aotearoa New Zealand is open for AI use, and that our "light touch" approach won't get in their way. The question now is whether the claims made for AI by Minister of Science, Innovation and Technology Shane Reti - that it will help boost productivity and enable the economy to grow by billions of dollars - can be justified.

Generative AI - the kind powering ChatGPT, CoPilot, and Google's video generator Veo 3 - is certainly earning money. In its latest funding round in April, OpenAI was valued at US$300 billion. Nvidia, which makes the hardware that powers AI technology, just became the first publicly traded company to surpass a $4 trillion market valuation. It'd be great if New Zealand could get a slice of that pie.

New Zealand doesn't have the capacity to build new generative AI systems, however. That takes tens of thousands of Nvidia's chips, costing many millions of dollars that only big tech companies or large nation states can afford. What New Zealand can do is build new systems and services around these models, either by fine-tuning them or using them as part of a bigger software system or service.

The government isn't offering any new money to help companies do this. Its AI strategy is about reducing barriers, providing regulatory guidance, building capacity, and ensuring adoption happens responsibly. But there aren't many barriers to begin with. The regulatory guidance contained in the strategy essentially says "we won't regulate". Existing laws are said to be "technology-neutral" and therefore sufficient. As for building capacity, the country's tertiary sector is more under-funded than ever, with universities cutting courses and staff.
Humanities research into AI ethics is also ineligible for government funding as it doesn't contribute to economic growth.

The issue of responsible adoption is perhaps of most concern. The 42-page "Responsible AI Guidance for Businesses" document, released alongside the strategy, contains useful material on issues such as detecting bias, measuring model accuracy, and human oversight. But it is just that - guidance - and entirely voluntary.

This puts New Zealand among the most relaxed nations when it comes to AI regulation, along with Japan and Singapore. At the other end is the European Union, which enacted its comprehensive AI Act in 2024, and has stood fast against lobbying to delay its legislative rollout.

The relaxed approach is interesting in light of New Zealand being ranked third-to-last out of 47 countries in a recent survey of trust in AI. In another survey from last year, 66 percent of New Zealanders reported being nervous about the impacts of AI.

Some of the nervousness can be explained by AI being a new technology with well-documented examples of inappropriate use, intentional or not. Deepfakes as a form of cyberbullying have become a major concern. Even the ACT Party, not generally in favour of more regulation, wants to criminalise the creation and sharing of non-consensual, sexually explicit deepfakes. Generative image, video, and music creation is reducing the demand for creative workers, even though it is their very work that was used to train the AI models.

But there are other, more subtle issues, too. AI systems learn from data. If that data is biased, then those systems will learn to be biased, too. New Zealanders are right to be anxious about the prospect of private sector companies denying them jobs, entry to supermarkets, or a bank loan because of something in their pasts. Because modern deep learning models are so complex and impenetrable, it can be impossible to determine how an AI system made a decision.
And what of the potential for AI to be used online to mislead voters and discredit the democratic process? As the New York Times has reported, this may already have occurred in at least 50 cases. The strategy is essentially silent on all of these issues.

It also doesn't mention Te Tiriti o Waitangi/Treaty of Waitangi. Even Google's AI summary tells me this is the nation's founding document, laying the groundwork for Māori and the Crown to coexist. AI, like any data-driven system, has the potential to disproportionately disadvantage Māori if it involves systems from overseas designed (and trained) for other populations. Allowing these systems to be imported and deployed in Aotearoa New Zealand in sensitive applications - healthcare or justice, for example - without any regulation or oversight risks worsening inequalities even further.

What's the alternative? The EU offers some useful answers. It has taken the approach of categorising AI uses based on risk: uses posing an unacceptable risk are banned outright, high-risk uses face strict obligations, and limited- and minimal-risk uses carry lighter or no requirements.

This feels like a mature approach New Zealand might emulate. It wouldn't stymie productivity much - unless companies were doing something risky. In which case, the 66 percent of New Zealanders who are nervous about AI might well agree it's worth slowing down and getting it right.

*Andrew Lensen is a Senior Lecturer in Artificial Intelligence at Te Herenga Waka - Victoria University of Wellington. This story was originally published on The Conversation.


Newsroom
15 hours ago
NZ can't afford to be careless with its AI strategy
Opinion: The Government's new strategy for AI was announced last week to a justifiably flat reception. As far as national-level policy goes, the document is severely lacking.

One of the main culprits is prominently displayed at the end of Science, Innovation and Technology Minister Shane Reti's foreword: 'This document was written with the assistance of AI.' For those with some experience of AI, this language is generally recognised to be a precursor to fairly unexceptional outputs. The minister's commitment to walking the talk on AI, as he says, could have been seen as admirable if the resulting output was not so clumsy, and did not carry so many of the hallmarks of AI-generated content.

To be blunt, the document is poorly written, badly structured, and under-researched. It cites eight documents in total, half of which are produced by industry - an amount of research suitable for a first-year university student. It makes no effort to integrate arguments or sources critical of AI, nor does it provide any balanced assessment. This same carelessness is exhibited in the web version of the document, which has scarcely been edited and includes a number of errors like 'gnerative AI' as opposed to generative AI.

It also contains very little actual strategy or targets. It reads more like a dossier from Meta, OpenAI or Anthropic and is filled with just as much industry language. In short, it is entirely unsuitable to be the defining strategic document to guide New Zealand's engagement with what it accurately defines as 'one of the most significant technological opportunities of our time'. Especially not in a global climate where there is an ever-growing appreciation for the potential harms of AI, as seen in the growing number of class actions in the United States, or resources like the AI Incident Database. AI harm and job displacement are very real and important problems. Yet, in the Strategy for AI they are described as dystopian scenarios being used by the media to compound uncertainty.

The problem is not necessarily that AI was used to assist the production of the document; it is the extent to which it was used, and how. AI has a number of useful applications, such as spellchecking, assisting with structure, and providing counter-points which can help further flesh out your writing. However, it is inappropriate to use generative AI to produce national-level policy. What is particularly alarming is that anyone with a ChatGPT licence and about a minute of spare time could very easily produce a document similar in content, tone and structure to the government's strategy.

Thankfully bad policy can be improved, and hopefully this one will be, eventually. But by far the most damning aspect of the strategy is the underlying notion that generative AI should have a key role in developing policy in New Zealand. There is an unappealing hubris in thinking that New Zealand's public servants, many of whom are phenomenally skilled, deeply caring, and out of work, could be replaced or meaningfully augmented by such a ham-handed and poorly thought-out application of generative AI.

Unfortunately, it is likely that the strategy's fast and efficient rollout will be seen by the Government as a success regardless of the quality of the output. This will no doubt embolden it to continue to use generative AI as an aid in the production of policy in future. This is a real cause for concern, as it could be used to justify even more cuts to the public service and further undermine the function of our democracy.

Use of generative AI in the development of policy also raises fundamental questions as to what our public service is and should be. It would seem imprudent to employ our public servants on the basis of their care, knowledge, expertise and diligence and then require them to delegate their work to generative AI.
A public service defined solely by the pace at which it can deliver, as opposed to the quality of that delivery, is at best antithetical to the goals of good government.

RNZ News
2 days ago
Samsung is looking into more AI devices - potentially including earrings and necklaces
By Lisa Eadicicco, CNN
A man tries on smart glasses capable of real-time translation. Photo: Lian Yi / Xinhua via AFP

Samsung is looking into new wearable devices, potentially including earrings and necklaces, amid an industry-wide push to develop new types of AI-powered consumer electronics.

AI could enable a new wave of devices that allow users to communicate and get things done more quickly without having to take out a phone, Won-joon Choi, chief operating officer for Samsung's mobile experience division, told CNN this week. For Samsung, these types of new devices could be something you wear around your neck, dangle from your ears or slip on your finger.

"We believe it should be wearable, something that you shouldn't carry, (that) you don't need to carry," he said. "So it could be something that you wear, glasses, earrings, watches, rings and sometimes (a) necklace."

Choi's comments underscore the opportunity tech giants see to develop new hardware products around AI, a technology that some say is expected to be as impactful as the internet itself. AI services like OpenAI's ChatGPT and Google's Gemini have moved beyond basic text prompts and are getting better at handling complex tasks. That's led tech giants to look into devices that require less manual input than smartphones, which largely require typing and swiping on screens.

That search is already in full swing, starting with smart glasses. Meta has touted its AI-powered Ray-Ban smart glasses, of which 2 million have been sold since 2023, as a success. The Facebook parent also recently acquired a minority stake in Ray-Ban parent company EssilorLuxottica, according to Bloomberg, further indicating the company's interest in AI-powered wearable gadgets. Samsung, Google and Snap are also developing smart glasses, while OpenAI and ex-Apple designer Jony Ive are collaborating on a mysterious new AI device for next year.
When CNN asked Choi whether Samsung is actively looking into developing earrings or other smart jewelry, like a pendant or bracelet, Choi said the company is "looking at all kinds of possibilities". "What do you wear? Glasses, earrings… necklaces, watches and rings, something like those," he said.

However, that doesn't mean those possibilities will become products. Samsung and other tech companies routinely develop prototypes and evaluate new technologies internally without bringing them to market. Some tech startups have already unsuccessfully tried to develop new AI gadgets to replace smartphones at certain tasks. The Humane AI Pin, created by a pair of Apple veterans, flopped because of its high price and buggy performance. The company shut down the product and sold parts of itself to computing giant HP in February. Another device called the Rabbit R1 also launched to a lackluster reception last year, although it's undergone significant updates since then. And a startup called Friend created an AI necklace that's meant to be a digital companion, although its launch has been delayed until the third quarter of this year.

Samsung's approach, unlike some of these options, will involve a device that's a companion to your phone rather than a standalone product, similar to the company's smartwatches, according to Choi. And the company's upcoming smart glasses, which it hasn't revealed many details about yet, could be just the start.

"We are actively working on glasses, but some people do not want to wear glasses because they change their look," he said. "So we are also exploring other types of devices."

- CNN