
Artificial Intelligence For Business: Another Good Intro To The Issues
Artificial Intelligence for Business, by Kamales Lardi, is another good introduction to the subject that suffers from the same weaknesses as most of the other books I've read in the area. Let's start with the good. The book is an accessible, easy-to-read review of what business management wants to know about artificial intelligence (AI).
At a high level, the review of AI is good, but the details are another matter. I'm not sure whether it's confusion or a consultant's need to use the buzzwords, but some of the details left me unimpressed. For instance, machine learning isn't really limited to AI; it was around in the business intelligence (BI) era. To the book's credit, it does mention BI, but it doesn't focus on the point I've raised in previous reviews: a lot of what's being pitched as AI's value has been done by BI for decades, including categorization and clustering. The difference is the volume of data, which can lead to both higher precision and higher costs. It's up to management to look at the precision a problem actually requires and decide whether the ROI favors BI or AI in each situation. Both AI academics and consultants want to push AI, but remember that it's a key part of a modern solution, not a panacea.
A section of the book I really liked was chapter four, on ethics. Ms. Lardi does a very good job covering both the concepts and examples; it's the must-read of the book. The chapter before that, covering AI working with other modern technologies, is okay. Again, it's good at a high level, but the details are questionable. For instance, distributed ledgers are a key component of blockchain, but that isn't clearly defined. While distributed ledgers are valuable in supply chains and elsewhere, the examples I've seen use only the ledger, because consensus slows down real business processes and isn't really needed. Again, I suspect the reason that isn't made clear is the 'need' to push the blockchain buzzword.
Another mixed blessing is the chapter on the future of work. While the author makes an excellent case for massively increased unemployment, that case is mitigated with the usual apologia that 'AI-driven automation does not substitute human labour completely … the human workforce will be able to focus on complex tasks.' As previous articles in this column, and those of plenty of other writers, have pointed out, there is a major problem with that thesis.
People can do the complex tasks in a process because they began as rookies with simpler tasks and moved up to the complex ones as they gained skill. If AI does the simple tasks, how are humans to learn the complex ones? Are business owners more likely to take new hires and spend significant time training them for the complex tasks, or to demand AI that moves upstream and lets them replace all employees? Yes, that's a rhetorical question.
The rest of the book is a good explanation of what's needed to begin the process of expanding AI's use in business and, of course, a setup for the reasons why a consultant can help the reader. The first part is good. The second is neither good nor bad, just what is to be expected.
This is another book where the reader should always remember the author's background and purpose. It's far better than many coming out of academia, think tanks, and the blend of the two – people who came from academia, made a bunch of money at a startup without really understanding business, and who now think they know everything. The author is a consultant in the industry. The purpose of the book is to take her real-life business experience, explain it to her market and, of course, drum up business. Remember that and it will be a positive read.
Related Articles
Yahoo
China wants AI in expanded trade deal with Australia
Strengthening ties between Chinese and Australian artificial intelligence researchers could be on the agenda when the prime minister visits China this week, as Beijing seeks to capitalise on trade tensions with the US. With President Donald Trump's tariffs straining relations with Australia's traditionally closest ally, China's top diplomat in Australia Xiao Qian has called for greater collaboration in fields like AI, healthcare and green energy under a revised free-trade deal between the two nations. Prime Minister Anthony Albanese's visit to China on Saturday comes as the Sino-Australian relationship continues to build following a downturn in relations under former Liberal prime minister Scott Morrison. "China and Australia are natural partners with complementary economic strengths," Mr Xiao wrote in an opinion piece published in the Australian Financial Review on Monday. "Standing at a new historical starting point, now is the time to advance bilateral relations with steady progress." Trade volumes between the two nations have bounced back after China lifted sanctions on Australian exports. The ambassador believes Mr Albanese's visit marks an opportunity to broaden the terms of the 10-year-old free-trade agreement. "We are willing to review the agreement with a more open attitude and higher standard, further consolidate co-operation in traditional areas such as agriculture and mining, and actively explore new growth areas in emerging fields like artificial intelligence, healthcare, green energy, and the digital economy, elevating practical co-operation to new heights," Mr Xiao wrote. The promotion of AI ties, amid the Albanese government's agenda to boost productivity, follows similar provisions in recently signed trade deals between Australia and partners such as Singapore, the UK and the UAE. These clauses encourage sharing AI research and commercialisation opportunities between the countries, as well as promoting its responsible use. 
There are attractive opportunities to deepen research collaboration in the fundamental science of AI, even though there are challenges to expanding the use of Chinese AI programs in Australia, said UNSW Professor Toby Walsh. "It's going to be very hard for us to have too deep relationships within terms of AI, because you can touch upon things like data sovereignty and various other things that we value," the AI expert told AAP. "It's not like just sending them gold and they take it, and that's the end of the partnership. "Sharing technologies like AI could pose significant national security and other risks." Allowing Chinese tech companies access to the Australian market has been a sore spot in the bilateral relationship. In 2018, then-Liberal prime minister Malcolm Turnbull banned the Chinese tech giant Huawei from developing 5G infrastructure in Australia over concerns the Chinese government could force the company to hand over Australians' data or interfere with the network. The decision prompted strenuous protests from Beijing and was a factor behind a subsequent diplomatic fallout. Prof Walsh said there were still areas where collaboration could be beneficial without forfeiting Australian security. "It's about exchanging people, it's training, it's us going to work with them and them coming to work with us," he said. "So it's things that we've always done in terms of scientific exchange, supercharging our science, supercharging their science, and then building our own business off the back of that scientific knowledge. "China will be interested in partnering with us. "We have wonderful medical data, and we have a joined-up healthcare system. "There's huge value in those national data sets we have that no one else has." As the US drives a wedge through a fragmenting global order, Mr Xiao framed China as a like-minded partner for Australia - one that shares Australia's interests in pushing back against unilateralism and protectionism. 
China is willing to work with Australia to strengthen multilateral organisations like the United Nations and ASEAN, safeguard regional peace and the international rules-based order, and advocate for free trade, the ambassador said. Assistant Trade Minister Matt Thistlethwaite said the government was seeking to strengthen access to China - Australia's largest trading partner - in the best interests of Australians.


Forbes
I Want AI In My Business In The Best Way
It's an exciting time, and a challenging time, for business. Everyone from the C-suite on down is scrambling to figure out how to use brand-new tools and ideas to their advantage. For the rank and file, the people below management level, the imperative is to justify their own work by learning how AI applies to any given role (I cite Tobi Lütke's Shopify memo). Managers and leaders, on the other hand, have a slightly different goal: they have to figure out how to use AI to the benefit of the organization as a whole. So how do you gain confidence for, as a fortune cookie might say, these uncertain times?
Researching AI
One way to start is to learn about the technology in general, to begin becoming knowledgeable about what the LLMs do, and why. Just for example, I came across a list from Codemotion of common algorithm components and stochastic ideas used in AI/ML. If you're in a leadership role, it's to your practical benefit to know and understand these terms. They represent a short survey of how we started using AI for things like restaurant recommendations, analysis of performance reviews, and decision support. In other words, if you can explain each of these types of machine learning mechanisms, you're closer to the top of the pile when it comes to brainstorming on AI.
Tips from LLM Engines
What do MS Copilot and ChatGPT have to say about the issue? After all, you're trying to understand them, in a way. I asked Copilot to enumerate some tips for better AI adoption. (For sourcing, Copilot cited Codemotion and Analytics Insight, along with giants Microsoft and IBM, and our own work at Forbes.)
Then I asked ChatGPT and got this (I forgot to tell it not to be so wordy):
1. Start Small and Prioritize High-Impact Use Cases
2. Invest in Clean, Well-Organized Data
3. Upskill Your Workforce
4. Choose the Right Partners and Technology Stack
5. Focus on Change Management and Clear Communication
After Human Review…
I was looking at how these pieces of advice overlap. You could say that 'define clear objective and scope' from Copilot tracks to ChatGPT's #5 tip on clear communication. As for 'Choose the Right Partners and Technology Stack' (from ChatGPT), there's Copilot's exhortation to have the right tools on hand. I'll leave it up to you, the reader, to decide whether these recommendations are overly generic.
More on AI Adoption
Then there's this panel discussion from IIA, moderated by Paul Baier of GAI Insights. '(You should have) practical applications of evolving technology, but also have dedicated focus on your own plan execution,' said panelist Venkat Vedam. '(You should have) use cases and business cases mapped out … this year, next year, but at the same time, we don't want to lose on the skills gaps.' The panel also discussed shadow IT, where employees may be using tools not explicitly endorsed by the organization. 'I feel like shadow IT is not a problem,' Vedam said. 'It's more of an opportunity … the reason shadow IT exists is because the employees are not getting the tools they need … it's also manifesting in a slightly more structured fashion,' he explained. 'There's a bunch of engineers and developers who are servicing a small set of users that are not technically part of the technology organization,' he said. 'The reason why the shadow IT works is (that) the people who are implementing those tools know the business problems well, and have the flexibility to adopt new technologies. (The goal is) to build an operating model around it … (and) to have a governance process to take what works in the shadow IT and make it real.'
'With everything changing so fast, I think it's hard to (have shadow IT because) your organization doesn't really want that so much anymore,' said panelist Joan LaRovere. 'What is the problem you're trying to solve? And … do we need to think about other vendors or internal builds? … you (should) know what you need in your tech stack to actually solve the problems your organization needs to solve, and you need that oversight.' 'I think what you're trading off against is security,' added panelist Tomas Reimers. 'And so if your employees are bringing in tools that have access to customer data or personal health information, that's bad. If they're using AI tools to make restaurant reservations for a meeting they have at noon, it probably doesn't matter.'
The Spread of Information
Later, Reimers talked about observing tech processes and interactions to get a better bird's-eye view of what's happening. 'One of my favorite graphs we have in the office is, whenever we go into an organization, we can actually map the social network of developers that talk to each other, one of the artifacts of working in development. And then you can see where it's adopted. And it always looks like it starts at a node and it spreads out from there.' LaRovere mentioned the value of broader collaboration, which is another point that resonates with me in terms of offering part of a road map. 'I think one of the best things … is bringing people together and sharing either what they've done, showcasing what they've done, testing different things, creating that, what we call a learning community,' she said.
Your Own Business Case
I'll end with this: part of what I've learned over several decades of being around technology is that most new tools can either help or hinder a business (if you've read a good number of these blogs, you may have read this already) in terms of practical integration. There's usually a learning curve. If you don't prepare staff, you could be in for a lot of trouble.
And then there's fitting your applications to your business need, which is not a one-size-fits-all or cookie-cutter type of thing. But maybe this set of tips, from people, the web, and LLMs, is a good start.


Forbes
Silence Broken As Whistleblowers Fuel Accountability
Building a company isn't just about market fit or fundraising milestones — it's about fostering a culture where people feel empowered to raise concerns and call out wrongdoing when they see it. For entrepreneurs and startup leaders, embracing a 'see something, say something' mindset isn't simply an ethical box to check — it's a way to build lasting success, strengthen company culture, and prevent reputational or operational harm before it starts. Yet many founders overlook just how much effort it takes to lay this cultural foundation. They focus on product-market fit, hiring, or scaling strategies — while assuming that ethical behavior will naturally fall into place. In truth, the opposite is often true: without clear signals from leadership, a startup's speed and intensity can create conditions where ethical missteps go unnoticed or unaddressed.
The silent risks hiding inside startups
Startups move fast — often too fast for their own good when it comes to ethics. Founders juggle multiple roles, teams adjust on the fly, and influence tends to concentrate in just a few hands. So when a problem surfaces, whether it's a small policy slip or a major ethical breach, the impact doesn't stay contained. It can spread across the whole company before anyone realizes what's happened. In many startups, the voices leaders most need to hear go quiet — not because people don't care, but because they're unsure or afraid. Employees might hesitate to speak up out of fear of retaliation, damaging their reputation, or simply being seen as a troublemaker. And when leadership doesn't create clear, visible support for raising concerns, small problems are often left to fester until they turn into much bigger risks. One real danger here is the normalization of deviance — when small ethical compromises become routine, paving the way for larger issues down the line.
Leaders must remain vigilant and intentionally create systems that surface problems early, rather than hoping concerns will magically rise to the top. Creating a culture of accountability isn't just about installing a hotline or drafting a code of conduct. It's about embedding ethical leadership and open communication into the DNA of the company — from the founder's behavior to the tools the organization uses.
Four ways to make speaking up part of your culture
To build an ethical, accountable startup culture, leaders need more than good intentions. They need actionable strategies — concrete steps that move beyond vague values statements and translate into everyday practices employees can trust. Here are four essential ways entrepreneurial leaders can turn 'see something, say something' from a catchphrase into a lived, thriving part of their company culture.
1. Champion a culture of open communication
The best leaders don't just push information down the chain — they create space for real conversations. They make sure people know their input matters, even when the company is facing tough decisions or uncertainty. By following through on promises and staying open to feedback, leaders can slowly rebuild the trust that makes employees feel safe enough to raise concerns. When workplaces intentionally create space for honest dialogue — through team meetings, listening sessions, or routine check-ins — they lay the groundwork for a culture where people feel safe raising tough issues.
2. Lead by ethical example
Ethical leadership goes beyond good intentions; it's about daily action. Founders and leaders set the tone by upholding integrity, fairness, and respect — not just in policies, but in how they handle power, make decisions, and treat others. Modeling ethical behavior signals to teams that doing the right thing matters, even when shortcuts might bring faster wins.
When leaders stay consistent with their values, they don't just strengthen trust — they also draw in great talent and earn the loyalty of employees and customers alike.
3. Establish clear and trusted reporting mechanisms
Secure, anonymous reporting systems reduce the risk of retaliation, which has historically cost companies over $20 million in legal and operational fallout. Yet despite 52% of employees witnessing or experiencing misconduct, many choose not to report due to fear. As Sara Kennedy, a compliance expert at StarCompliance, explains, tools like theirs help companies implement configurable, confidential systems that connect the dots across employee activities while protecting anonymity. When combined with clear communication, training, and leadership follow-through, these platforms create a psychologically safe environment where employees trust that speaking up will lead to fair action, protecting both people and the business.
4. Recognize 'seeing something, saying something' as a strength
While 97% of employees say they would report misconduct if they saw it, only 50% actually do. Why? Almost half fear retaliation, and nearly as many believe nothing will change. Leaders can flip this narrative by framing reporting as an act of strength and loyalty, not disloyalty. Recognizing and rewarding employees who raise concerns, avoiding retaliatory behaviors (even perceived ones), and making ethics a regular part of team conversations all help create a culture where transparency becomes the norm — not the exception.
Why accountability gives startups a competitive edge
In startups, every decision matters — and ethical leadership is at the heart of lasting success. When founders focus on transparency, offer safe ways for employees to report concerns, and lead with accountability, they set the stage for companies built to last — not just to check compliance boxes.
A startup's ability to survive often hinges as much on the integrity of its people as on the strength of its innovations. The leaders who recognize this aren't just building businesses; they're shaping cultures that can stand the test of time.