Why you should consider using AI if you've been avoiding it
That's according to Nici Sweaney, chief executive of AI consultancy AI Her Way.
Nici Sweaney says AI tools will redefine how we work, live and interact. (Supplied: Nici Sweaney)
Dr Sweaney, who lives on Yuin Country on the New South Wales South Coast, says using AI systems comes with a competitive advantage.
"We think that if you use it daily in work, you get about a 40 per cent increase in productivity and efficiency," she says.
"If you work a full-time job, that's two extra days of work a week."
She says ChatGPT, Copilot, Claude and Gemini, which are known as large language models, are among the most popular tools.
There's a divide in who is using these tools, with men more likely to be using them than women and "about twice as likely to be using [generative AI] in a workplace setting".
Sandra Peter is an associate professor at the University of Sydney Business School and co-director of the school's Sydney Executive Plus, which focuses on upskilling emerging leaders.
She thinks of large language models "as having a personal assistant" who is knowledgeable, eager to help, polite, but "sometimes does make mistakes".
How to get started
Dr Sweaney recommends people begin by using AI tools in low-stakes ways in their personal lives.
If you're keen to experiment with it at work, low-risk tasks are the best place to start, she says.
Experts' tips on how to start using AI tools:
Start with simple and low-stakes personal tasks
Identify tasks you can explain easily to others and don't enjoy
Try using different free systems and find one you prefer
When deciding the tasks that could be delegated to AI in your life, Dr Sweaney suggests making a list of the tasks you do often, which could include responding to emails, prioritising your workload, or writing the grocery list.
If you could explain the task to someone else, highlight it. Give it a gold star if you don't enjoy doing it. Dr Sweaney says these tasks are "prime territory" for delegating to generative AI.
Dr Peter says if she were a beginner, she would divide her daily tasks into categories "and think about how [AI tools] can help me in those different areas". Planning and preparation tasks are often good candidates.
She suggests people try out different tools to see what works best for them.
"I want to encourage [people] to experiment in very simple, straightforward ways." For example, you could start by asking a system to proofread some text.
Dr Sweaney doesn't advise paying for an AI tool. Most large language models have free versions and most people "won't be able to tell the difference".
"It's just about finding one that you enjoy using and then learning to use that well."
When not to use AI tools
Dr Sweaney says some people make the mistake of directing these tools as if they are using a search engine.
"It's much more like having an employee or an intern," she says. You're likely to get better results if you show the tool an example, and describe what you do and don't like.
Zena Assaad says there are risks involved when using AI programs and tools. (Supplied: Zena Assaad)
Dr Peter says these tools don't excel at maths and recommends you use a calculator instead.
"Don't use it as an accuracy machine," she also warns. These tools are better at summarising or critiquing content you offer up, she says.
Zena Assaad is a senior lecturer at the Australian National University's School of Engineering, on Ngunnawal Country in Canberra, whose research interests include the safety of AI systems.
She encourages caution when using these tools, especially in work settings or when sensitive information is involved.
Dr Assaad says while these tools and systems can be very helpful, a lot of people are using them when they shouldn't be.
"I do think that we're seeing a loss of critical thinking skills by using these tools," she says.
"It's your conscious choice whether or not you use it, and how you use it."
What about the information I input?
Dr Assaad says that when we engage with these systems, our personal information is being used to improve them, and these systems can then be used in ways we might not be comfortable with.
You can usually opt out of your data being used to train the AI model, Dr Assaad says, but it is often "hard to find" out how to do so, with many users "opting in" by default.
Dr Peter encourages people to be "very mindful" of what they're submitting, particularly if any information is confidential, or not their own work or data.
Dr Sweaney says: "If you want to be really safe, turn data sharing off, and if you wouldn't put it on a public forum maybe think twice about whether you want to use AI."
Other ethical considerations
Dr Peter says there are a myriad of ethical considerations that come with these tools and systems.
While you may use it to check spelling or for feedback, "you don't want to pass off AI work as your own".
Also, if you're using these tools to recreate work in the style of an author or artist, those creators are "not being remunerated", despite some of these systems having been trained on their work.