
In the 'Golden Age' of tech stocks, how do we use tech itself to assess risk and evaluate the markets?
In the current Golden Age of the tech sector, the emergent AI and analytics tools created by top-performing companies are proving to be some of the best virtual assistants for evaluating stocks in the tumultuous 2020s, generally used in combination with traditional analytical techniques. Here's how:
1. Going beyond the basics. We have access to enormous amounts of research and data about every tradable stock, but traditional metrics, like revenue or P/E ratio, don't always tell the whole story with tech companies, especially those rapidly reinvesting for growth. Instead, one can look for:
Price-to-sales (P/S) ratio: Especially useful for high-growth, pre-profit tech firms.
Free cash flow (FCF) growth: Indicates whether a company is capable of self-funding continued innovation.
R&D expense growth: Is the business consistently investing in future products and features?
Scale and market cap: Is the company large enough to weather market challenges?
SG&A (selling, general, and admin expenses) to revenue: Offers clues about efficiency in scaling operations.
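Once the underlying figures are pulled from a filing or data feed, these screens are simple arithmetic. The sketch below works through the metrics above using entirely made-up figures for a fictional company; the function names and numbers are illustrative, not drawn from any real filing.

```python
# Illustrative calculations for the screening metrics above.
# All figures are hypothetical (a fictional company, in $ millions).

def price_to_sales(market_cap: float, revenue: float) -> float:
    """P/S ratio: market capitalisation divided by trailing revenue."""
    return market_cap / revenue

def yoy_growth(current: float, prior: float) -> float:
    """Year-on-year growth as a fraction of the prior-year figure."""
    return (current - prior) / prior

market_cap = 50_000                  # hypothetical market cap
revenue = 2_500                      # trailing twelve-month revenue
fcf_now, fcf_prior = 300.0, 200.0    # free cash flow, this year vs last
rnd_now, rnd_prior = 450.0, 360.0    # R&D spend, this year vs last
sga = 625.0                          # selling, general and admin expenses

print(f"P/S ratio:      {price_to_sales(market_cap, revenue):.1f}")  # 20.0
print(f"FCF growth:     {yoy_growth(fcf_now, fcf_prior):.0%}")       # 50%
print(f"R&D growth:     {yoy_growth(rnd_now, rnd_prior):.0%}")       # 25%
print(f"SG&A / revenue: {sga / revenue:.0%}")                        # 25%
```

A P/S of 20 with 50% FCF growth reads very differently from a P/S of 20 with flat cash flow, which is why these metrics are best viewed together rather than in isolation.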
2. Using technical analysis for trends. The rise of quantitative trading and algorithmic strategies means technical analysis is an important supplementary lens for active traders, though not a substitute for deep research. Traders are looking at markers such as:
Volatility metrics: Identifying periods where momentum or reversals are likely.
Advanced charting: Using visual tools to spot levels of investor support or resistance.
Options signals: Changes in implied volatility and put-to-call ratios, for both the indexes and individual stocks.
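Two of the simpler markers above, rolling volatility and the put-call ratio, can be sketched in a few lines. This is an illustration using invented prices and volumes, not a trading signal; real implementations annualise the volatility figure and work over much longer price series.

```python
import statistics

def rolling_volatility(prices: list[float], window: int = 5) -> list[float]:
    """Standard deviation of daily returns over a sliding window."""
    returns = [(b - a) / a for a, b in zip(prices, prices[1:])]
    return [statistics.stdev(returns[i:i + window])
            for i in range(len(returns) - window + 1)]

def put_call_ratio(put_volume: float, call_volume: float) -> float:
    """Ratio of put volume to call volume; above 1 skews bearish."""
    return put_volume / call_volume

prices = [100, 102, 101, 105, 104, 108, 107, 111]  # invented daily closes
print(rolling_volatility(prices))     # one stdev per 5-day window of returns
print(put_call_ratio(1_200, 1_500))   # 0.8
```

A rising rolling volatility flags the regime changes mentioned above, while a put-call ratio drifting above 1 suggests hedging or bearish positioning is building.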
3. Leveraging quantitative and AI tools. The next generation of evaluation involves AI and big data. These tools filter vast amounts of information from financial reports, market sentiment, news, and web analytics. Some of my preferred research platforms and AI-driven tools include:
Tiger Trade App's AI-powered chatbot TigerAI: Its features allow investors to research stocks, summarise key insights from earnings calls and releases, and extract pertinent company news and sentiment analysis based on the nature of the questions asked, all within seconds. TigerAI can be accessed through the Tiger Trade app, so everything is in one place.
Perplexity: This AI-powered research co-pilot synthesises web results and provides live monitoring, trend analysis, and Q&A for users.
ChatGPT: The biggest name brand in LLMs to date, this is conversational AI for brainstorming and quick synthesis, and a good tool to test investment ideas and pull data summaries.
AlphaSense: Offers AI search for business/financial filings and news; users can deep dive for company and sector insights.
Google Gemini: This is multimodal AI (text and images) for competitive research; users can scan public information fast.
4. Developing valuation frameworks. Valuing tech stocks is both an art and a science. Of course, getting it right or wrong can make a big difference in ROI terms for traders and clients. Key techniques one can use:
Discounted cash flow (DCF): Projects future value but is highly sensitive to assumptions.
Relative valuation: Compares companies' multiples within the sector.
Premium for growth: Sometimes justified if a company is truly dominant or highly innovative.
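A bare-bones DCF makes the sensitivity point concrete. The sketch below discounts five years of hypothetical free cash flows and adds a Gordon-growth terminal value; moving the discount rate by a single percentage point shifts the valuation materially. All inputs are invented for illustration.

```python
def dcf_value(cash_flows: list[float], discount_rate: float,
              terminal_growth: float) -> float:
    """Present value of projected cash flows plus a Gordon-growth terminal value."""
    pv = sum(cf / (1 + discount_rate) ** t
             for t, cf in enumerate(cash_flows, start=1))
    terminal = cash_flows[-1] * (1 + terminal_growth) / (discount_rate - terminal_growth)
    return pv + terminal / (1 + discount_rate) ** len(cash_flows)

flows = [300, 360, 430, 520, 620]  # hypothetical FCF projections ($m)
print(f"Value at 9% discount rate:  {dcf_value(flows, 0.09, 0.03):,.0f}")
print(f"Value at 10% discount rate: {dcf_value(flows, 0.10, 0.03):,.0f}")
# A one-point change in the discount rate moves the valuation noticeably,
# which is why DCF outputs should be treated as ranges, not point estimates.
```

This sensitivity is exactly why relative valuation against sector peers is often run alongside a DCF as a sanity check.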
5. Making qualitative assessments. Without context, numbers can be misleading – and in an age of massive data volume, investors need to figure out which context is actually relevant. One can evaluate:
Leadership quality: Track record, vision, and ability to execute.
Innovation pipeline: New products/services and IP protection.
Industry ecosystem position: Is the business a vital cog in a rising sector like AI, cloud computing, or cybersecurity?
ESG practices: Environmental, social, and governance disclosures, especially around climate responsibility, are highly relevant. For companies involved in AI, the conversation is becoming increasingly heated around the vast energy consumption of data centres.
6. Finding practical uses for AI in research. AI can change how investors handle intense periods such as earnings season. It can be used in reporting and analysis in a few ways:
Hourly news alerts: Using Perplexity or AlphaSense for customisable updates on specific tech companies.
Rapid data summarisation: With ChatGPT, one can parse lengthy earnings calls or filings quickly.
Scenario analysis: Running "what if" scenarios via AI, such as how a new product might reshape a market, the expected effects of tariffs on sector X or Y, or what headwinds a new regulation could create.
Monitoring social trends: AI tools aggregate social media sentiment and web traffic, offering another layer of insight into a company's traction.
Idea validation: When considering a trend or hypothesis, cross-examine it using multiple AI platforms to find the weak points.
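"What if" thinking does not require an AI platform to get started; a toy Monte Carlo illustrates the mindset. The sketch below compounds random daily returns under a base case and a stressed case (lower drift, higher volatility), producing a range of outcomes rather than a single point estimate. All parameters are invented for illustration, and this is not a forecasting model.

```python
import random
import statistics

def simulate_final_prices(start: float, daily_mu: float, daily_sigma: float,
                          days: int, n_paths: int, seed: int = 42) -> list[float]:
    """Toy Monte Carlo: compound random daily returns over each path."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n_paths):
        price = start
        for _ in range(days):
            price *= 1 + rng.gauss(daily_mu, daily_sigma)
        finals.append(price)
    return finals

# Base case vs a hypothetical "tariff headwind" scenario over ~3 months.
base = simulate_final_prices(100.0, 0.0008, 0.02, days=60, n_paths=2_000)
stress = simulate_final_prices(100.0, 0.0002, 0.03, days=60, n_paths=2_000)
print(f"Base median:   {statistics.median(base):.1f}")
print(f"Stress median: {statistics.median(stress):.1f}")
```

Comparing the two distributions, rather than two single numbers, is the same habit the AI-driven scenario prompts above are meant to encourage.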
7. Remembering the risks. AI can give the impression that there is a final right answer to everything, but any tool can only digest the data it is designed to process, and like any tool, it is only as good as the person using it. Given AI's complexity and known pitfalls (such as "hallucinations"), the risk of relying on its output rests with the user; it is not a substitute for professional advice. AI can be, and often is, wrong in its analysis, so every finding needs to be verified and double-checked, and only experienced users should rely on it for financial analysis.
No research method guarantees anything, and the risks include:
Extreme volatility: Tech stocks can swing wildly, and AI cannot tell you for sure when or by how much; it can be a predictor, but not a perfect one.
Disruption risk: Share market leaders today can lag tomorrow if innovation slows.
Overvaluation: High hopes can lead to painful corrections, which can be sudden and severe.
Regulatory changes: New rules on data or antitrust can shift the landscape overnight.
Behavioural bias: Even seasoned investors can be swayed by hype or groupthink.
There are investors who think the current Golden Age of tech is another bubble and the only question is when it will burst, not if.
Sources: https://www.reuters.com/business/autos-transportation/us-stock-market-concentration-risks-come-fore-megacaps-report-earnings-2025-07-23/
Disclaimer:
This article is presented by Tiger Fintech (NZ) Limited and is for information only. It does not constitute financial advice. Investing involves risk. You should always seek professional financial advice before making any investment decisions.
