
Latest news with #JohnMcCarthy

Is WHIAX a Strong Bond Fund Right Now?

Yahoo

2 days ago

  • Business
  • Yahoo


Investors hoping to find a High Yield - Bonds fund might consider looking past Macquarie High Income A (WHIAX). WHIAX has a Zacks Mutual Fund Rank of 5 (Strong Sell), which is based on various forecasting factors like size, cost, and past performance.

WHIAX is part of the High Yield - Bonds category, a segment with many possible options. Often referred to as "junk" bonds, high-yield bond funds sit below investment grade, meaning they carry a higher default risk than their investment-grade peers. One advantage to junk bonds, however, is that they generally pay out higher yields while posing similar interest rate risk to their investment-grade counterparts.

Macquarie is responsible for WHIAX, and the company is based in Philadelphia, PA. Macquarie High Income A debuted in September 2003. Since then, WHIAX has accumulated assets of about $1.17 billion, according to the most recently available information. The fund's current manager, John McCarthy, has been in charge since November 2021.

Investors naturally seek funds with strong performance. WHIAX has a 5-year annualized total return of 4.79%, placing it in the bottom third among its category peers. Investors who prefer shorter time frames can look at its 3-year annualized total return of 4.66%, which also places it in the bottom third over that period. Note that the product's returns may not reflect all of its expenses; any fees not reflected would lower the returns. Total returns do not reflect the fund's [%] sales charge; if sales charges were included, total returns would have been lower.

When looking at a fund's performance, it is also important to note the standard deviation of the returns: the lower the standard deviation, the less volatility the fund experiences. The standard deviation of WHIAX over the past three years is 8.65%, compared to the category average of 12.64%. Over the past five years it is 8.02%, compared to the category average of 12.33%. This makes the fund less volatile than its peers over the past half-decade.

Modified duration measures a bond's interest rate sensitivity and is a good way to judge how fixed income securities will respond in a shifting rate environment; if you believe interest rates will rise, it is an important factor to consider. WHIAX has a modified duration of 3.02, which suggests that the fund will decline about 3.02% for every hundred-basis-point increase in interest rates.

Income is often a big reason for purchasing a fixed income security, so it is important to consider the fund's average coupon, which reflects the fund's average payout in a given year. For example, this fund's average coupon of 7.41% means that a $10,000 investment should result in a yearly payout of roughly $741. While a higher coupon is good when you want a strong level of current income, it can present reinvestment risk if rates are lower in the future than they were when the bond was purchased.

Investors also need to consider risk relative to broad benchmarks, as income is only one part of the bond picture. WHIAX carries a beta of 0.03, meaning the fund is less volatile than a broad market index of fixed income securities. It also has a positive alpha of 5.05, which measures performance on a risk-adjusted basis.
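As a quick illustration of the duration and coupon arithmetic described above, here is a minimal Python sketch of the two back-of-the-envelope calculations. The figures (modified duration of 3.02, average coupon of 7.41%, a $10,000 investment) come from the article itself; the function names are purely illustrative and not part of any Zacks or fund-data API.

```python
# Minimal sketch of the duration and coupon arithmetic discussed above.
# Figures come from the article; function names are illustrative only.

def price_change_from_duration(modified_duration: float, rate_change_bps: float) -> float:
    """Approximate % price change for a rate move, using the modified-duration rule of thumb."""
    return -modified_duration * (rate_change_bps / 100.0)

def annual_coupon_income(average_coupon_pct: float, investment: float) -> float:
    """Approximate yearly payout implied by the fund's average coupon."""
    return investment * average_coupon_pct / 100.0

# A 100-basis-point rise in rates with a modified duration of 3.02:
print(price_change_from_duration(3.02, 100))   # -3.02  (roughly a 3.02% decline)

# A 7.41% average coupon on a $10,000 investment:
print(annual_coupon_income(7.41, 10_000))      # 741.0  (dollars per year)
```

This is only the standard first-order duration approximation; actual fund returns also depend on credit spreads, convexity, and fees.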
Costs are increasingly important for mutual fund investing, particularly as competition heats up in this market. All things being equal, a lower-cost product will outperform its otherwise identical counterpart, so taking a closer look at these metrics is key for investors. In terms of fees, WHIAX is a load fund. It has an expense ratio of 0.89%, compared to the category average of 0.93%, so from a cost perspective WHIAX is actually cheaper than its peers. The minimum initial investment for the product is $1,000, and each subsequent investment needs to be at least $100. Fees charged by investment advisors have not been taken into consideration; returns would be lower if those were included.

Overall, even with its lower fees and average downside risk, Macquarie High Income A (WHIAX) has delivered comparatively weak performance and carries a low Zacks Mutual Fund Rank, and therefore looks like a somewhat weak choice for investors right now. Don't stop here for your research on High Yield - Bonds funds; there is plenty more on the Zacks site to help you find the best possible fund for your portfolio, and you can compare WHIAX to its peers for additional information. Zacks provides a full suite of tools to help you analyze your portfolio - both funds and stocks - in the most efficient way possible. This article was originally published on Zacks Investment Research.

AI and its future: beyond the data-driven era

Hans India

22-06-2025

  • Business
  • Hans India


Artificial intelligence is the science of making machines do things that would require intelligence if done by humans - so said John McCarthy, who coined the term 'artificial intelligence' in 1955 and is considered the father of AI.

Artificial intelligence is the buzzword resonating across boardrooms, classrooms, and coffee shops these days. It is everywhere. From chatbots handling customer service to algorithms curating social media feeds, AI has become the in-thing of our time. Yet despite the widespread adoption and breathless headlines, we're still in the earliest stages of what AI can become.

The current reality: data-driven intelligence

Today's AI systems, impressive as they may seem, operate on a fundamental principle: processing vast amounts of data to recognize patterns and generate responses. These Large Language Models (LLMs) can write poetry, code software, and answer complex questions, but they're essentially sophisticated pattern-matching engines drawing from enormous datasets. Frankly speaking, what we're experiencing now is just the tip of the iceberg; we're still in the fetal stage of artificial intelligence evolution.

The current data-driven approach has undeniably been disruptive. Industries from healthcare to finance have scrambled to integrate AI tools, leading to the ubiquitous presence of 'AI-powered' solutions. However, calling these systems true artificial intelligence may be premature - they lack the fundamental cognitive abilities that define genuine intelligence.

The next frontier: Artificial General Intelligence

The next phase in AI evolution promises something far more sophisticated: Artificial General Intelligence (AGI). Unlike current systems that excel in narrow domains, AGI will possess the ability to understand, learn, and apply intelligence across a broad range of tasks - much like human cognitive flexibility. The key differentiator lies in cognition. Where today's AI relies on statistical analysis of training data, AGI systems will develop the capacity for genuine reasoning and decision-making. This cognitive leap represents a fundamental shift from pattern recognition to actual thinking. AGI won't just process information faster or access more data - it will understand context, make inferences, and adapt to entirely new situations without requiring additional training. This represents a qualitative, not just quantitative, advancement in machine intelligence.

The ultimate goal: Absolute Intelligence

Beyond AGI lies an even more ambitious target: Absolute Intelligence. This final phase envisions AI systems with fully developed cognitive abilities - machines that can think, reason, and make decisions with the same depth and nuance as human consciousness, potentially surpassing human intellectual capabilities. Absolute Intelligence would mark the point where artificial systems achieve genuine understanding rather than sophisticated mimicry. These systems would possess creativity, intuition, and the ability to grapple with abstract concepts in ways that current AI cannot.

Small Language Models: The Future Architecture

Contrary to the current trend towards ever-larger models, the future may belong to Small Language Models (SLMs). These more efficient, specialized systems could prove more practical and powerful than their data-hungry predecessors.
Small Language Models offer several advantages over massive LLMs: reduced computational requirements, faster processing, greater customization for specific tasks, and the ability to run locally rather than requiring cloud infrastructure. As AI becomes more integrated into daily life, these characteristics will prove increasingly valuable. The shift toward SLMs reflects a maturation of the field - moving from brute-force approaches that require enormous resources toward elegant, efficient solutions that deliver superior performance with less overhead.

The Way Forward

Rather than dwelling on dystopian scenarios, the AI revolution presents an opportunity to thoughtfully shape the next decade of technological development. The progression from today's data-driven systems through AGI to Absolute Intelligence won't happen overnight. The key lies in recognizing that we're not approaching an endpoint but rather embarking on a carefully planned journey. Each phase of AI development builds upon the previous one, creating opportunities to refine our approach, establish ethical frameworks, and ensure that artificial intelligence helps humans.

As we stand at this inflection point, the question isn't whether AI will transform our world - it's how we'll guide that transformation. The next ten years will determine whether we harness these emerging capabilities to solve pressing global challenges, enhance human potential, and create a more prosperous future for all. The age of true artificial intelligence is still ahead of us. What we're witnessing today is merely the opening chapter of a much larger story - one that we have the power to write thoughtfully and purposefully. All said and done, the world needs responsible AI that can enhance our quality of life in all spheres and spaces. That's the bottom line.

(Krishna Kumar is a technology explorer and strategist based in Austin, Texas, US. Rakshitha Reddy is an AI developer based in Atlanta, US.)

I tested Perplexity vs Google AI overview with 7 prompts — the results were shocking

Tom's Guide

21-06-2025

  • Business
  • Tom's Guide


Search is undergoing a profound change. For decades, Google has dominated web search, with some 90% of all searches funneled through the massive Google machine. But with the arrival of artificial intelligence, things are starting to change, and seriously so. Not only are people increasingly using AI products like ChatGPT as their default search tool, but companies like Perplexity are also building businesses around search services. The idea is to combine the power of AI analysis with the huge amount of conventional search data already available online.

Google is fighting back. The company recently released an advanced search function called AI Overviews, which aims to bridge the two disciplines and deliver the kind of informed search results the market demands - a new kind of search on steroids. So how do the two approaches compare in everyday use? We take a look at Google's new AI Overviews and compare the results to Perplexity AI, to see which gives a better bang for the buck.

Prompt: Summarize the key contributions of John McCarthy, Geoffrey Hinton and Noam Shazeer to the development of artificial intelligence.

We thought we'd start with something close to home - a look at the architects of AI from the past. First impressions are that Google delivers a competent but fairly traditional results page. Its 238-word answer covers all the basic points and gives a good overview, as you'd expect. Perplexity delivers over 400 words, but it's more than just the quantity that impresses: the results are laid out in a much more engaging manner, and the user is encouraged to explore additional information in a variety of ways. They can explore related data, look directly at the sources, and even regenerate the results to get a different perspective. Where Google seems to do the bare minimum, Perplexity really adds user value. Google 4/10, Perplexity 7/10

Prompt: Create a 3-day itinerary for a first-time visitor to Tokyo on a modest budget. The user is interested in Japanese culture and food, but wants to avoid tourist traps.

This is a real kicker, a stark example of the old versus the new. Google completely fails to deliver any meaningful response and instead retreats to a standard Google search; the answer merely features a selection of third-party websites offering tour advice. This is obviously beneficial to Google, as it will no doubt allow it to earn ad revenue. Perplexity, on the other hand, delivers a glorious 1,000 words of real down-to-earth itinerary, including gorgeous photos, maps, and itemized costings - more than enough for the user to get a great idea of the proposed experience. Google 2/10, Perplexity 9/10

Prompt: What is the technology behind noise cancelling headphones?

Once again we can see the difference between traditional search results and new-style AI analysis, although in this case the gap is not so great. Google's results are very credible, with 186 words of explanation along with a useful YouTube video. Perplexity, however, takes it to a more advanced level. The explanation covers roughly the same ground, but the prose is much more accessible to a layperson: instead of using the word 'inverse', for example, the app uses simpler English to explain how sound is cancelled.
It's a subtle but important use of 500 words to achieve a more understandable answer. Google 6/10, Perplexity 8/10

Prompt: Explain the 'double-dig' method of garden bed preparation and contrast it to no-till farming.

This is quite an obscure test, involving little-known agricultural techniques, but for gardeners it's a very important topic. Google's response is workmanlike and informative, using 260 words to deliver a good answer to the question. Reddit and the Royal Horticultural Society (RHS) are cited as sources, which adds authority to the answer. Unfortunately for Google, Perplexity once again matches and exceeds it in response quality. The RHS and Reddit are also mentioned, as is YouTube. But two things really make this answer stand out: first, a great table that explains the differences at a glance and, most importantly, a conclusion that gives a clear indication of why no-dig is increasingly considered the better solution. Google 7/10, Perplexity 8/10

Prompt: What are the primary compliance challenges for a US-based tech startup under the EU's AI Act?

This request pushes search to the limits of topicality and obtuse legal documents. Surprisingly, Google's results are very lackluster: the search engine offers up a 57-word March 2025 'featured snippet' from an obscure third-party publication, and that's it. It shows no interest in digging deeper into the topic for the user. Yet again, Perplexity tries harder. We're talking 600 words set in a clean bullet-point format, running through the main challenges and issues surrounding compliance, along with nine easily accessible sources and a handful of related subject-matter links. Masterful. Google 2/10, Perplexity 8/10

Prompt: What does the history and potential future of blockchain and cryptocurrency look like?

This prompt clearly demonstrates why Google's AI Overview is unfortunately not really ready for prime time. The original prompt was something like 'explain cryptocurrency to a fifth grader', but Google couldn't handle it and served up a lame Quora snippet. It was only when we changed the prompt to this one that AI Overview kicked into action and delivered a reasonable result - it's obvious there's not that much AI involved in AI Overview yet. Interestingly, though, this was probably Google's best result: 400 words of densely packed information covering the topic clearly and succinctly. Perplexity was also good, providing 600 words and a nice table. Not much to choose between the two, then. Google 8/10, Perplexity 8/10

Prompt: What kind of cat is this?

For the final prompt we thought we'd go with something a little more exotic. Both search platforms support image upload, so what better than to upload a friendly-looking cat to get some more information? Google takes the uploaded image as a prompt to display a page full of similar images, which aligns with its original image-matching search. A re-prompt of 'what kind of cat is this' then delivered a very short four-line answer which, although correct, was not super helpful. Perplexity's response was 246 words, with bullet points covering coat pattern, fur, and the breed - even a fun fact (calico cats are almost always female). Engaging and informative. Google 3/10, Perplexity 7/10

The king is dead, long live the king? Based on this showing, the rumors could indeed be true: the mighty Google may in fact be on the way to losing its grip on the world's search traffic. Is this the end of an era? Time will tell.
However, if there's one thing we've learned over the years, it's never to discount the ability of the Google empire to strike back. The company has the compute power, the data, and the legendary AI pedigree to surprise us all.

Test notes: We did not use any of the advanced Perplexity functions, but kept to the basic default service, which makes the results even more impressive. It's also important to recognize that AI can get things wrong. Both services feature disclaimers stressing that users should not assume AI search responses are factually correct. This is an early technology finding its feet, and users should take care.

The deflationary unit economics of AI

Time of India

17-06-2025

  • Business
  • Time of India


One of the big discussion themes is how AI will scale up and impact us all. Is it a bane or a boon? It's instructive to look at the unit economics: as AI systems scale, the cost of producing additional outputs - whether content, decisions, insights, or services - drops sharply, often approaching zero. The rapid rollout of AI tools by tech giants such as Meta, Google, TikTok, and Amazon aims to replace much of the work traditionally carried out by ad agencies.

Artificial intelligence was hardly a recognized concept in 1956, when leading computer scientists gathered at Dartmouth College for a summer conference. The term had just been coined by John McCarthy in the event's funding proposal - a visionary attempt to explore how machines might one day use language, solve problems like humans, and even improve themselves. The group's bold founding belief was that 'any feature of human intelligence could, in principle, be so precisely described that a machine can be made to simulate it.' That is now a reality. Or is it?

Meta, which owns Facebook and Instagram, is offering AI-powered, automated, scalable ad creation and optimization with its 'Infinite Creative'. It's part of Meta's strategy to make advertising more efficient and performance-driven by combining machine learning, personalization, and automation. Amazon Ads now offers generative AI tools that enable brands to create their own ads and dynamically allocate budgets in real time across both linear TV and streaming platforms.

This has major economic and societal implications. Here's an expansion:

A) Near-zero marginal cost. Once an AI model is trained (often at high upfront cost), it can generate results - text, images, predictions, and code - at almost no additional cost per unit (see the rough sketch at the end of this article). AI doesn't tire, forget, demand higher wages, or require physical resources to scale its output.

B) Productivity increases without proportional cost. AI tools allow individuals and companies to do more with less. A single employee with AI assistance can produce the output of a team, reducing the need for large headcounts and pushing prices down.

C) Market pressure to lower prices. As more businesses adopt AI, competition forces everyone to reduce prices. Advertising, media buying, design, education, legal services, and other knowledge-based industries are vulnerable.

The question remains about the value of differentiation, trust, curation, and the authenticity of human experience. The system is efficient. The output is infinite. But will a rolling tear on a cheek still produce poetry?
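To make the near-zero-marginal-cost point concrete, here is a minimal Python sketch of how the average all-in cost per AI-generated output falls toward the marginal serving cost as volume grows. The training and serving costs used here are purely hypothetical round numbers, not figures from any actual model or vendor.

```python
# Toy illustration of near-zero marginal cost: once the fixed training cost is
# sunk, the average cost per output approaches the (tiny) marginal serving cost
# as volume grows. All figures are hypothetical.

FIXED_TRAINING_COST = 50_000_000.0   # one-off model training cost in dollars (hypothetical)
MARGINAL_COST_PER_OUTPUT = 0.0005    # serving cost per generated output in dollars (hypothetical)

def average_cost_per_output(n_outputs: int) -> float:
    """Average all-in cost per output after n_outputs generations."""
    return FIXED_TRAINING_COST / n_outputs + MARGINAL_COST_PER_OUTPUT

for n in (1_000_000, 100_000_000, 10_000_000_000):
    print(f"{n:>14,} outputs -> ${average_cost_per_output(n):,.4f} per output")

# Output:
#      1,000,000 outputs -> $50.0005 per output
#    100,000,000 outputs -> $0.5005 per output
# 10,000,000,000 outputs -> $0.0055 per output (approaching the marginal cost)
```

The point of the sketch is simply that the fixed cost gets amortized away, which is what drives the deflationary pricing pressure the article describes.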

AI's Magic Cycle

Forbes

18-05-2025

  • Science
  • Forbes


Here's some of what innovators are thinking about with AI research today.

When people talk about the timeline of artificial intelligence, many of them start in the 21st century. That's forgivable if you don't know a lot about the history of how this technology evolved; it's only in this new millennium that most people around the world got a glimpse of what the future holds with these powerful LLM systems and neural networks. But for people who have been paying attention and understand the history of AI, it really goes back to the 1950s. In 1956, a number of notable computer scientists and mathematicians met at Dartmouth to discuss the evolution of intelligent computation systems.

You could argue that the idea of artificial intelligence goes back much further than that. When Charles Babbage designed his Analytical Engine in the 19th century, even rote computation was not something machines could do. But when the mechanical became digital, and data became more portable in computation systems, we started to get those kinds of calculations and computing done in an automated way.

Now there's the question of why artificial intelligence didn't arrive in the 1950s, or the 1960s, or the 1970s. 'The term 'Artificial Intelligence' itself was introduced by John McCarthy as the main vision and ambition driving research defined moving forward,' writes Alex Mitchell at Expert Beacon. '65 years later, that pursuit remains ongoing.' What it comes down to, I think most experts would agree, is that we didn't have the hardware. In other words, you can't build human-like systems when your input/output medium is magnetic tape. But in the 1990s the era of big data arrived, and then the cloud revolution happened; once those were in place, we had all of the systems we needed to host LLM intelligence.

Just to clarify what we're talking about here: most of the LLMs we use work on next-word or next-token prediction. They're not sentient, per se, but they use elegant and complex data sets to mimic intelligence. And to do that, they need big systems. That's why colossal data centers are being built right now, and why they require so much energy, so much cooling, and so on.

At an Imagination in Action event this April, I talked to Yossi Mathias, a 19-year Google veteran who heads research at the company, about research there and how it works. He talked about a cycle of research motivation that involves publishing, vetting, and applying results back for impact. He also spoke to the idea that AI goes back farther than most people think. 'It was always there,' he said, invoking the Dartmouth conference and what it represented. 'Over the years, the definition of AI has shifted and changed. Some aspects are kind of steady. Some of them are kind of evolving.'

Then he characterized the work of a researcher, comparing motives for groundbreaking work. 'We're curious as scientists who are looking into research questions,' he said, 'but quite often, it's great to have the right motivation to do that, which is to really solve an important problem.'

'Healthcare, education, climate crisis,' he continued. 'These are areas where making that progress, scientific progress ... actually leads into impact, that is really impacting society and the climate.
So each of those I find extremely rewarding, not only in the intellectual curiosity of actually addressing them, but then taking that and applying it back to actually get into the impact that they'd like to get.'

Ownership of a process, he suggested, is important too. 'An important aspect of talking about the nature of research at Google is that we are not seeing ourselves as a place where we're looking into research results, and then throwing them off the fence for somebody else to pick up,' he said. 'The beauty is that this magic cycle is really part of what we're doing.'

He talked about teams looking at things like flood prediction, where he noted the potential for future advances. We also briefly went over the issue of quantum computing, where Mathias suggested there's an important milestone ahead. 'We can actually reduce the quantum error, which is one of the hurdles, technological hurdles,' he said. 'So we see good progress, obviously, on our team.' One thing Mathias noted was the work of Peter Shor, whose algorithm, he suggested, demonstrated some of the capabilities that quantum research could usher in. 'My personal prediction is that as we're going to get even closer to quantum computers that work, we're going to see many more use cases that we're not even envisioning today,' he noted.

Later, Mathias spoke about his view that AI should be assistive to humans, not a replacement for human involvement. 'The fun part is really to come together, to brainstorm, to come up with ideas on things that we never anticipated coming up with, and to try out various stuff,' he said. Explaining how AI can fill certain gaps in the scientific process, he described a quick cycle by which, by the time a paper is published on a new concept, that concept can already be in place in, say, a medical office. 'The one area that I expect actually AI to do much more (in) is really (in) helping our doctors and nurses and healthcare workers,' Mathias said.

I was impressed by the scope of what people have done, at Google and elsewhere. Whether it's education or healthcare or anything else, we're likely to see quick innovation and applications of these technologies to our lives. And that's what the magic cycle is all about.
