
MCP decoded: How Anthropic's protocol is enabling smoother AI interactions
Model Context Protocol (MCP), developed by Claude-maker Anthropic, was first introduced in November 2024. While it did not make a splash last year, over the last few months MCP has been adopted by developers, platforms, and companies. Swathi Moorthy decodes what MCP is, why it is important, and the hype behind it.
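In practice, MCP is an open protocol that lets AI assistants such as Claude connect to external tools and data sources through lightweight servers that developers stand up alongside their own systems. The sketch below is a minimal illustration only, assuming the official MCP Python SDK's FastMCP interface (`pip install mcp`); the server name and the example tool and resource are hypothetical.

```python
# Minimal MCP server sketch, assuming the official MCP Python SDK (`pip install mcp`).
# The server name, tool, and resource below are hypothetical illustrations.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")

@mcp.tool()
def word_count(text: str) -> int:
    """Count the words in a piece of text (a stand-in for any real data source or action)."""
    return len(text.split())

@mcp.resource("greeting://{name}")
def greeting(name: str) -> str:
    """Expose a simple read-only resource the assistant can fetch."""
    return f"Hello, {name}!"

if __name__ == "__main__":
    # Runs over stdio by default, so an MCP-aware client (e.g. Claude Desktop) can launch it
    # and call the tool or read the resource during a conversation.
    mcp.run()
```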
Related Articles


Economic Times, 3 days ago
Tech's diversity crisis is baking bias into AI systems
As an Afro-Latina woman with degrees in computer and electrical engineering, Maya De Los Santos hopes to buck a trend by forging a career in AI, a field dominated by white men. AI needs her, experts and observers say. Built-in viewpoints and bias, unintentionally imbued by its creators, can make the fast-growing digital tool risky as it is used to make significant decisions in areas such as hiring processes, health care, finance and law enforcement, they warn.

"I'm interested in a career in AI because I want to ensure that marginalized communities are protected from and informed on the dangers and risks of AI and also understand how they can benefit from it," said De Los Santos, a first-generation U.S. college student. "This unfairness and prejudice that exists in society is being replicated in the AI brought into very high stakes scenarios and environment, and it's being trusted, without more critical thinking."

Women represent 26% of the AI workforce, according to a UNESCO report, and men hold 80% of tenured faculty positions at university AI departments globally. Blacks and Hispanics also are underrepresented in the AI workforce, a 2022 census data analysis by Georgetown University showed. Among AI technical occupations, Hispanics held about 9% of jobs, compared with holding more than 18% of U.S. jobs overall, it said. Black workers held about 8% of the technical AI jobs, compared with holding nearly 12% of U.S. jobs overall, it said.

AI bias

De Los Santos will soon begin a PhD program in human computer interaction at Brown University in Providence, Rhode Island. She said she wants to learn not only how to educate marginalized communities on AI technology but to understand privacy issues and AI bias, also called algorithm or machine learning bias, which produces results that reflect and perpetuate societal biases.

Bias has unintentionally seeped into some AI systems as the software engineers creating problem-solving techniques, for example, integrate their own perspectives and often-limited data sets. Amazon scrapped an AI recruiting tool when it found it was selecting resumes favoring men over women. The system had been trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of a preponderance of men across the industry, and the system in effect taught itself that male candidates were preferable.

"When people from a broader range of life experiences, identities and backgrounds help shape AI, they're more likely to identify different needs, ask different questions and apply AI in new ways," said Tess Posner, founding CEO of AI4ALL, a non-profit working to develop an inclusive pipeline of AI professionals. "Inclusion makes the solutions created by AI more relevant to more people," said Posner.

PROMOTING DIVERSITY

AI4ALL counts De Los Santos as one of the 7,500 students it has helped navigate the barriers to getting a job in AI since 2015. By targeting historically underrepresented groups, the non-profit aims to diversify the AI workforce. AI engineer jobs are among the fastest growing positions globally and the fastest growing overall in the U.S. and the United Kingdom, according to LinkedIn. Posner said promoting diversity means starting early in education by expanding access to computer science classes for children. About 60% of public high schools offer such classes, with Blacks, Hispanics and Native Americans less likely to have access.
Ensuring that students from underrepresented groups know about AI as a potential career, creating internships and aligning them with mentors is critical, she said.

Efforts to make AI more representative of American society are colliding with President Donald Trump's backlash against Diversity, Equity & Inclusion (DEI) programs at the federal government, higher education and corporate levels. DEI offices and programs in the U.S. government have been terminated and federal contractors banned from using affirmative action in hiring. Companies from Goldman Sachs to PepsiCo have halted or cut back diversity programs.

Safiya Noble, a professor at the University of California Los Angeles and founder of the Center on Resilience & Digital Justice, said she worries the government's attack on DEI will undermine efforts to create opportunities in AI for marginalized groups. "One of the ways to repress any type of progress on civil rights is to make the allegation that tech and social media companies have been too available to the messages of civil rights and human rights," said Noble. "You see the evidence with their backlash against movements like Black Lives Matter and allegations of anti-conservative bias," she said.

Globally, from 2021 to 2024, UNESCO says the number of women working in AI increased by just 4 percent. While progress may be slow, Posner said she is optimistic. "There's been a lot of commitment to these values of inclusion," she said. "I don't think that's changed, even if as a society, we are wrestling with what inclusion really means and how to do that across the board."


Mint, 3 days ago
EU Rolls Out AI Code With Broad Copyright, Transparency Rules
(Bloomberg) -- The European Union published a code of practice to help companies follow its landmark AI Act; the code includes copyright protections for creators and transparency requirements for advanced models. It will require developers to provide up-to-date documentation describing their AI's features to regulators and third parties looking to integrate it in their own products, the European Commission said Thursday. Companies also will be banned from training AI on pirated materials and must respect requests from writers and artists to keep copyrighted work out of datasets. If AI produces material that infringes copyright rules, the code of practice will require companies to have a process in place to address it.

The code of practice is voluntary and aims to help companies establish internal mechanisms for implementing the AI law. The regulation, which is going into force on a staggered timetable, establishes curbs on AI in general-purpose and high-risk fields and restricts some applications. Rules affecting 'general purpose AI' like OpenAI's ChatGPT or Anthropic's Claude will apply starting next month. Breaching the AI Act can carry a fine of as much as 7% of a company's annual sales, or 3% for companies developing advanced AI models.

The code, which still needs final sign-off from the commission and EU member states, has been controversial and triggered a backlash from some technology companies, including Meta Platforms Inc. and Alphabet Inc. They complained that earlier drafts went beyond the bounds of the AI Act and created a new set of onerous rules. This month, European companies including ASML Holding NV, Airbus SE and Mistral AI also asked the commission to suspend the AI Act's implementation for two years in an open letter calling for a more 'innovation-friendly regulatory approach.' The commission, which missed an initial May deadline to publish the code of practice, has so far declined to postpone implementation.

The code was drafted under the guidance of officials from the commission, the EU's executive branch, which organized working groups composed of representatives from AI labs, technology companies, academia and digital rights organizations. The commission will only start directly overseeing the AI Act's application in August 2026; until then, enforcement will be in the hands of national courts, which may have less specific technical expertise. Signing the code of practice will give companies 'increased legal certainty,' the commission has said.
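The "up-to-date documentation" obligation amounts to keeping a structured model record current for regulators and downstream integrators. Below is a rough sketch of what such a record might contain; the `ModelDocumentation` class and its field names are hypothetical illustrations, not anything prescribed by the code of practice itself.

```python
# Hypothetical sketch of a model-documentation record a general-purpose AI provider
# might keep up to date for regulators and integrators. All field names are illustrative.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelDocumentation:
    model_name: str
    provider: str
    last_updated: date
    capabilities: list[str] = field(default_factory=list)            # features described to regulators/integrators
    training_data_sources: list[str] = field(default_factory=list)   # datasets used, excluding pirated materials
    copyright_opt_outs_respected: bool = True                        # creators' requests to exclude copyrighted work
    infringement_contact: str = ""                                    # process/contact for copyright complaints

# Example record (purely illustrative values).
doc = ModelDocumentation(
    model_name="example-gpai-model",
    provider="Example AI Labs",
    last_updated=date(2025, 7, 10),
    capabilities=["text generation", "code assistance"],
    training_data_sources=["licensed corpora", "public-domain text"],
    infringement_contact="copyright@example.com",
)
```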


Time of India, 3 days ago
Zerodha CEO Nithin Kamath on future of investing and trading in a world of AI: ‘Tools like ChatGPT and Claude make it…'
Zerodha founder and CEO Nithin Kamath recently shared a post on microblogging platform X (formerly Twitter), where he discussed the future of investing and trading in the age of artificial intelligence (AI). In the post, Kamath wrote: 'Tools like ChatGPT and Claude make it clear this shift isn't an "if" but a "when." It might take a few years or a decade, but it's inevitable.' He continued, 'Human advisors will still have a role, mainly to help people stick to what these tools recommend.'

Kamath further said that brokers (like Zerodha) will be 'a set of "pipes" connecting users to exchanges and back-office systems'. 'The interfaces will mostly be built by users themselves,' he added. 'In a future where everything is automated, trust and infrastructure will be our only real moats,' Kamath concluded.

Here's the full text of Nithin Kamath's X post:

About MCP and the future of investing and trading in a world of AI:

I keep asking K (most likely Kailash Nadh, the Chief Technology Officer of Zerodha) about what all this progress in AI means for our business. It feels to me like we're at the very beginning of a massive shift in how financial services will work. At some point, I think all of it, from investing and trading to banking and payments, will happen through custom AI-powered apps built by users themselves using natural language instructions.

In that world, what's the role of a broker? Likely, we'll just be a set of "pipes" connecting users to exchanges and back-office systems. The interfaces will mostly be built by users themselves. The only way to stay relevant is to ensure we're the best pipe: fast, efficient, reliable, and invisible when it matters. That's why, over the years, K and the tech team have been obsessively making our systems faster, more scalable, and future-ready. Even if these improvements don't immediately change a customer's trading or reporting experience, we've chosen to fix every possible bottleneck today, not later.

Tools like ChatGPT and Claude make it clear this shift isn't an "if" but a "when." It might take a few years or a decade, but it's inevitable. Human advisors will still have a role, mainly to help people stick to what these tools recommend. As for how things will evolve, the answer is grey. No one knows. Our approach: stay curious, keep track of the trends, and act where it makes sense. For example, we've intentionally held back on enabling AI-driven order placement. In a future where everything is automated, trust and infrastructure will be our only real moats.
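Kamath's "pipes" framing maps naturally onto MCP: a broker exposes its back-office data through an MCP server, and the user's AI assistant builds the interface on top. The sketch below is an illustrative, read-only example using the same FastMCP interface assumed earlier; the `fetch_holdings` helper and its data are hypothetical stand-ins for a broker's authenticated API, and, mirroring Zerodha's stated choice, no order-placement tool is exposed.

```python
# Illustrative, read-only "broker as a pipe" MCP server sketch (assumes the MCP Python SDK).
# fetch_holdings() is a hypothetical stand-in for a call to a broker's back-office systems.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("broker-pipe")

def fetch_holdings() -> list[dict]:
    """Hypothetical placeholder for an authenticated broker/back-office API call."""
    return [
        {"symbol": "INFY", "quantity": 10, "avg_price": 1450.0},
        {"symbol": "TCS", "quantity": 5, "avg_price": 3900.0},
    ]

@mcp.tool()
def portfolio_summary() -> dict:
    """Read-only summary an AI assistant can request; no order placement is exposed."""
    holdings = fetch_holdings()
    invested = sum(h["quantity"] * h["avg_price"] for h in holdings)
    return {"holdings": holdings, "total_invested": invested}

if __name__ == "__main__":
    # stdio transport by default; the user's AI client supplies the interface on top.
    mcp.run()
```

In this arrangement the broker's job reduces to keeping the pipe fast and reliable, while the conversational interface, charts, or reports are assembled by whatever client the user points at the server.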