
Latest news with #dataScientists

Unlock the Power of Data Extraction with Gemini CLI and MCP Servers

Geeky Gadgets

01-07-2025



What if you could seamlessly integrate a powerful command-line tool with a server designed to handle complex data extraction workflows? Imagine automating the collection of structured data from platforms like LinkedIn or Amazon, all while maintaining precision, compliance, and efficiency. This is exactly what combining Gemini CLI with a Model Context Protocol (MCP) server offers. Whether you're a data scientist navigating intricate scraping scenarios or a business professional seeking actionable insights, this pairing unlocks a streamlined approach to managing and enhancing your data extraction processes. But as with any sophisticated system, the key lies in understanding how to configure and optimize these tools for maximum impact.

In this deep dive, Prompt Engineering explores the step-by-step process of integrating Gemini CLI with an MCP server, using Bright Data as a prime example. You'll uncover how to configure essential settings like API tokens and rate limits, use advanced features such as structured queries and browser APIs, and even troubleshoot common challenges to ensure uninterrupted workflows. Along the way, we'll highlight how this integration not only simplifies data collection but also enables you to extract meaningful, actionable insights from even the most complex datasets. By the end, you'll see how these tools can transform your approach to data extraction, opening up new possibilities for efficiency and scalability.

Configuring Gemini CLI for MCP Servers

To successfully integrate Gemini CLI with an MCP server, proper configuration is essential. The process begins with creating a configuration file, which serves as the central repository for your API tokens, zones, and rate limits. This configuration ensures smooth communication between Gemini CLI and the MCP server, optimizing performance and reliability.

• Generate API tokens: Obtain API tokens from your MCP server account to enable secure authentication.
• Set rate limits: Define rate limits to prevent overloading the server and maintain compliance with usage policies.
• Define zones: Specify zones to outline the scope and focus of your data extraction activities.

After completing these steps, restart Gemini CLI to apply the updated settings. This ensures the tool is fully prepared for your data extraction tasks, minimizing potential disruptions and maximizing efficiency. A minimal configuration sketch appears after the Bright Data overview below.

Maximizing Efficiency with the Bright Data MCP Server

Bright Data is a widely recognized MCP server, valued for its advanced web scraping capabilities and robust toolset. When integrated with Gemini CLI, it enables automated data collection from platforms such as LinkedIn, Amazon, and YouTube. Bright Data's specialized features are designed to address complex scraping scenarios, making it a powerful resource for extracting structured data.

• Web unlocker: Overcomes CAPTCHA challenges and other access restrictions, ensuring uninterrupted data collection.
• Browser APIs: Simulate user interactions, such as scrolling or clicking, to enable dynamic and comprehensive data extraction.

These tools are particularly effective for gathering structured data, such as product specifications, user profiles, or video metadata.
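To make the configuration step concrete, here is a minimal sketch of what registering a Bright Data MCP server for Gemini CLI might look like when generated from Python. The file location, the `mcpServers` key, the launch command, and the zone and rate-limit variable names are assumptions for illustration, not a documented schema; check the Gemini CLI and Bright Data documentation for the exact format your versions expect.

```python
# Hedged sketch: write a Gemini CLI-style MCP server entry for Bright Data.
# Paths, keys, command, and environment variable names are illustrative
# assumptions, not the official schema.
import json
import os
from pathlib import Path

settings_path = Path.home() / ".gemini" / "settings.json"  # assumed location

config = {
    "mcpServers": {
        "brightdata": {
            # Hypothetical launch command for the Bright Data MCP server.
            "command": "npx",
            "args": ["@brightdata/mcp"],
            "env": {
                # API token for secure authentication (kept out of source control).
                "API_TOKEN": os.environ.get("BRIGHT_DATA_API_TOKEN", "<your-token>"),
                # Illustrative zone and rate-limit settings.
                "WEB_UNLOCKER_ZONE": "my_unlocker_zone",
                "RATE_LIMIT": "100/1h",
            },
        }
    }
}

settings_path.parent.mkdir(parents=True, exist_ok=True)
settings_path.write_text(json.dumps(config, indent=2))
print(f"Wrote MCP server config to {settings_path}")
```

After writing a file along these lines, restarting Gemini CLI should pick up the new server entry, per the steps described above.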
By using Bright Data's capabilities, you can ensure that your extracted data is both organized and actionable, supporting a wide range of analytical and operational needs.

Video: Guide to Integrating Gemini CLI with Model Context Protocol (MCP) Servers (available on YouTube).

Core Features of MCP Servers

MCP servers, including Bright Data, offer a variety of features designed to optimize data extraction workflows. These features give users the flexibility and precision needed to handle diverse data collection tasks.

• Structured queries: Enable precise and targeted data requests, reducing unnecessary processing and improving accuracy.
• URL-based inputs: Focus on specific web pages or sections to streamline data collection efforts.
• Error-handling tools: Address common issues such as timeouts or access restrictions, ensuring reliable operations.
• Permission management: Maintain compliance with platform policies and legal requirements.

For example, structured queries can be used to extract detailed information from LinkedIn profiles or YouTube videos, while permission management tools help ensure that your activities remain within acceptable boundaries.

Overcoming Common Challenges

While Gemini CLI and MCP servers are powerful tools, users may encounter challenges during setup or operation. Common issues include an incorrectly set up configuration file or difficulties disabling default tools, such as Google search, within Gemini CLI. Addressing these challenges often involves revisiting configuration files or consulting official documentation for detailed guidance. If persistent issues arise, consider running the Bright Data MCP server on a cloud desktop environment. This approach provides a stable and controlled platform for data extraction tasks, reducing the likelihood of disruptions and enhancing overall functionality.

Enhancing Operations with Cloud Desktop Integration

Setting up the Bright Data MCP server on a cloud desktop offers several advantages, particularly for users managing complex or large-scale data extraction projects. The process involves editing the configuration file to include your API token and other critical settings.

• Secure configuration storage: Safeguard sensitive settings and access them from any location.
• Controlled environment: Execute complex scraping tasks without impacting the performance of your local system.
• Scalability: Easily expand operations to handle larger datasets or more intricate workflows.

By using a cloud desktop, you can create a reliable and scalable foundation for your data extraction activities, ensuring consistent performance and security.

The Evolving Potential of Gemini CLI

As an open source tool, Gemini CLI continues to benefit from ongoing development and community contributions. Regular updates introduce new features, enhance compatibility with MCP servers, and improve overall functionality.
For professionals seeking efficient and scalable data extraction solutions, Gemini CLI remains a valuable and adaptable resource. By staying informed about updates and actively engaging with the tool's development, you can ensure that your data extraction workflows remain at the forefront of technological advancements.

Media Credit: Prompt Engineering

5 AI Regulation Lies Everyone Must Stop Believing

Forbes

27-06-2025



Most professionals wrongly assume AI regulation only affects tech developers, but current laws actually target end users and businesses across all industries.

AI is evolving and being adopted at lightning speed, and laws designed to keep us safe aren't keeping pace. You probably knew that already. But when it comes to AI regulation, there are lots of other ideas and conceptions that might not be so watertight. The topic of AI regulation is a vast one that covers everything from different attitudes to privacy and human rights to the challenges of enforcing rules on how we use tools that are often open source and readily accessible to anyone. However, understanding its implications is becoming increasingly important as we find ourselves having to make decisions about how we use AI in business and our personal lives. So here's my overview of five misconceptions about the way AI is regulated that should be put to bed if we want to understand how it will affect us, our business or society at large.

AI Regulations Only Matter To Techies

Many people's first assumption is that AI regulation is something that only AI engineers, data scientists and developers have to worry about. But with AI systems increasingly becoming embedded in business functions from marketing and HR to customer service, everyone now has obligations to ensure it's used legally and safely. It's important to remember that the AI regulation we've seen so far, such as the EU, Chinese and assorted US legislation, mainly imposes obligations on those using AI rather than those developing it. Regardless of a professional's role within their organization, they will have to understand the rules and safeguards. This means understanding what data they're using to do their job, what's being done with it, and what needs to be done to make sure they stay on the right side of the law.

AI Regulation Stifles Innovation

There's a strong feeling among sections of the AI community that regulation stifles innovation. By imposing rules, the argument goes, AI developers are restricted in what they can build, and users are restricted in what they can do. The counter-argument is that regulation actually fosters innovation by creating a level playing field and giving businesses confidence that they're working within legal and ethical frameworks. By putting up guard rails around potentially dangerous or harmful use cases, regulation helps industries build trust with customers and experiment safely with new ideas. In practice, this is a balancing act, with regulators aiming to facilitate innovation while mitigating risk. However, looking at regulation as anti-innovation or unnecessary interference is a frequent and dangerous misconception.

AI Regulations Control What Can Be Developed

We touched on this before, but it really deserves to be its own point. A layman might assume that AI regulation is something imposed on big AI developers like Google or OpenAI that somehow restricts what they can build. In reality, most legislation we've seen so far has focused on the impact of AI and what can be done by those using it. The EU act, for example, bans or strictly regulates "high risk" AI activities like social scoring, real-time biometric identification of people in public places, and exploiting vulnerable groups of people. Other use cases, such as facial recognition, are limited to law enforcement and subject to strict guidelines.
So, with developers still essentially free to build incredibly powerful models, the lesson is that just because something is possible with AI, it doesn't mean it's legal. Ultimately, end users take responsibility for the results of their actions.

Geopolitics Overrides AI Legislation

Back in 2017, Vladimir Putin said that whoever becomes the leader of AI would be the leader of the world. His prediction seems to be on track so far. With the advantages in warfare, intelligence, and economy that AI maturity will grant a nation-state, why would leaders put up barriers to achieving it? In reality, it's because they understand that regulation itself can be used as a tool to further their political and geopolitical agendas. The EU, for example, emphasizes the importance of preserving privacy and fundamental citizen rights in its legislation, whereas China's policies focus on maintaining social harmony and law enforcement. In the US, legislators have shown that increasing the competitiveness of the domestic AI industry is a priority. Taking an early lead in the AI arms race gives nations the opportunity to shape the direction the AI market will take in the next 10 years and beyond, and regulation is a key tool for getting this done.

AI Can't Be Regulated Because It's A 'Black Box'

Even the creators of foundation AI systems, such as the large language models (LLMs) powering ChatGPT, don't know exactly how they work. And if no one knows how they work, how can we impose rules on them? Will they even follow them? Perhaps they could even pretend to follow them (alignment faking) to lull us into a false sense of security and advantage themselves in some way. These are questions that frequently come up when the pros and cons of AI regulation are being debated. However, as we've covered, regulation isn't designed to control or limit the development or capability of AI. It's there to put guardrails around potentially dangerous behaviors. By regulating with a focus on outcomes, we don't have to fully understand AI in order to regulate it. Ensuring that the regulatory frameworks we're building now are robust is critical for making sure we can deal with the implications of more advanced and potentially dangerous AI in the future.

Why Everyone Should Understand How AI Regulation Will Affect Them

It isn't just governmental policymakers and computer scientists who need to understand how and why AI is regulated and how regulations affect them. As AI becomes increasingly embedded in our lives, understanding the rules and why they exist will become critical to capitalizing on the opportunities it offers in a safe and ethical way.

Dubai real estate: Dubizzle Group hires 80+ data scientists to boost AI operations

Arabian Business

24-06-2025



Dubizzle Group has recruited over 80 data scientists and engineers to expand its Business Intelligence and data science operations, the company announced today. The recruitment drive adds to the group's existing pool of specialists as it seeks to strengthen its position in the MENA PropTech sector. The group operates the real estate platforms Bayut and dubizzle and is making the investment to future-proof its technology ecosystem and establish industry benchmarks. The expansion aligns with the UAE government's efforts to integrate AI across various sectors.

UAE PropTech unicorn Dubizzle Group expands AI team with 80 new recruits

Dubizzle Group is developing one of the most specialised BI teams in the regional digital classifieds and PropTech landscape, with plans for further growth. The recruitment supports the company's vision to create AI-enabled understanding of user behaviour, market dynamics and platform optimisation. The developments will power personalised experiences, predictive tools and performance insights for property seekers, agents and developers. The group currently operates over 70 deployed AI models and 64 proprietary models, generating an average of 49 million predictions monthly. This AI infrastructure powers core products including BayutGPT, TruEstimate™ and Sell with AI, which are transforming how users search, evaluate and engage with property and classified listings on both platforms.

'We're building more than just portals. We're shaping the future of how people interact with classifieds in the digital age. The expansion of manpower in our tech ecosystem is not just a hiring milestone; it's a strategic investment in making Bayut and dubizzle the most intelligent platforms in the region. This is just the beginning; we will continue to expand this workforce and bring in talented professionals to enhance our tech capabilities. Our ambition is to stand shoulder-to-shoulder with global benchmarks, and in many ways, set new ones,' said Haider Ali Khan, CEO of Dubizzle Group MENA.

The talent expansion coincides with government announcements to include AI-based learning in curricula, incorporating understanding of the technology from an early stage. The group has rolled out features including TruEstimate™, TruBroker™, Sell with AI, and BayutGPT, demonstrating AI applications in real estate transactions and decision-making. These innovations have redefined how property seekers explore listings and how agents build trust and visibility. The group's BI team now represents one of the largest and most specialised data units in the MENA digital classifieds and PropTech landscape.

Five essential skills for building AI-ready teams

Entrepreneur

09-06-2025



AI is developing at a rapid pace and transforming the way global industries operate. As companies accelerate AI adoption in order to stay competitive and reap its potential benefits, the urgency for building AI-ready capabilities in the organisation is increasing.

Opinions expressed by Entrepreneur contributors are their own. You're reading Entrepreneur United Kingdom, an international franchise of Entrepreneur Media.

The true value of AI-based solutions depends on the teams who understand, challenge, embrace and integrate them wisely. In my new book, Artificial Intelligence For Business, I highlight the impact of AI on the future of work, specifically the skills gaps and job displacements, as well as the essential skills global organisations will require in the future. For business leaders, building AI-ready teams means more than just hiring technical experts or data scientists. Success in the AI business landscape means upskilling the workforce to develop five essential capabilities that will enable people to thrive.

AI literacy and understanding

Knowledge and understanding of AI can seem overwhelming, particularly to those in non-technical roles who may struggle with the constant flood of information about large language models, Python code and AI platform functionalities. While not everyone needs a deep technical understanding of how to develop AI-based solutions, they should understand what AI can do and where its limitations lie. AI literacy goes beyond a mere understanding of AI technologies. It involves building a foundational understanding of its context and value, as well as the ability to question its design and implementation. AI literacy should be developed across all teams in an organisation, covering how AI works, the different types of AI solutions, how data is used, where bias can creep in, and what real-world applications look like in the relevant industry. Building AI literacy begins with organisational education and training programs that offer executive-level understanding of AI capabilities, limitations and risks, as well as industry-specific applications. Additionally, hands-on experience and real-world applications are critical in developing an understanding of AI in a business context. The aim is to raise the level of understanding to ensure every AI-related business decision is made with awareness and purpose.

Critical thinking and data scepticism

As we increasingly apply AI-based technologies in our daily business, the outcomes can be quite compelling. The potential productivity gains and scale of benefit are driving organisations to implement AI-based solutions across various business functions. The outputs of AI tools may appear clean and professional, but they may not always be rooted in accuracy or truth. In addition, there may be hidden biases that could be detrimental, particularly if the outputs are used in critical decision-making processes. AI-ready teams need to develop critical thinking skills: the ability to analyse AI outputs, identify anomalies or biases, and make well-informed decisions about their use. As organisations increasingly use AI-based systems, there is a risk of over-reliance on and trust in their output, without truly understanding how the outcomes are derived. This is where critical thinking becomes indispensable. Building internal capabilities in 'data scepticism', or the ability to challenge assumptions, examine how models are trained, and identify potential errors, anomalies or biases in the output, is critical for organisations.
Although a certain level of technical competency may be required to dig deep into an AI system's capabilities, a basic level of confidence to raise concerns and questions across all teams interacting with AI solutions and outputs will be essential for organisations. Deep technical training is not required for this. More importantly, leadership teams should prioritise building an organisational culture where employees are encouraged to question and analyse AI-generated insights. For example, establishing scenario-based exercises, diverse team discussions and formalised feedback loops will help sharpen critical thinking skills across the organisation.

Human-machine collaboration

As the capabilities of AI-based technologies rapidly advance, the question of whether to replace human resources with AI is becoming increasingly dominant in the global business landscape. In recent months, we have seen several global organisations make headlines as the decision to replace laid-off workers with AI and automation takes centre stage. This includes brands such as Klarna, UPS, Duolingo, Google and Salesforce, among many others. In my experience, the integration of new technologies does not automatically mean replacing people. As we have observed over decades of industrial revolution, technology enables shifts in working environments, taking over tasks and pushing human resources towards more complex or different types of work. Although AI development is significantly more rapid and its capabilities enable more sophisticated tasks, the cycle of shifting work remains the same. In the AI age, this means creating new kinds of teams where humans and intelligent systems collaborate effectively to deliver cohesive and sophisticated work at an accelerated pace. To support this, companies should focus on role redesign, process mapping, and experimentation with AI tools in real workflows. Encourage cross-functional collaboration between business, tech, and data teams to break down silos and co-create solutions. The key is to help people see AI as an assistant, not a threat.

Ethical reasoning and responsible innovation

With the rise of AI application in business comes a surge of ethical concerns and risks, including bias, data privacy and over-reliance on AI for critical decision-making. To leverage AI-based technologies effectively, organisations cannot afford to overlook these concerns, particularly given the developing regulatory scrutiny and the fragility of consumer trust. Every team should receive education and training on the ethical concerns and challenges of AI application in business, including the ability to recognise biases in data and outputs, understanding explainability requirements, and making inclusive decisions. Responsible use of AI should be a foundational part of the organisational culture. Realistically, this goes beyond formal training programs to enable successful adoption in organisations. Transparent communication, open dialogue, best practices and use cases are needed to explore potential unintended consequences and ensure responsible use is top of mind for all teams. Ethical reasoning should not be designed to slow innovation, but to ensure that innovation can flourish within the space of safe and responsible use for the business.

Adaptive learning and growth mindset

One of the most foundational skills for an AI-ready team is adaptability. Exponential technologies, particularly AI, are developing rapidly and constantly changing.
The most valuable skill in an AI-ready organisation is not knowing everything, but being curious, open to change and continuously willing to learn. Embedding this growth mindset in how teams work and collaborate gives employees permission to explore new capabilities, learn quickly from failure, and experiment with new tools and solutions within a safe environment. In the current AI age, organisations need to prioritise investment in microlearning platforms that encourage continuous rapid learning and knowledge sharing, and reward curiosity. Critically, leadership teams should model this mindset, demonstrating the willingness to evolve and rethink traditional assumptions and limitations. Adaptability will ensure the organisation does not just survive the era of AI transformation, but thrives in it.

AI-readiness goes beyond training programs, certifications and tool proficiency. It is truly a team-wide capability that requires sustainable investment in people. The future of work is not only shaped by the rapid development of AI, but by how intelligently organisations prepare the workforce to embrace it responsibly.

Predictive AI Must Be Valuated – But Rarely Is. Here's How To Do It

Forbes

27-05-2025



Most predictive AI projects neglect to estimate the potential profit, a practice known as ML valuation, and that spells project failure. Here's the how-to.

To be a business is to constantly work toward improved operations. As a business grows, this usually leads to the possibility of using predictive AI, which is the kind of analytics that improves existing, large-scale operations. But the mystique of predictive AI routinely kills its value. Rather than focusing on the concrete win that its deployment could deliver, leaders get distracted by the core tech's glamor. After all, learning from data to predict is sexy. This in turn leads to skipping a critical step: forecasting the operational improvement that predictive AI operationalization would deliver. As with any kind of change to large-scale operations, you can't move forward without a credible estimate of the business improvement you stand to gain, in straightforward terms like profit or other business KPIs. Not doing so makes deployment a shot in the dark. Indeed, most predictive AI launches are scrubbed.

So why do most predictive AI projects fail to estimate the business value, much to their own demise? Ultimately, this is not a technology fail; it's an organizational one, a glaring symptom of the biz/tech divide. Business stakeholders delegate almost every aspect of the project to data scientists. Meanwhile, data scientists as a species are mostly stuck on arcane technical metrics, with little attention to business metrics. The typical data scientist's training, practice, shop-talk and toolset omits business metrics. Technical metrics define their comfort zone. Estimating the profit or other business upside of deploying predictive AI, aka ML valuation, is only a matter of arithmetic. It isn't the "rocket science" part, the ML algorithm that learns from data. Rather, it's the much-needed prelaunch stress-testing of the rocket.

Say you work at a bank processing 10 million credit card and ATM card transactions each quarter. With 3.5% of the transactions fraudulent, the pressure is on to predictively block those transactions most likely to fall into that category. With ML, your data scientists have developed a fraud-detection model that calculates a risk level for each transaction. Within the most risky 150,000 transactions, that is, the 1.5% of transactions the model considers most likely to be fraud, 143,000 are fraudulent. The other 7,000 are legitimate. So, should the bank block that group of high-risk transactions? Sounds reasonable off the cuff, but let's actually calculate the potential winnings.

Suppose that those 143,000 fraudulent transactions represent $18,225,000 in charges, which is about $127 each on average. That's a lot of fraud loss to be saved by blocking them. But what about the downside of blocking them? If it costs your bank an average of $75 each time you wrongly block due to cardholder inconvenience, which would be the case for each of the 7,000 legit transactions, that comes to $525,000. That barely dents the upside, with the net win coming to $17,700,000. So yeah, if you'd like to gain almost $18 million, then block those 1.5% most risky transactions. This is the monetary savings of fraud detection, and a penny saved is a penny earned. But that doesn't necessarily mean that 1.5% is the best place to draw the line. How much more might we save by blocking even more?
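To make the arithmetic concrete, here is a minimal sketch of that net-savings calculation in Python. The figures (143,000 caught frauds totalling $18,225,000, 7,000 wrong blocks at a $75 cost each) come from the example above; the function name and structure are illustrative, not code from the article.

```python
# Minimal sketch of the ML valuation arithmetic from the fraud example above.
# Dollar figures mirror the article's example; the helper is illustrative only.

def net_savings(fraud_blocked: int, avg_fraud_amount: float,
                legit_blocked: int, wrong_block_cost: float) -> float:
    """Fraud losses avoided minus the cost of inconveniencing legit cardholders."""
    savings = fraud_blocked * avg_fraud_amount   # fraud charges prevented
    penalty = legit_blocked * wrong_block_cost   # cost of wrongly blocked transactions
    return savings - penalty

# Blocking the 1.5% most risky of 10M quarterly transactions:
# 143,000 fraudulent (about $127 on average) and 7,000 legitimate blocks.
win = net_savings(fraud_blocked=143_000,
                  avg_fraud_amount=18_225_000 / 143_000,
                  legit_blocked=7_000,
                  wrong_block_cost=75.0)
print(f"Net quarterly win: ${win:,.0f}")  # roughly $17,700,000
```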
The more we block, the more lower-risk transactions we block, and yet the net value might continue to increase if we go a ways further. Where to stop? The 2% most risky? The 2.5% most risky? To navigate the range of predictive AI deployment options, you've just got to look at it:

Figure: A savings curve comparing the potential money saved by blocking the most risky payment card transactions with fraud-detection models. The performance of three competing models is shown.

This shows the monetary win for a range of deployment options. The vertical axis represents the money saved with fraud detection, based on the same kind of calculations as those in the previous example, and the horizontal axis represents the portion of transactions blocked, from most risky (far left) to least risky (far right). This view is zoomed into the range from 0% to 15%, since a bank would normally block at most only the top, say, two or three percent. The three colors represent three competing ML models: two variations of XGBoost and one random forest (these are popular ML methods). The first XGBoost model is the best one overall. The savings are calculated over a real collection of e-commerce transactions, as were the previous example's calculations.

Let's jump to the curve's peak. We would maximize the expected win to more than $26 million by blocking the top 2.94% most risky transactions according to the first XGBoost model. But this deployment plan isn't a done deal yet; there are other, competing considerations. First, consider how often transactions would be wrongly blocked. It turns out that blocking that 2.94% would inconvenience legit cardholders an estimated 72,000 times per quarter. That adverse effect is already baked into the expected $26 million estimate, but it could incur other intangible or longer-term costs; the business doesn't like it. But the relative flatness near the curve's peak signals an opportunity: if we block fewer transactions, we can greatly reduce the expected number wrongly blocked with only a small decrease in savings. For example, it turns out that blocking 2.33% rather than 2.94% cuts the number of estimated bad blocks in half to 35,000, while still capturing an expected $25 million in savings. The bank might be more comfortable with this plan.

As compelling as these estimated financial wins are, we must take steps to shore up their credibility, since they hinge on certain business assumptions. After all, the actual win of any operational improvement, whether driven by analytics or otherwise, is only certain after it's been achieved, in a "post mortem" analysis. Before deployment, we're challenged to estimate the expected value and to demonstrate its credibility. One business assumption within the analysis described so far is that unblocked fraudulent transactions cost the bank the full magnitude of the transaction. A $100 fraudulent transaction costs $100 (while blocking it saves $100). And a $1,000 fraudulent transaction indeed costs ten times as much. But circumstances may not be that simple, and they may be subject to change. For example, certain enforcement efforts might serve to recoup some fraud losses by investigating fraudulent transactions even after they were permitted. Or the bank might hold insurance that covers some losses due to fraud. If there's uncertainty about exactly where this factor lands, we can address it by viewing how the overall savings would change if that factor changed.
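That sensitivity check amounts to recomputing the savings curve under different cost assumptions. Here is a hedged sketch of that idea, assuming you already have a table of model-scored transactions (the scoring itself is outside this snippet); the column names and cost parameters are illustrative placeholders, not the article's actual tooling.

```python
# Hedged sketch: recompute a savings curve under different business assumptions.
# Assumes a scored dataset with one row per transaction; column names and cost
# parameters are illustrative, not the article's tooling.
import numpy as np
import pandas as pd

def savings_curve(df: pd.DataFrame, fraud_cost_factor: float = 1.0,
                  wrong_block_cost: float = 75.0) -> pd.DataFrame:
    """Expected savings as a function of the share of transactions blocked.

    df needs: 'risk_score' (model output), 'amount' (transaction value),
    'is_fraud' (ground truth). fraud_cost_factor scales how much of each
    fraudulent amount the bank actually loses (e.g. 0.8 if 20% is recouped).
    """
    ranked = df.sort_values("risk_score", ascending=False).reset_index(drop=True)
    # Gain when a blocked transaction is fraud, penalty when it is legitimate.
    saved = np.where(ranked["is_fraud"],
                     ranked["amount"] * fraud_cost_factor,
                     -wrong_block_cost)
    cumulative = np.cumsum(saved)  # savings if we block the top k transactions
    share_blocked = np.arange(1, len(ranked) + 1) / len(ranked)
    return pd.DataFrame({"share_blocked": share_blocked, "savings": cumulative})

# Example usage with a hypothetical scored_transactions DataFrame:
# curve_100 = savings_curve(scored_transactions)                        # baseline
# curve_80 = savings_curve(scored_transactions, fraud_cost_factor=0.8)  # 80% scenario
# peak = curve_80.loc[curve_80["savings"].idxmax()]  # peak shifts, much as described below
```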
Here's the curve when fraud costs the bank only 80% rather than 100% of each transaction amount:

Figure: The same chart, except with each unblocked fraudulent transaction costing only 80% of the transaction amount, rather than 100%.

It turns out the peak decreases from $26 million down to $20 million. This is because there's less money to be saved by fraud detection when fraud itself is less costly. But the position of the peak has moved only a little: from 2.94% to 2.62%. In other words, not much doubt is cast upon where to draw the decision boundary.

Another business assumption we have in place is the cost of wrongly blocking, currently set at $75, since an inconvenienced cardholder will be more likely to use their card less often (or cancel it entirely). The bank would like to decrease this cost, so it might consider taking measures accordingly. For example, it could provide a $10 "apology" gift card each time it realizes its mistake, an expensive endeavor, but one that might turn out to decrease the net cost of wrongly blocking from $75 down to $50. Here's how that would affect the savings curve:

Figure: The same chart, except with each wrongly blocked transaction costing only $50, rather than $75.

This increases the peak estimated savings to $28.6 million, and moves that peak from 2.94% up to 3.47%. Again, we've gained valuable insight: this scenario would warrant a meaningful increase in how many transactions are blocked (drawing the decision boundary further to the right), but would only increase profit by $2.6 million. Considering that this guesstimated cost reduction is a pretty optimistic one, is it worth the expense, complexity and uncertainty of even testing this kind of "apology" campaign in the first place? Perhaps not.

For a predictive AI project to defy the odds and stand a chance at successful deployment, business-side stakeholders must be empowered to make an informed decision as to whether, which and how: whether the project is ready for deployment, which ML model to deploy, and with what decision boundary (the percent of cases to be treated versus not treated). They need to see the potential win in terms of business metrics like profit, savings or other KPIs, across a range of deployment options. And they must see how certain business factors that could be subject to change or uncertainty affect this range of options and their estimated value. We have a name for this kind of interactive visualization: ML valuation. This practice is the main missing ingredient in how predictive AI projects are typically run. ML valuation stands to rectify today's dismal track record for predictive AI deployment, boosting the value captured by this technology closer to its true potential. Given how frequently predictive AI fails to demonstrate a deployed ROI, the adoption of ML valuation is inevitable. In the meantime, it will be a true win for professionals and stakeholders to act early, get out ahead of it, and differentiate themselves as value-focused practitioners of the art.
