Latest news with #LMStudio


Arabian Post
25-06-2025
Privacy‑First Code Editing Gains Traction
Void, a Y Combinator-backed, open-source AI code editor, has entered beta testing, promising developers full control over their code and data while delivering advanced AI capabilities. Launched this month, it positions itself as a credible contender to proprietary rivals such as Cursor and GitHub Copilot. Developers can choose whether to host models locally via tools like Ollama and LM Studio, or connect directly to APIs for Claude, GPT, Gemini and others, bypassing third-party data pipelines entirely.

Built as a fork of Visual Studio Code, Void is compatible with existing themes, extensions and key-bindings. It supports familiar developer workflows, including an integrated terminal, Git tools and language-server support, and overlays AI-driven features such as inline coding suggestions, a chat assistant, and agent modes that understand a full codebase. Unlike most proprietary editors, Void is fully open source, enabling users to inspect and modify prompts, index their own files, and control how the AI interacts with their repositories.

The founders, twins Andrew and Mathew Pareles, come from Cornell and previously launched a platform for technical interview preparation. Their vision is an open-source IDE that matches Cursor and Copilot features without locking user data into a closed backend. A LinkedIn preview indicates upcoming support for a third-party extension marketplace, with integrations including Greptile for codebase search and DocSearch for documentation retrieval.

Developers testing Void praise its responsiveness and privacy focus. One review demonstrates connecting Void to a local Gemma 3 12B LLM via LM Studio, enabling summarisation and inline code queries without data leaving the machine (see the sketch below). Performance reportedly improves significantly once proper GPU drivers are installed. On Hacker News and Reddit, users highlight the freedom to self-host AI models and steer clear of vendor-locked services. Some caution that deep integration with the VS Code UI may present long-term maintenance challenges.

Meanwhile, competitors press ahead. Cursor, a proprietary AI IDE developed by Anysphere Inc, rolled out version 1.0 on 4 June 2025 after raising its valuation to US$9 billion in May. It features agent-mode tasks and SOC 2 certified privacy options. However, its closed-source nature means all processing occurs on remote backends, which some developers view as a risk.

Security analyses of AI-generated code caution that tools like Copilot and Cursor can introduce vulnerabilities. One empirical study found that nearly 30 per cent of AI-generated Python code contained security issues, such as injection flaws (a hypothetical illustration appears below), underscoring the need for developer scrutiny. Void mitigates some of these concerns by giving users full transparency over, and the ability to edit, prompts and code flows. This may help reduce hallucinated or insecure output, provided developers systematically inspect and test the results.

Academic research also reveals broader concerns: open-source extensions, including AI-powered ones, have sometimes exposed sensitive keys in IDE environments. Void's model, which processes data locally unless explicitly routed to trusted APIs, could lessen this risk compared with cloud-first tools whose extension frameworks may inadvertently leak secrets.

Void's roadmap includes multi-file operations, checkpointing for AI-powered edits, and visual diff tools. Community contributions are encouraged via GitHub, and weekly contributor meetups are hosted on Discord.
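For readers curious how a local setup like the one in that review works: LM Studio can serve a downloaded model through a local, OpenAI-compatible HTTP API, by default at http://localhost:1234/v1. The Python sketch below is not taken from the review; it assumes the local server is enabled with a model loaded, and the identifier gemma-3-12b is hypothetical (use whatever name LM Studio reports for your model, e.g. via GET /v1/models).

    # Minimal sketch: ask a model served locally by LM Studio to summarise code.
    # Assumes LM Studio's local server is running on its default port (1234).
    # "gemma-3-12b" is a hypothetical model identifier; substitute your own.
    # Requires the third-party 'requests' package (pip install requests).
    import requests

    resp = requests.post(
        "http://localhost:1234/v1/chat/completions",
        json={
            "model": "gemma-3-12b",  # hypothetical; match your loaded model
            "messages": [{
                "role": "user",
                "content": "Summarise what this function does:\n"
                           "def f(xs): return [x * x for x in xs if x > 0]",
            }],
            "temperature": 0.2,
        },
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])

Nothing in that exchange leaves the machine, which is precisely the property reviewers highlight.

As for the injection flaws such studies count, the textbook case is SQL assembled by string interpolation, a pattern code assistants still sometimes emit. The snippet below is a hypothetical, self-contained illustration, not code from the study, contrasting the vulnerable pattern with a parameterised query:

    # Hypothetical illustration of a classic injection flaw.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    user_input = "nobody' OR '1'='1"  # attacker-controlled value

    # Vulnerable: the input is spliced into the SQL string, so the OR clause
    # rewrites the query and matches every row.
    rows = conn.execute(
        f"SELECT role FROM users WHERE name = '{user_input}'").fetchall()
    print("vulnerable query returned:", rows)  # leaks the admin row

    # Safe: a parameterised query treats the input as data, not SQL.
    rows = conn.execute(
        "SELECT role FROM users WHERE name = ?", (user_input,)).fetchall()
    print("parameterised query returned:", rows)  # returns nothing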
Adoption so far has drawn interest from privacy‑focused and FOSS‑oriented developers who value self‑hosting. Questions remain about long‑term maintainability, performance optimisation, and whether the editor can match the polish and ecosystem of its proprietary competitors. However, early signs indicate strong potential for reshaping the AI‑IDE landscape by prioritising transparency and user control over convenience and lock‑in.


Biz Bahrain
14-06-2025
New malware posing as an AI assistant steals user data
Kaspersky Global Research & Analysis Team researchers have discovered a new malicious campaign distributing a Trojan through a fake DeepSeek-R1 Large Language Model (LLM) app for PCs. The previously unknown malware is delivered via a phishing site that pretends to be the official DeepSeek homepage and is promoted via Google Ads. The goal of the attacks is to install BrowserVenom, malware that configures web browsers on the victim's device to channel web traffic through the attackers' servers, allowing them to collect user data such as credentials and other sensitive information. Multiple infections have been detected in Brazil, Cuba, Mexico, India, Nepal, South Africa and Egypt.

DeepSeek-R1 is one of the most popular LLMs right now, and Kaspersky has previously reported attacks using malware that mimics it to attract victims. DeepSeek can also be run offline on PCs using tools like Ollama or LM Studio, and the attackers exploited this in their campaign. Users searching for 'deepseek r1' were shown a Google Ads link leading to a phishing site that mimicked the address of the original DeepSeek platform.

Once the user reached the fake DeepSeek site, a check was performed to identify the victim's operating system. If it was Windows, the user was presented with a button to download the tools for working with the LLM offline; other operating systems were not targeted at the time of research. After the user clicked the button and passed a CAPTCHA test, a malicious installer file was downloaded, and the user was offered options to download and install Ollama or LM Studio. If either option was chosen, the malware was installed alongside the legitimate Ollama or LM Studio installer, bypassing Windows Defender's protection with a special algorithm. This procedure required administrator privileges for the user's Windows profile; if the profile lacked these privileges, the infection would not take place.

Once installed, the malware configured all web browsers in the system to forcefully use a proxy controlled by the attackers, enabling them to spy on sensitive browsing data and monitor the victim's browsing activity (a simple defensive check is sketched at the end of this article). Because of its enforcing nature and malicious intent, Kaspersky researchers have dubbed this malware BrowserVenom.

'While running large language models offline offers privacy benefits and reduces reliance on cloud services, it can also come with substantial risks if proper precautions aren't taken. Cybercriminals are increasingly exploiting the popularity of open-source AI tools by distributing malicious packages and fake installers that can covertly install keyloggers, cryptominers, or infostealers. These fake tools compromise a user's sensitive data and pose a threat, particularly when users have downloaded them from unverified sources,' comments Lisandro Ubiedo, Security Researcher with Kaspersky's Global Research & Analysis Team.

To avoid such threats, Kaspersky recommends:
• Check website addresses to verify that they are genuine and avoid scams.
• Download offline LLM tools only from official sources (e.g., ollama.com or lmstudio.ai).
• Avoid day-to-day use of a Windows profile with administrator privileges.
• Use trusted cybersecurity solutions to prevent malicious files from launching.
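Because BrowserVenom's footprint is an enforced proxy, Windows users can quickly audit their machine by inspecting the per-user proxy values in the registry. The sketch below is a generic defensive check, not taken from Kaspersky's report; note that Chrome and Edge follow these system settings, while Firefox manages its own proxy configuration, which this check does not cover.

    # Minimal sketch: print the current user's Windows proxy settings.
    # Windows-only; uses the standard-library winreg module.
    import winreg

    KEY = r"Software\Microsoft\Windows\CurrentVersion\Internet Settings"

    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY) as key:
        for name in ("ProxyEnable", "ProxyServer", "AutoConfigURL"):
            try:
                value, _ = winreg.QueryValueEx(key, name)
                print(f"{name} = {value!r}")
            except FileNotFoundError:
                print(f"{name} is not set")

    # ProxyEnable = 1 with an unfamiliar ProxyServer address, or an unexpected
    # AutoConfigURL, is worth investigating; on its own it is not proof of infection.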