
Free coding help from Gemini Code Assist now available
Dubai, UAE – The public preview of Gemini Code Assist for individuals, a free version of the Gemini Code Assist AI coding assistant, has been announced. The free version offers AI-assisted coding help to developers worldwide, with generous usage limits and code review assistance.
Recent research indicates that over 75% of developers use AI in their daily work. While well-resourced organizations are equipping their engineering teams with the latest AI capabilities, that level of tooling hasn't always been accessible to students, hobbyists, freelancers, and startups. With the worldwide developer population forecast to reach 57.8 million by 2028, AI tools should be readily available so that everyone can start building with the standard digital tools of the future.
To bridge that gap, Gemini Code Assist for individuals is now globally accessible and powered by Gemini 2.0. It supports all publicly available programming languages and is optimized specifically for coding: the Gemini 2.0 model was fine-tuned on a large dataset of real-world coding use cases, yielding highly effective AI-generated recommendations. Unlike other free coding assistants with tight usage limits, such as 2,000 code completions per month, Gemini Code Assist offers up to 180,000 code completions monthly, a ceiling high enough that even the most demanding developers are unlikely to reach it.
Beyond code generation, Gemini Code Assist also facilitates improved code quality. Recognizing the time-consuming nature of code reviews, the public preview of Gemini Code Assist for GitHub provides free, AI-powered code reviews for both public and private repositories.
AI coding assistance
To further enhance accessibility, the free version of Gemini Code Assist is available in Visual Studio Code and JetBrains IDEs. This integration provides the same capabilities previously offered to businesses, and currently available in Firebase and Android Studio, directly within developers' working environments. Users can now conveniently learn, create code snippets, debug, and modify applications without switching between windows or copying information from disparate sources.
With a generous usage limit of up to 90 times more code completions per month than other free assistants, Gemini Code Assist caters to a wide range of developers. Students and professionals alike can benefit without fear of hitting usage caps or chat limits interrupting their workflow.
Gemini Code Assist for individuals features a large token context window, supporting up to 128,000 tokens in chat. This allows developers to utilize large files and provide Gemini Code Assist with a comprehensive understanding of their codebases.
The chat feature within Gemini Code Assist enables developers to focus on creative aspects of development by automating repetitive tasks like writing comments or generating automated tests.
Developers can use natural language to generate, explain, and improve code. For example, a freelance web developer can request a simple HTML form, while someone automating routine tasks can ask for a script that sends daily weather forecasts or explanations of Python code snippets.
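To make the weather-forecast example concrete, here is a minimal sketch of the kind of script a developer might ask Gemini Code Assist to generate. The API endpoint and response fields below are hypothetical placeholders, not part of the announcement; a real weather service and its actual schema would need to be substituted.

    # Minimal sketch: fetch and print a daily weather forecast.
    # The endpoint and response fields are hypothetical placeholders.
    import requests

    API_URL = "https://api.example-weather.com/v1/forecast"  # hypothetical

    def fetch_forecast(city: str) -> str:
        """Return a one-line forecast summary for the given city."""
        response = requests.get(API_URL, params={"city": city}, timeout=10)
        response.raise_for_status()
        data = response.json()
        # Assumed response shape: {"summary": "...", "high_c": 31}
        return f"{city}: {data['summary']}, high of {data['high_c']}°C"

    if __name__ == "__main__":
        print(fetch_forecast("Dubai"))

Scheduling a script like this with cron or any other task scheduler would turn it into the daily notification described above.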
Automating repetitive code reviews with Gemini Code Assist for GitHub
With Gemini Code Assist for GitHub, developers get a powerful helping hand that can detect stylistic issues and bugs and automatically suggest code changes and fixes. Offloading basic reviews to an AI agent can help make code repositories more maintainable and improve quality, allowing developers to focus on more complex tasks. It's available directly in GitHub, where most open-source developers post and review code, via a GitHub app.
Different developer teams may also have different best practices, coding conventions and preferred frameworks and libraries. To address this need, Gemini Code Assist for GitHub supports custom style guides for code reviews. Each team can describe which instructions Gemini should follow when reviewing code in a .gemini/styleguide.md file in their repository. That way, Gemini tailors its code reviews to the needs of the repository.
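As an illustration only, a .gemini/styleguide.md file might look like the following; the rules themselves are invented for this example, and each team would write whatever conventions it wants Gemini to enforce:

    # Style guide for code reviews (illustrative example)

    - Use snake_case for Python functions and variables, PascalCase for classes.
    - Every public function must have a docstring and type hints.
    - Prefer f-strings over str.format() or % formatting.
    - Flag print() calls in library code; suggest the logging module instead.
    - Keep functions under 50 lines; suggest extraction when they grow larger.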
Getting started
Getting started with Gemini Code Assist is simple. Users can sign up quickly with a personal Gmail account and install the tool in Visual Studio Code, GitHub, or JetBrains IDEs.
Feedback from the public preview will be used to further refine Gemini Code Assist. Users can submit feedback directly through the 'Send feedback' form in the IDE or in GitHub.
For those requiring advanced features like productivity metrics, customized AI responses based on private source code repositories, or integrations with Google Cloud services like BigQuery, Gemini Code Assist Standard or Enterprise are available.

Related Articles


Arabian Post – 2 days ago
Rostec Readies Ruble‑Backed RUBx for National Roll‑out
Russian state-owned conglomerate Rostec is set to introduce RUBx—a stablecoin pegged directly to the ruble—and RT‑Pay, a payment platform, before the end of 2025. These innovations aim to empower businesses and individuals with secure and compliant digital transaction tools.

RUBx will be launched on the Tron blockchain and strictly pegged 1:1 to the ruble, backed by legally governed obligations maintained in a treasury account. Rostec will act as the sole issuer, with the structure codified by law to ensure robustness and transparency. The smart-contract code is expected to be made available on GitHub and independently audited by CertiK, reinforcing compliance with Russian financial regulations.

RT‑Pay, designed to integrate seamlessly with Russia's existing banking rails, will enable transactions outside standard banking hours and facilitate smart-contract functions. Rostec says it has built the platform to adhere to anti‑money‑laundering and counter‑terrorism financing norms and to satisfy all Bank of Russia requirements.

A phased launch is anticipated, initially targeting sectors where efficiency gains can be rapidly realised before expansion to broader corporate and retail use. The move dovetails with Russia's broader strategy to enhance crypto infrastructure, including the central bank's separate digital ruble pilot and the recent authorisation for institutions to offer crypto-related investment products.

Rostec, known for its significant role in defence and high-tech manufacturing, leveraged its credentials as a trusted state entity to assure users of RUBx's legitimacy. 'Each RUBx is backed by real obligations in rubles,' Rostec officials emphasise, underscoring the token's legal anchor.

This initiative aligns with a trend of expanding digital payment solutions amid geopolitical and economic pressures. Russian financial institutions such as Sberbank and Moscow Exchange have already introduced crypto-linked offerings, and some state entities reportedly used crypto instruments to facilitate oil trade and bypass sanctions.

Economists and fintech experts note that the introduction of state-backed digital infrastructure like RUBx and RT‑Pay marks a departure from pilot programs. They highlight the potential for stablecoins—with legal and technological safeguards—to provide a credible alternative to traditional payment systems. By utilising a blockchain infrastructure like Tron, Rostec leans on a mature ecosystem, which may support rapid adoption.

Nevertheless, independent analysts caution that integration risks remain. They point to the need for robust cybersecurity measures, systemic risk controls, and interoperability standards with domestic and international payment systems. Successful implementation will require coordinated efforts among regulators, banks, and end users.


Arabian Post – 23-06-2025
Hyperscalers Form ASIC Coalition to Challenge NVIDIA Dominance
Cloud computing giants AWS, Google, Microsoft, Meta and OpenAI are accelerating in-house development of custom application‑specific integrated circuits, aiming to erode NVIDIA's dominance in high‑performance AI datacentres. Industry reports highlight a projected annual growth rate of around 50% for ASIC purchases by hyperscalers, marking a strategic pivot in the AI hardware landscape.

NVIDIA's premium-priced solutions—including Blackwell GPUs—have placed pressure on hyperscalers to secure more cost‑efficient, scalable systems. With single GPUs ranging from $70,000 to $80,000 and fully configured servers tallying up to $3 million, these companies are betting on internal design to manage costs and supply risks.

Amazon Web Services has notably moved ahead with its in‑house chips—Trainium for training and Inferentia for inference—reporting 30–40% greater cost efficiency compared with NVIDIA hardware. AWS is also collaborating with Marvell and Taiwan's Alchip on next‑generation Trainium versions. Internal indications suggest AWS may deploy as many as half‑a‑million ASIC units in its data centres, an expansive scale‑up that could rival NVIDIA's installed base.

Google, meanwhile, has scaled its TPU v6 Trillium chips, transitioning from single‑supplier to dual‑supplier design by partnering with MediaTek. With deployments reportedly hitting 100,000‑unit clusters to support Gemini 2.0 workloads, Google claims competitive cost-performance metrics relative to NVIDIA GPUs.

Microsoft's forthcoming Maia 200 chip, co‑designed with GUC using TSMC's 3 nm process, is scheduled for commercial release in 2026. Meta's Meta Training and Inference Accelerator, developed alongside Broadcom, Socionext and GUC, is expected in early 2026 on TSMC's 3 nm node, featuring HBM3e memory—another step towards self‑sufficiency in AI compute. OpenAI has also announced a proprietary training processor, with mass production anticipated at TSMC by 2026.

Market projections reflect this tectonic shift. ASICs are poised to claim between $100 billion and $130 billion of custom AI accelerator spend by 2030, with Broadcom estimating a market of $60 billion to $90 billion by 2027. Traditional ASIC powerhouses—Broadcom, Marvell, MediaTek, Alchip and GUC—are experiencing surging demand as they support hyperscaler transitions.

Despite these developments, hyperscalers continue to reserve capacity for NVIDIA chips, recognising the GPU giant's entrenched ecosystem—especially its CUDA software stack—and the steep technical barriers to immediate elimination of GPU dependencies.

The trend resembles historical transitions in specialised compute. Just as cryptocurrency mining moved from GPUs to ASICs for lower costs and greater efficiency, hyperscalers now aim to fragment the AI compute supply chain and diversify their hardware portfolios.

TSMC stands to benefit significantly, serving as the foundry for both NVIDIA's mass‑market GPUs and hyperscaler ASICs. Its chairman emphasises that the competition between NVIDIA and cloud‑designed chips is ultimately beneficial to TSMC, ensuring a broad customer base.

Broadcom has emerged as a frontrunner, with its ASIC and networking chipset revenues soaring 220% to $12.2 billion in 2024. Hyperscalers are investing in clusters featuring up to one million custom XPUs over open‑Ethernet networks—an architecture that places Broadcom and Marvell in strategic positions. Networking ASICs are expected to account for 15–20% of AI data‑centre silicon budgets, rising from the 5–10% range.

Revenue trends reflect these structural shifts. Marvell has secured a multi‑year AI chip deal with AWS and anticipates its AI silicon revenue jumping from $550 million in 2024 to over $2.5 billion in 2026. Broadcom, similarly, is redirecting significant investment toward hyperscaler ASIC demand.

Nevertheless, NVIDIA retains a commanding lead in AI training and general‑purpose GPU compute. Its end‑to‑end platform—from hardware to software—remains deeply embedded in the AI ecosystem. Custom ASICs, by contrast, offer task‑specific gains but lack the breadth of software compatibility that NVIDIA enables.

Analysts caution that the AI compute landscape is evolving toward a more fragmented, mixed‑architecture model combining GPUs and ASICs. Hyperscalers' shift signals strategic recognition of rising costs, supply constraints, and performance demands. Yet they also underscore persistent obstacles: software ecosystem maturity, long development cycles, and the complexity of large‑scale deployment.

Questions remain regarding the timeframe in which hyperscalers can meaningfully shift workloads away from NVIDIA GPUs. Industry roadmaps project new ASIC deployments through 2026–27. Analysts expect GPU market share erosion may begin toward the end of the decade, provided in-house ASICs deliver consistent performance and efficiency.

The stage is set for a multi‑year contest in datacentre compute. NVIDIA faces increasing pressure from hyperscalers building bespoke chips to optimise workloads and control supply. The next evolution of AI infrastructure may look less like a GPU‑centric world and more like a diverse ecosystem of specialised, interlocking processors.


Arabian Post – 13-06-2025
macOS Embraces Linux Containers with Native Support
Apple has unveiled a breakthrough open‑source framework, Containerization, during its WWDC 2025 keynote, enabling developers to create, run and manage Linux containers directly on macOS. The Container CLI, a companion command‑line tool, operates each container as a lightweight virtual machine, bypassing the need for third‑party platforms like Docker. This marks a strategic shift in Apple's support for cross‑platform workflows, particularly for developers working on server‑side and cloud‑native applications.

Containers have become central to modern software engineering by packaging applications with all dependencies in a consistent, portable environment. Up to now, Mac users have typically relied on resource‑heavy, shared VMs to run Linux containers, often encountering sluggish performance and battery drain on Apple Silicon machines. Apple's solution leverages its own Virtualization framework and Apple Silicon optimisations to spin up sub‑second containers, each within its own minimal‑footprint VM.

Isolation and security are core pillars of Containerization. Each container receives a dedicated IP address, entirely separate CPU and memory allocations, and performs directory sharing only when explicitly requested. The container's root filesystem omits core utilities, libc, and dynamic libraries by default, a deliberate measure to reduce the attack surface. The init process, vminitd, is written in Swift and acts as the VM's first process, handling IP assignment, filesystem mounting and process supervision.

Performance gains are significant. By optimising the Linux kernel and exposing container filesystems as EXT4 block devices, Apple has achieved rapid cold‑boot speeds while maintaining low I/O overhead. Benchmarks suggest these containers outperform Docker Desktop in terms of startup time, memory footprint and CPU use on Apple Silicon systems.

Technical details from GitHub show support for OCI‑compliant images, enabling compatibility with existing registries and Kubernetes systems. The container CLI mirrors familiar commands: pulling Alpine Linux images is as simple as typing 'container image pull alpine:latest'. The project repo, licensed under Apache‑2.0, is written entirely in Swift, optimised for Apple Silicon and designed for community contribution.

Apple's launch places Containerization among strong open‑source contenders such as Podman, containerd, Buildah and Rancher Desktop. Yet the per‑container VM model marks a departure from standard shared‑kernel container runtimes, offering enhanced isolation at the cost of slightly increased base resource use. Notably, critics and users on Reddit and in industry commentary have voiced curiosity about whether this approach outclasses lightweight VM tools like Orbstack or Lima.

A potentially limiting factor is network isolation on macOS 15. Full network capabilities, including container‑to‑container traffic, are only available on the upcoming macOS 26, currently in beta and expected later in the year. Users on the earlier Sequoia release may experience restricted container networking or compatibility issues. Additionally, a Rosetta 2 bug affecting x86_64 processes in Linux VMs may impede workflows involving amd64 containers, a challenge both Apple and downstream projects like Podman are working to resolve.

The timing of Containerization aligns with a broader developer toolkit refresh. Alongside this framework, Apple introduced Swift 6.2, Xcode 26 featuring LLM integration, and Game Porting Toolkit 3. This suite reflects a strategic push to consolidate development workflows across desktop, mobile, AI and cloud environments within the Apple ecosystem.

Early adopters with Apple Silicon and access to the macOS 26 beta are already testing the CLI and framework. Feedback is mixed: some praise the speed and security enhancements, while others caution that lack of full networking and Rosetta issues may restrict use in complex container orchestration setups.

Apple's decision to open‑source Containerization is notable. It invites cross‑platform contributions, and standards compatibility via Swift and OCI means downstream projects could integrate with or build on the framework. If momentum grows, it could prompt a shift away from third‑party container tools on macOS, benefiting the entire developer ecosystem.
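To illustrate the workflow described above, the following commands sketch how a developer might pull and run an image with Apple's container CLI. Only the 'container image pull' command is quoted in the article itself; the run invocation and its flags are an assumption and may differ between releases, so the tool's own help output is the authoritative reference.

    # Pull an Alpine Linux image from a standard OCI registry
    # (this command is quoted in the article above).
    container image pull alpine:latest

    # Assumed invocation: start an interactive shell in a new container,
    # which runs inside its own lightweight VM on Apple Silicon.
    # Exact flags may vary; check the output of 'container --help'.
    container run -it alpine:latest sh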