As 2025 comes to a close, SD Times is looking back at the top software development news stories of the year across the industry. Here are 10 of what we believe to be the biggest stories we covered throughout the year:
Linux Foundation forms Agentic AI Foundation to be new home for MCP, goose, and AGENTS.md
The Linux Foundation in December announced that it is forming the Agentic AI Foundation (AAIF) to promote the transparent and collaborative evolution of agentic AI.
Three major projects have been donated to the foundation at launch: Anthropic’s Model Context Protocol (MCP), Block’s goose, and OpenAI’s AGENTS.md. Additionally, AAIF member Obot.ai will donate its MCP Dev Summit events and podcast to the foundation.
The AAIF is launching with more than 40 members, including platinum members Amazon, Anthropic, Block, Bloomberg, Cloudflare, Google, Microsoft, and OpenAI; gold members Adyen, Arcade.dev, Cisco, Datadog, Docker, Ericsson, IBM, JetBrains, Okta, Oracle, Runlayer, SAP, Snowflake, Temporal, Tetrate, and Twilio Inc.; and silver members Chronosphere, Cosmonic, Elasticsearch, Eve Security, Hugging Face, Kubermatic, KYXStart, LanceDB, NinjaTech AI, Obot.ai, Prefect.io, Pydantic, Shinkai.com, Spectro Cloud, Stacklok, SUSE, Uber, WorkOS, and ZED.
“We are seeing AI enter a new phase, as conversational systems shift to autonomous agents that can work together. Within just one year, MCP, AGENTS.md and goose have become essential tools for developers building this new class of agentic technologies,” said Jim Zemlin, executive director of the Linux Foundation. “Bringing these projects together under the AAIF ensures they can grow with the transparency and stability that only open governance provides. The Linux Foundation is proud to serve as the neutral home where they will continue to build AI infrastructure the world will rely on.”
Microsoft announces release of .NET 10 (LTS)
Microsoft in November announced the release of .NET 10, the latest Long Term Support (LTS) release of .NET that will receive support for the next three years. As such, Microsoft is encouraging development teams to migrate their production applications to this version to take advantage of that extended support window.
This release features several performance improvements across the runtime, workloads, and languages. For instance, the JIT compiler gains better inlining, method devirtualization, and improved code generation for struct arguments. Additionally, enhanced loop inversion and stack allocation strategies further improve runtime performance.
Several language improvements were made to C# and F# as well. C# 14 introduces field-backed properties to simplify property declarations, extension properties and methods that let developers add members to types they don’t own, and more. In F# 10, improvements include the ability to use #warnon and #nowarn to enable or disable warnings in specific code sections, and to create publicly readable, privately mutable properties without verbose backing fields.
Wasm 3.0 standard is now officially complete
Version 3.0 of the WebAssembly (Wasm) standard was declared complete in September and is now considered the “live” standard for Wasm. The announcement came three years after the completion of Wasm 2.0, which added features like vector instructions, bulk memory operations, multiple return values, and simple reference types.
According to the Wasm W3C Community Group and Working Group, this is a substantial update compared to 2.0, and several of the features that are now available were in the works for six to eight years.
Wasm 3.0 supports 64-bit addressing, meaning that memories and tables can use i64 in addition to i32 as their address type. This expands the available address space from 4 gigabytes to 16 exabytes, in theory; in practice, hardware and use cases are now the limiting factor, such as the web limiting 64-bit memory to 15 gigabytes. “The new flexibility is especially interesting for non-web ecosystems using Wasm, as they can support much, much larger applications and data sets now,” the working group wrote in a post.
Another new feature is the ability for a single module to declare and access multiple memories. It was previously possible for Wasm apps to use multiple memory objects at the same time, but only by declaring and accessing them in separate modules.
Wasm 3.0 also adds garbage collection, tail calls, exception handling, relaxed vector instructions, deterministic default behavior for instructions with non-deterministic results, and custom annotation syntax.
GitHub launches MCP Registry to provide central location for trusted servers
GitHub’s MCP Registry, launched in September, provides developers with a curated directory of MCP servers.
“If you’ve tried connecting AI agents to your development tools, you know the pain: MCP servers scattered across numerous registries, random repos, buried in community threads — making discovery slow and full of friction without a central place to go. Meanwhile, MCP server creators are worn out from publishing to multiple places and answering the same setup questions again and again,” GitHub wrote in a blog post.
Each server in the Registry is linked to its own GitHub repository, and servers can be sorted by GitHub stars and community activity.
According to GitHub, this backing builds trust in specific MCP servers, leading to a healthier overall AI ecosystem.
Meta to donate React and React Native to the Linux Foundation
In October at React Conf, Meta announced that it would be donating its JavaScript UI libraries React and React Native to the Linux Foundation, which will be forming the React Foundation to support these libraries.
The React Foundation will include founding members Amazon, Callstack, Expo, Meta, Microsoft, Software Mansion, and Vercel. Its executive director will be Seth Webster, who is currently the head of React at Meta.
According to the Linux Foundation, once the new foundation is formed, Meta will contribute the libraries and then the new organization will provide governance, manage core infrastructure, organize events (including React Conf), and launch new programs that encourage community collaboration.
Java 25 LTS is now available with features like module import declarations, compact source files
Java 25 was released in September as the latest Long Term Support (LTS) version of the language, meaning it will be supported by Oracle for at least eight more years.
This release introduces several stable language features, including module import declarations, compact source files and instance main methods, and flexible constructor bodies.
Module import declarations allow developers to import all of the packages exported by a module, without the importing code itself needing to be part of a module. This functionality will make it easier for developers to reuse libraries, and it also helps newer Java developers use third-party libraries and Java classes without needing to learn where they sit in a package hierarchy.
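As a rough illustration (not taken from the Java 25 release notes, and with a hypothetical class name), the sketch below shows a single module import declaration standing in for several individual package imports:

```java
// Demo.java — assumes a Java 25 compiler. 'import module' (JEP 511) brings in
// every package exported by the named module, here java.base.
import module java.base;

public class Demo {
    public static void main(String[] args) {
        // List, HashMap, and Path live in different java.base packages
        // (java.util and java.nio.file), yet no per-package imports are needed.
        List<String> names = List.of("Ada", "Grace");
        Map<String, Integer> lengths = new HashMap<>();
        names.forEach(n -> lengths.put(n, n.length()));
        Path cwd = Path.of(".");
        System.out.println(lengths + " in " + cwd.toAbsolutePath());
    }
}
```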
Compact source files and instance main methods allow students to write smaller programs without first needing to learn about language features designed for large codebases. “This has been previewed three or four times, and it’s going as a final feature now,” said Chad Arimura, VP of Java developer relations at Oracle. “It’s all about making the language more concise for new learners and students and people who want to write scripts in Java.”
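As a minimal sketch of what that looks like, the hypothetical file below is a complete Java 25 program: there is no class declaration, no static modifier, and no String[] args, and the java.lang.IO class handles console output.

```java
// HelloWorld.java — a compact source file with an instance main method (JEP 512).
void main() {
    IO.println("Hello, World!");
}
```

Saved as HelloWorld.java, it can be run directly with java HelloWorld.java.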
Flexible constructor bodies allow input validation and other safe computations to be performed before invoking the superclass constructor. According to Oracle, this change enables constructors to be expressed more naturally, and it also allows fields to be initialized before they become visible to other code in the class.
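To make that ordering concrete, here is a hedged sketch (the Person and Employee classes are hypothetical) of a constructor that validates its arguments and assigns a field before calling super():

```java
// Flexible constructor bodies (JEP 513): statements may appear before super(...).
class Person {
    final String name;
    Person(String name) { this.name = name; }
}

class Employee extends Person {
    final String id;

    Employee(String name, String id) {
        // Validate before the superclass constructor runs, so no partially
        // constructed Employee can ever be observed.
        if (id == null || id.isBlank()) {
            throw new IllegalArgumentException("id must not be blank");
        }
        this.id = id;  // fields of this class may be assigned in the prologue
        super(name);   // the explicit superclass constructor call comes last
    }
}
```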
PostgreSQL 18 adds asynchronous I/O to improve performance
PostgreSQL 18 was released in September, with several new features like asynchronous I/O, better post-upgrade performance, and improved text processing.
Asynchronous I/O allows PostgreSQL to issue multiple I/O requests at the same time rather than waiting for one to finish before starting the next. According to the PostgreSQL team, this improves overall throughput, and has resulted in performance gains of up to 3x in some scenarios.
Previously, PostgreSQL used operating system readahead mechanisms for data retrieval, but since the operating system didn’t have insight into database-specific access patterns, it couldn’t always anticipate what data would be required, resulting in suboptimal performance across many workloads. Asynchronous I/O was created to address that limitation, the team explained.
Red Hat announces Advanced Developer Suite
At its Summit event in May, Red Hat announced Red Hat Advanced Developer Suite, which the company said was designed to make developers more productive and their applications more secure.
The Advanced Developer Suite includes Red Hat Developer Hub, an internal developer portal (IDP) built on the Cloud Native Computing Foundation project Backstage. The Developer Hub has software templates for AI scenarios ready for deployment on OpenShift AI, the company wrote in its announcement. Those templates, it said, leverage Red Hat AI solutions “that consist of pre-architected and supported approaches to building and deploying AI-enabled services or components” that developers can use without having to understand the technology used to implement them. Some common use cases for development include chatbots, audio-to-text, code generation, and retrieval-augmented generation.
Two other pieces of the Developer Suite are Red Hat Trusted Profile Analyzer and Trusted Artifact Signer. The Profile Analyzer is used to manage software bills of materials and vulnerabilities to give developers and DevOps teams the risk intelligence they need to ensure the applications are secure. The Artifact Signer offers cryptographic signing and artifact verification via the Sigstore project.
Docker Compose gets new features for building and running agents
Docker in July updated Compose with new features that will make it easier for developers to build, ship, and run AI agents.
Developers can define open models, agents, and MCP-compatible tools in a compose.yaml file and then spin up an agentic stack with a single command: docker compose up.
Compose integrates with several agentic frameworks, including LangGraph, Embabel, Vercel AI SDK, Spring AI, CrewAI, Google’s ADK, and Agno.
It also now integrates with Google Cloud Run and Microsoft Azure Container Apps, allowing agents to be deployed to serverless environments.
Upcoming Kotlin language features teased at KotlinConf 2025
At KotlinConf 2025 in May, JetBrains teased some of the new features that are coming to Kotlin in the next update to the language.
“From exciting language and ecosystem updates and robust AI tools that empower Kotlin development to major Kotlin Multiplatform milestones and a strategic partnership for the backend, KotlinConf 2025 brought a wave of news that set the tone for the year ahead,” JetBrains wrote in a blog post.
In Kotlin 2.2, developers can look forward to guard conditions in when-with-subject, multi-dollar interpolation, non-local break and continue, and context parameters.
JetBrains also revealed some language features that will be added to future releases after 2.2, including positional destructuring, name-based destructuring, enhanced nullability, rich errors, must-use return values, and ‘CheckReturnValue.’
Read our top analysis and opinion pieces of 2025 here.