Python 3.14 – What you need to know

Python 3.14 is right around the corner, and it brings 11 new Python Enhancement Proposals (PEPs) to the Python project! There are several other significant changes listed within the official changelog as well. So, what’s new in Python 3.14?

Python 3.14 brings a whole bunch of useful build improvements, including the discontinuation of PGP signatures under PEP 761: Python 3.14 and onwards will no longer provide PGP signatures for release artifacts, with Sigstore recommended for verifiers instead. Meanwhile, free-threaded mode (PEP 703), which was introduced in the previous version, 3.13, has undergone major enhancements. The implementations outlined in that proposal are now complete, incorporating updates to the C API, and the interpreter’s earlier temporary fixes have been replaced with long-term solutions.

The Cloudsmith team is really excited about this release and everything that comes with it! Note that some of the statuses may change between the time of this blog being posted and the final release date. Knowing that, let’s jump into all of the new features and improvements in Python 3.14.

Python 3.14 – Editor’s pick:

Here are a few of the changes that Cloudsmith employees are most excited about in this release:


PEP 761: Discontinuation of PGP signatures

“From my perspective at Cloudsmith, the move away from PGP signatures in Python is a welcome change. PGP has long struggled with usability and the burden of managing long-lived private keys, and while it was the default for artifact signing, it never felt ideal. Sigstore offers a much more practical approach, using short-lived keys tied to human-readable identities, and it’s already gaining traction across ecosystems like PyPI, NPM, Homebrew, and GitHub. We’re also seeing projects we work with, like Docker and Cloudsmith, embracing Sigstore, which reinforces that this shift is a real step forward for security and usability.”

Nigel Douglas - Head of Developer Relations


PEP 734: Multiple interpreters in the stdlib

“This was an inevitable development, in my opinion. While the CPython runtime has supported running multiple interpreters, which are independent copies of Python within the same process, for well over 20 years at this point, the feature was still limited to the C-API and thus saw little adoption due to low awareness and the absence of a standard library module. With Python 3.14 introducing the new concurrent.interpreters module, this long-standing capability is finally becoming more accessible, marking a significant step toward mainstream usage. While current limitations remain, their impact will diminish as CPython evolves and the community contributes solutions via PyPI packages.”

Shane Dempsey - Senior Software Engineer


PEP 784: Adding Zstandard to the standard library

“Python 3.14’s addition of Zstandard (via compression.zstd) is an exciting upgrade because it brings one of the fastest, most efficient modern compression algorithms directly into the standard library, alongside a new unified compression.* namespace. Zstandard, created by Facebook, consistently outperforms older formats like zlib, bzip2, and LZMA by offering both higher compression ratios and dramatically faster decompression speeds, which has made it the de facto standard across filesystems, packaging tools, and industry at large.

By including it natively, Python eliminates the confusion of multiple third-party bindings, enables direct support in core modules like tarfile and zipfile, and opens the door for faster, smaller Python wheels and other packaging improvements. The reserved compression.* namespace not only avoids naming conflicts with PyPI packages but also creates a clean, future-proof home for compression libraries, much like hashlib does for hashing. This change reflects Python’s “batteries included” philosophy while giving developers a state-of-the-art compression tool that is both practical today and well-positioned for long-term usage.”

Meghan McGowan - Product Marketing Manager


New features in Python 3.14

PEP 779: Free-threaded Python is now officially supported

Status: Accepted
Created: 13th March 2025

The Python Steering Council has accepted this enhancement proposal, which sets the requirements for advancing PEP 703, the larger, ongoing effort to make Python’s Global Interpreter Lock (GIL) optional. PEP 703 outlines a three-phase plan:

  • Phase 1 introduces an experimental GIL-free build in Python 3.13
  • Phase 2 makes this build officially supported, though still optional
  • Phase 3 makes it the default

This new feature defines the expectations for the second phase, ensuring the free-threaded build is stable, usable, and performant enough for broader adoption.

The motivation behind this shift is to balance the benefits of free-threaded Python, which offers greater concurrency, reduced latency, and expanded threading capabilities, against costs such as performance penalties, memory overhead, and ecosystem complexity. Current benchmarks show around a 10% slowdown and 15-20% more memory use compared to the traditional GIL build, both considered acceptable trade-offs for Phase 2.

The APIs introduced so far have proven stable, and further refinements are expected without breaking changes. With growing third-party support and clear criteria for performance, memory use, documentation, and API stability, Python 3.14 is targeted as the point where free-threaded Python becomes an officially supported build.
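A quick way to see which build you are running is to ask the interpreter itself. The sketch below assumes `sys._is_gil_enabled()`, which appeared in Python 3.13 alongside the free-threaded work, and falls back gracefully on older versions:

```python
import sys

def build_flavour() -> str:
    """Report whether this interpreter is a free-threaded build.

    sys._is_gil_enabled() appeared in Python 3.13 alongside the
    free-threaded work; older versions always have the GIL.
    """
    check = getattr(sys, "_is_gil_enabled", None)
    if check is None:
        return "standard GIL build (pre-3.13)"
    return "GIL enabled" if check() else "free-threaded: GIL disabled"

print(build_flavour())
```

On a free-threaded 3.14 build (`python3.14t`), the GIL can still be re-enabled at runtime with `PYTHON_GIL=1`, so this check reflects the running process, not just the binary.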

PEP 734: Standard Library Support for Isolated Python Interpreters

Status: Final
Created: 6th November 2023

Python 3.14 sees the addition of a new standard library module, concurrent.interpreters, to expose CPython’s long-standing support for multiple interpreters in a single process. Each interpreter is strictly isolated, with its own modules, classes, and variables, while sharing only certain immutable or low-level process state. Unlike threads, which share most of the same runtime state, interpreters provide stronger isolation, making them better suited for parallelism, especially now that PEP 684 introduces a per-interpreter GIL.

The new module will provide a high-level interface for creating, inspecting, and executing code in interpreters, as well as managing communication through a built-in, cross-interpreter-safe Queue. This brings powerful functionality, previously limited to the C API, into Python code. An additional concurrent.futures.InterpreterPoolExecutor will be provided to simplify concurrent workloads.

The interpreters module defines an Interpreter object with methods like exec() to run code, call() to execute functions, and prepare_main() to initialise globals. Interpreters can safely exchange data using interpreters.Queue, which supports pickle-based transfers and special handling for buffer objects like memoryview. This queue acts as a synchronisation and communication mechanism, enabling patterns like worker coordination across interpreters. For example, the snippet below, adapted from the PEP (mymodule and its helpers are hypothetical), shows multiple interpreters sharing a memoryview of large data, synchronised with a queue token:

# In the released 3.14, the module lives at concurrent.interpreters
from concurrent import interpreters
from mymodule import load_big_data, check_data  # hypothetical helpers
import threading

numworkers = 10
control = interpreters.create_queue()
data = memoryview(load_big_data())

def worker():
    interp = interpreters.create()
    # Bind the queue and the shared buffer into the interpreter's __main__
    interp.prepare_main(control=control, data=data)
    interp.exec("""if True:
        from mymodule import edit_data
        while True:
            token = control.get()   # wait for the baton
            edit_data(data)         # mutate the shared buffer in place
            control.put(token)      # hand the baton back
        """)

threads = [threading.Thread(target=worker) for _ in range(numworkers)]
for t in threads:
    t.start()

token = 'football'
control.put(token)
while True:
    control.get()
    if not check_data(data):
        break
    control.put(token)


PEP 758: Making Parentheses Optional in except and except* Blocks

Status: Final
Created: 30th September 2024

By allowing un-parenthesised except and except* blocks, Python now lets users catch multiple exception types without parentheses, so long as the as clause is not used. Previously, parentheses were mandatory around multiple exception types, a requirement carried over from Python 2.

Removing this requirement simplifies the syntax, improves readability, and makes it more consistent with other comma-separated constructs in Python, such as function arguments and tuple literals. Parentheses will still be required when using as to capture the exception instance to avoid ambiguity. The change is purely syntactic, fully backwards compatible, and does not alter exception handling semantics.
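The runnable sketch below uses the parenthesised spelling so it works on any Python 3, with the new 3.14 form shown in comments:

```python
def classify(exc: BaseException) -> str:
    try:
        raise exc
    # Pre-3.14 spelling, still valid everywhere. In 3.14 the parentheses
    # may be dropped when no `as` clause is used:
    #     except TypeError, ValueError:
    # With `as`, parentheses remain mandatory in all versions:
    except (TypeError, ValueError) as e:
        return type(e).__name__

print(classify(ValueError("bad value")))  # prints: ValueError
```

Either spelling compiles to identical bytecode; the change is purely syntactic sugar.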

PEP 649: Deferred Evaluation Of Annotations Using Descriptors

Status: Accepted
Created: 11th January 2021

Python annotations let developers attach type information and metadata to functions, classes, and modules. Traditionally, annotations were evaluated eagerly when the object was defined, which led to common problems with forward references and circular dependencies. PEP 563 attempted to fix this by “stringizing” annotations (storing them as strings), solving the forward-reference issue but breaking runtime use cases where actual objects were expected instead of strings.

A new proposal introduces a third, more flexible approach: lazily evaluating annotations through a dedicated __annotate__ method. At compile time, Python generates a helper function that constructs the annotation dictionary, then attaches it to the object as __annotate__. Accessing these annotations triggers this function, evaluates the annotations only when needed, and caches the results. This design eliminates circular-reference issues, preserves runtime usability, and allows for new introspection capabilities. Furthermore, inspect.get_annotations and typing.get_type_hints gain a new format parameter, letting users choose between evaluated values, forward-reference proxies, or source-code strings for annotations.

Here’s a simplified illustration of how this lazy evaluation model works:

# The compiler generates a helper function for each annotated object and
# attaches it as __annotate__; accessing __annotations__ then calls it
# lazily and caches the result. Simulated by hand below.
def annotate_foo():
    return {'x': int, 'y': MyType, 'return': float}

def foo(x=3, y="abc"):
    ...

foo.__annotate__ = annotate_foo

class MyType:
    ...

# Annotation access happens *after* MyType exists, so the forward
# reference resolves; in 3.14, foo.__annotations__ triggers the same call.
foo_y_annotation = foo.__annotate__()['y']
print(foo_y_annotation)  # <class '__main__.MyType'>

With this change, annotations retain their runtime meaning while still avoiding forward-reference pitfalls, superseding PEP 563’s string-based approach mentioned earlier and offering a unified solution for both static and runtime use cases.

PEP 768: Safe external debugger interface for CPython

Status: Accepted
Created: 25th November 2024

This PEP introduces a zero-overhead debugging interface for CPython, allowing debuggers and profilers to safely attach to running Python processes. By providing well-defined safe execution points within the interpreter’s evaluation loop, external tools can request code execution without modifying normal execution or introducing runtime overhead.

This mechanism enables scenarios like attaching pdb to live processes by PID, similar to gdb -p, allowing developers to inspect, evaluate, and step through running Python code interactively without stopping or restarting applications. Current approaches to live debugging in Python are risky because they rely on unsafe code injection, which can execute at arbitrary points in the interpreter, leading to crashes, memory corruption, or deadlocks. By integrating a small support structure into each thread state and extending the debug offsets table, this proposal ensures that debugging code is executed only when the interpreter is in a consistent and safe state.

The interface works by having debuggers write control information, such as the path to a Python script, into dedicated memory locations in the target process. The interpreter checks for pending debugger requests at existing safe points (via the eval_breaker), executing the provided code without interrupting critical operations like memory allocation or garbage collection.

A new sys.remote_exec(pid, script) API simplifies remote execution for external tools, while environment variables and build flags allow redistributors and administrators to disable the feature if desired. Multi-threaded considerations are handled naturally: injected code runs under the normal GIL, ensuring thread safety, and can target individual threads or all threads as needed. This approach aligns Python with other major languages in live debugging capability while maintaining safety and zero runtime overhead during normal execution.
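A minimal sketch of the new entry point, guarded so it degrades cleanly on older interpreters; the PID and script path below are hypothetical, for illustration only:

```python
import sys

def run_in_process(pid: int, script_path: str) -> str:
    """Queue a script for execution inside process `pid` at its next safe point."""
    if not hasattr(sys, "remote_exec"):
        return "sys.remote_exec requires Python 3.14+"
    try:
        sys.remote_exec(pid, script_path)
    except Exception as exc:  # broad catch: this is only a sketch
        return f"attach failed: {exc}"
    return f"script queued for PID {pid}"

# Hypothetical PID and script path, for illustration only
print(run_in_process(4242, "/tmp/dump_tasks.py"))
```

The script runs inside the target process the next time its interpreter reaches a safe point, not immediately, which is exactly what makes the mechanism crash-safe.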

PEP 761: Deprecating PGP signatures for CPython artifacts

Status: Active
Created: 8th October 2024

Again, this PEP isn’t merely a deprecation announcement. Instead, we see the introduction of an entirely new security model through Sigstore. Since Python version 3.11, CPython artifacts have been signed with both PGP and Sigstore. PGP relies on long-lived private keys, which release managers must maintain and protect for many years, creating ongoing risks and burdens. In contrast, Sigstore uses short-lived keys tied to human-readable identities via OpenID Connect, greatly simplifying the signing process while providing stronger ergonomics. Sigstore is already gaining wide adoption across ecosystems like PyPI, NPM, Homebrew, and GitHub.

This PEP will not only deprecate, but eventually discontinue, PGP signatures for CPython altogether, transitioning fully to Sigstore. Current stable releases (3.11 through 3.13) will continue offering PGP signatures until end-of-life, but new release managers (starting with Python 3.14) will not be required to generate them. Maintaining dual systems has slowed adoption of newer methods, creating a “Gordian knot” that discourages verifiers from moving forward.

By dropping PGP, CPython can encourage downstream ecosystems to fully adopt Sigstore, while still providing flexibility for extraordinary delays if needed. Though this change shifts reliance to Sigstore’s centralised infrastructure, it aligns with the security practices already expected of release managers and leverages services better suited for long-term key management. Documentation will guide integrators, such as Linux distros and package managers, on verifying artifacts with Sigstore.

PEP 765: Disallow return/break/continue that exit a finally block

Status: Final
Created: 15th November 2024

Python now discourages return, break, and continue statements that escape from a “finally” block, reviving the idea from PEP 601 (which was ultimately rejected) but this time supported by real-world data. The motivation comes from the surprising and often dangerous semantics: a return inside finally overrides any return from the try, and more critically, such statements can swallow exceptions entirely. While PEP 601 was rejected in 2019 as a style issue to be handled by linters, current evidence shows that these constructs are extremely rare in real code, almost always incorrect, and easily fixed when flagged. Popular linters already discourage the pattern, and developers typically welcome removing it.

The rationale builds on precedent: PEP 654 already forbids return, break, and continue in except* clauses due to harmful semantics, showing that restricting orthogonality is acceptable when it prevents subtle bugs. This PEP states that the Python compiler may issue a SyntaxWarning or SyntaxError when such control flow escapes a finally. CPython will begin with a SyntaxWarning in 3.14, leaving open the possibility of upgrading to SyntaxError later, while allowing other implementations to adopt stricter enforcement.

Example of disallowed code:

def f():
    try:
        ...
    finally:
        return 42  # May emit SyntaxWarning or SyntaxError

Permitted case (the control flow is actually inside, not escaping, the finally):

try:
    ...
finally:
    def f():
        return 42  # Allowed

This change nudges developers away from a confusing, bug-prone feature, reducing exception-swallowing risks while aligning with established best practices.

PEP 784: Adding Zstandard to the standard library

Status: Final
Created: 6th April 2025

Zstandard (zstd) is a modern, high-performance compression algorithm that has quickly become the de facto standard across many domains. It offers much higher compression ratios than zlib or bzip2, while still decompressing significantly faster than LZMA. Its adoption is widespread, backed by hardware implementations, integration in filesystems like ZFS and Btrfs, and formal IETF standardisation (RFC 8478). Given this maturity, efficiency, and long-term stability, this PEP proposes adding Zstandard support to Python’s standard library.

The new module will live under a compression namespace (compression.zstd) alongside re-exports of existing compression libraries (compression.zlib, compression.lzma, compression.bz2, compression.gzip). This avoids name collisions with existing PyPI packages and provides a unified location for compression tools, similar to how hashlib centralises hashing algorithms.

The compression.zstd module will provide familiar APIs, such as compress(), decompress(), ZstdFile, and incremental compressor/decompressor classes, along with Zstandard-specific features like dictionary training. The implementation is based on the well-maintained pyzstd project, with libzstd as an optional CPython dependency.

A practical benefit is enabling Zstandard in modules like tarfile and zipfile, resolving long-standing user requests. It may also improve Python package distribution (for example, wheels), as benchmarks show zstd can reduce file sizes by 30-40% and improve extraction speeds by over 2× compared to zlib.

Example usage:

try:
    from compression.zstd import compress, decompress
except ImportError:
    raise RuntimeError("Zstandard support not available")

data = b"Python loves fast compression!" * 10
cdata = compress(data)
print("Compressed size:", len(cdata))

restored = decompress(cdata)
print("Matches original:", restored == data)

This new feature both modernises the existing Python “batteries included” compression ecosystem and also lays the groundwork for future improvements in package distribution, archiving, and data handling.

PEP 741: Python configuration C API

Status: Final
Created: 18th January 2024

This new feature extends PEP 587 (Python Init Config) by introducing a higher-level, future-proof C API for configuring Python initialisation without exposing internal C structures. The new API simplifies embedding Python by unifying pre-initialisation and initialisation into a single interface and removing the distinction between “Python” and “Isolated” modes.

It also introduces functions like PyConfig_Get() and PyConfig_Set() to allow querying and modifying runtime configuration, something missing from the previous enhancement proposal, which only handled initialisation. To complete the PEP 587 design, this new feature also adds PyInitConfig_AddModule(), a modern replacement for the old “inittab,” to register built-in extension modules. The existing lower-level PyConfig API remains available for projects that need closer coupling to CPython’s internals.

The motivation comes from several pain points. Currently, there is no public API to inspect the runtime configuration, leaving embedders and developers relying on deprecated global variables without alternatives. This gap became more pressing after deprecations in Python 3.12 and discussions around security fixes, where ABI concerns limited what could be added to PyConfig in stable branches. By introducing a flexible API decoupled from direct struct access, Python gains room to evolve its configuration model without breaking ABI compatibility.

This new feature should also address the redundancy between PyPreConfig and PyConfig, which duplicate fields unnecessarily, and provides a clearer embedding story for applications such as Blender, GIMP, LibreOffice, and vim. These applications often depend on either system-installed Pythons or bundled versions in container formats (Flatpak, AppImage, Snap). A stable, extensible API would give them better control over runtime configuration and make embedding Python more robust and forward-compatible.

PEP 776: Emscripten Support

Status: Draft
Created: 18th March 2025

Emscripten has been added as a Tier 3 officially supported platform in Python 3.14, following Steering Council approval in October 2024. While still in draft stage and not guaranteed for inclusion in the final release, this effort has already resulted in more than 25 bug fixes in Emscripten’s libc and introduced support for ctypes, termios, and fcntl, along with experimental PyREPL integration. Emscripten itself is a full open-source compiler toolchain that compiles C/C++ into WebAssembly or JavaScript, enabling Python to run inside browsers and JavaScript runtimes like Node.js. Rust also maintains an Emscripten target, highlighting the platform’s broad adoption.

Pyodide, which has supported Emscripten Python since 2018, has been widely adopted in education and interactive documentation, demonstrating both the maturity and practical importance of the platform. While Pyodide distributes its builds via GitHub, npm, and jsDelivr, Emscripten Python is currently build-only and not pre-distributed. Importantly, Emscripten (alongside WASI) is one of the few supported platforms offering meaningful sandboxing, which helps reinforce its role in making Python more accessible and secure in the browser, the most universal computing environment across desktops and mobile devices.

Additional Improvements in Python 3.14

A new type of interpreter
Contribution: gh-128563
Status: Closed

A new interpreter has been added to CPython that replaces the traditional large C switch statement with tail calls between small C functions implementing individual Python opcodes. On supported compilers, this design can deliver notable performance improvements (up to 30% in some cases), with a more conservative geometric mean speedup of 3-5% on pyperformance (Python Performance) benchmarks when compared to Python 3.14 built with Clang 19.

Currently, the interpreter works only with Clang 19+ on x86-64 and AArch64, though GCC support is expected in the future. The feature is opt-in and best used with profile-guided optimisation, as this is the only tested configuration. Importantly, this change is an internal implementation detail that does not alter Python’s visible behaviour. Earlier reports of 9-15% speedups were revised downward after a Clang/LLVM 19 compiler bug was discovered, which had skewed results.


Syntax highlighting in PyREPL
Contribution: gh-131507
Status: Closed

The default interactive Python shell now provides syntax highlighting as you type, enabled by default unless the PYTHON_BASIC_REPL or other colour-disabling environment variables are set. Its default theme uses 4-bit VGA standard ANSI color codes for maximum compatibility and good contrast, but can be customised through the experimental _colorize.set_theme() API, which can be invoked interactively or in the Python startup script.


Binary releases for experimental JIT compiler
Contribution: pep-744
Status: Closed

The latest macOS and Windows release binaries now ship with an experimental Just-In-Time (JIT) compiler, which can be enabled by setting the PYTHON_JIT=1 environment variable or, for downstream builds, using --enable-experimental-jit=yes-off. Still in active development, the JIT may deliver performance ranging from 10% slower to 20% faster depending on workload.

To support evaluation, Python provides sys._jit introspection functions: is_available() checks whether JIT support exists in the current executable, while is_enabled() confirms whether it is active for the running process. Some limitations remain: most notably, native debuggers and profilers such as gdb and perf cannot unwind through JIT frames (though Python tools like pdb and profile continue to work), and free-threaded builds do not support JIT compilation.
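The sketch below wraps those introspection calls, guarded so it also runs on interpreters that predate sys._jit:

```python
import sys

def jit_status() -> str:
    """Summarise JIT state via sys._jit (new in 3.14); safe on older versions."""
    jit = getattr(sys, "_jit", None)
    if jit is None:
        return "no JIT introspection (pre-3.14 interpreter)"
    if not jit.is_available():
        return "JIT not built into this executable"
    return "JIT enabled" if jit.is_enabled() else "JIT available but disabled"

print(jit_status())
```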


Concurrent safe warnings control
Contribution: gh-130010
Status: Merged

In recent updates, the warnings.catch_warnings context manager can now rely on a context variable to handle warning filters. This feature is activated through the context_aware_warnings flag, which can be toggled via the -X command-line option or an environment variable. By doing so, applications gain more consistent and isolated control over warnings, particularly in environments with multiple threads or asynchronous execution. The default setting depends on the build: it’s enabled by default for the free-threaded variant, and disabled for the GIL-enabled one.
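The familiar catch_warnings pattern is unchanged; what the flag changes is where the filter state lives. A small sketch of the pattern that benefits:

```python
import warnings

def quiet_call() -> str:
    # Filter changes inside catch_warnings are undone on exit; with
    # -X context_aware_warnings (3.14) they are also isolated to the
    # current thread or async task via a context variable.
    with warnings.catch_warnings():
        warnings.simplefilter("ignore", DeprecationWarning)
        warnings.warn("old API", DeprecationWarning)
    return "deprecation warning suppressed"

print(quiet_call())
```

Without the flag, two threads entering catch_warnings concurrently mutate the same global filter list, which is exactly the race this change eliminates.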

From a security standpoint, this matters because warnings can influence how developers detect and respond to potential misuses of APIs, deprecated features, or unsafe behaviours. In concurrent or async-heavy applications, non-deterministic warning handling could conceal important signals or leak them across execution contexts. Having thread- or task-local control reduces the risk of warnings being missed, suppressed unintentionally, or exposed to the wrong consumer, all of which strengthens the reliability of diagnostic and security tooling.


Incremental garbage collection
Discussion Link
: here

The cycle garbage collector is now incremental, which drastically reduces pause times (by an order of magnitude or more) on fairly large heaps. Python has also simplified its memory model to just two generations, young and old. Instead of collecting entire generations, the GC now runs less frequently and, when triggered, collects the young generation plus a small increment of the old one. This also changes the behaviour of gc.collect(1), which now performs an incremental step instead of collecting generation 1, though other gc.collect() calls remain the same.
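The call surface is unchanged, so the behavioural difference is easy to observe from existing code; this snippet is valid on every Python 3 version, but on 3.14 `gc.collect(1)` performs an incremental step rather than collecting a distinct generation 1:

```python
import gc

# In 3.14, gc.collect(1) performs one incremental step over the old
# generation rather than collecting "generation 1"; the call itself
# remains valid on every Python 3 version.
unreachable = gc.collect(1)
print("unreachable objects found:", unreachable)
print("collection thresholds:", gc.get_threshold())
```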

This is an exciting development for developers because it makes Python’s garbage collection far more predictable and less disruptive for applications with large memory footprints, such as data processing, machine learning, or high-performance AI web services. Shorter, incremental pauses mean smoother performance, fewer latency spikes, and better scalability for memory-intensive workloads.


Asyncio introspection capabilities
Feature Link:
Call Graph Introspection

A new command-line interface has been introduced to make inspecting running Python processes easier and more transparent, specifically for applications using asynchronous tasks. Administrators and developers can point the following command at a process ID (PID) to view a detailed table of all active asyncio tasks:

python -m asyncio ps PID

The output provides a flat listing of task names, their coroutine stacks, and the dependencies between tasks. For a more visual perspective, a companion command renders the same information as a hierarchical call tree, mapping coroutine relationships and highlighting how tasks interact:

python -m asyncio pstree PID

This view is particularly valuable for diagnosing long-running or stuck asynchronous programs, as it helps pinpoint which tasks are blocking progress, which are awaiting results, and how execution flows across coroutines.

From a security and operational standpoint, these tools offer significant advantages to the community. They provide greater visibility into running Python processes, reducing the risk of “black box” async applications that are difficult to audit or debug in production. By exposing exactly what tasks are active and where execution may be stalled, administrators can more confidently detect anomalies such as runaway tasks, unexpected coroutine chains, or processes behaving outside their expected design.

This transparency helps identify performance bottlenecks before they escalate into outages, supports faster root cause analysis during incidents, and ensures safer operation of async-based services in shared or multi-tenant environments. In short, these commands empower teams with actionable insights into runtime behaviour, strengthening both reliability and trust in Python-based systems.


Timeline of the Python 3.14 Release

As always, it’s recommended that maintainers of third-party Python packages begin preparing for Python 3.14 during the release candidate 3 stage. Publishing Python 3.14 wheels to PyPI ahead of the official 3.14.0 release will ensure compatibility and make it easier for other projects to test their own code.

Binary wheels created with Python 3.14.0 release candidates will remain compatible with future 3.14 versions. If you encounter any problems, please report them before and after the 3.14 release on October 7th through the official Python issue tracker → https://github.com/python/cpython/issues

Python v3.14 Release Information

  • 17 June 2025 – 3.14.0b3 (Beta 3): Third beta release. Betas are for testing new features before the release-candidate phase; no new features are added after beta, only bug fixes.
  • 14 August 2025 – 3.14.0rc2 (Release Candidate 2): Second release candidate; only bug fixes allowed. Released earlier than planned due to a .pyc bytecode magic number change (rc1 .pyc files are incompatible with rc2). The ABI remains stable.
  • 16 September 2025 – 3.14.0rc3 (Release Candidate 3): The final planned release candidate; only reviewed bug fixes are permitted. Used for last-minute tests before the final release.
  • 7 October 2025 – 3.14.0 (Final Release): Official, stable release of Python 3.14. Production-ready and suitable for all users.


Release Notes: https://docs.python.org/3.14/whatsnew/3.14.html

