The Widening Gap: React Native's Struggle with Native UI

Summary

Last year, I tried to predict React Native's future, anticipating widespread adoption of its new architecture, advancements in animation libraries like Reanimated, and a smoother developer experience through React 19 and React Compiler. A year later, significant progress has indeed been made, especially within the thriving Expo ecosystem, which continues to enhance accessibility and visual quality for mobile apps. However, React Native's core faces notable challenges. Integrating React 19 and React Compiler remains problematic, particularly with Android compatibility and conflicts arising from dependencies like Reanimated. Moreover, a fundamental mismatch has emerged between React's virtual DOM model, which depends on imperatively mutating host views, and the declarative paradigms of modern native UI frameworks like SwiftUI and Jetpack Compose. These differences create severe performance bottlenecks, making it increasingly difficult to maintain native-level fidelity and responsiveness. With platforms like iOS 19 rumored to shift toward SwiftUI-first designs, React Native risks falling further behind unless substantial architectural changes are made. Teams embarking on new projects should carefully evaluate these challenges, as the choice of React Native could significantly impact future maintainability, performance, and user experience.


Last year, I wrote an article about the future of React Native in 2024 and beyond - https://www.adapptor.com.au/blog/whats-next-for-react-native-in-2024. Some of my core predictions were that the new architecture would become the standard, Reanimated would become the de facto animation library, React 19 and React Compiler would significantly improve the developer experience, and React Native would start converging with React on the web for key API behaviors.

So where are we now, and how are things progressing? Starting with the positives, the Expo ecosystem continues to grow with high-quality packages covering common native APIs, a streamlined development experience, and interesting experiments like React Server Components and Expo DOM Components. Without doubt, the Expo team is doing incredible work to make mobile development more accessible to newcomers. Combined with high-quality libraries like Reanimated, developers can make React Native apps look better than ever.

At the same time, the state of the core React Native framework is becoming increasingly concerning. First, the React 19 and React Compiler transition is rough at the moment. On the web, the React 19 and React Compiler combination relies on numerous assumptions about underlying React and DOM behaviors. Because native platforms have radically different view systems, making React 19 features available in React Native projects has proven to be a massive challenge. It is somewhat usable on iOS today, but the Android implementation remains highly unstable. Even worse, Reanimated, which virtually every complex React Native project relies on for animations, conflicts with React Compiler. If the promise is "learn once, write anywhere," React Native's divergence from mainstream React makes it difficult to keep.
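To make the friction concrete, here is a minimal, hypothetical Reanimated component of the kind where these conflicts tend to surface. The component itself is an illustrative assumption rather than a documented failure case, but the Reanimated APIs used (useSharedValue, useAnimatedStyle, withTiming) are the standard ones:

```tsx
// Hypothetical Reanimated component of the kind that sits awkwardly with React
// Compiler's auto-memoization: the worklet below closes over a mutable shared
// value that is updated outside React's render cycle, so the compiler's
// assumptions about pure, memoizable render code don't necessarily hold for it.
import React from 'react';
import { Pressable } from 'react-native';
import Animated, {
  useSharedValue,
  useAnimatedStyle,
  withTiming,
} from 'react-native-reanimated';

export function SlideInCard() {
  // Shared values live on the UI thread and are mutated imperatively.
  const offset = useSharedValue(0);

  // This worklet re-runs on the UI thread whenever `offset` changes.
  const animatedStyle = useAnimatedStyle(() => ({
    transform: [{ translateX: offset.value }],
  }));

  return (
    <Pressable onPress={() => { offset.value = withTiming(100); }}>
      <Animated.View style={[{ width: 120, height: 80 }, animatedStyle]} />
    </Pressable>
  );
}
```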

This divergence points to a deeper architectural challenge that lies at the heart of React Native's current struggles: the fundamental mismatch between React's imperative approach to UI updates and the increasingly declarative nature of modern native UI frameworks like SwiftUI and Jetpack Compose.

The Architectural Mismatch: React DOM vs. Modern Native UI Frameworks

React's core philosophy was revolutionary for web development because it introduced a virtual DOM abstraction that handled the complex task of updating the actual DOM efficiently. This works because React's reconciliation engine operates on a set of foundational assumptions about how UI rendering works. It maintains a tree of Fiber nodes that directly correspond to DOM elements, can update properties on these elements directly, persists component instances between renders, and manages its own synthetic event system.
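As a rough sketch of that contract (deliberately simplified, not React's actual reconciler), the commit phase boils down to diffing props and imperatively patching a persistent host node that React holds a reference to:

```ts
// Deliberately simplified stand-in for React's commit phase: diff old and new
// props, then imperatively mutate a persistent host node. The real reconciler is
// far more involved, but the underlying contract is the same: it needs a mutable
// node it can hold onto and patch between renders.
type Props = Record<string, string | number | null>;

function patchHostNode(node: HTMLElement, prevProps: Props, nextProps: Props): void {
  // Remove attributes that disappeared between renders.
  for (const key of Object.keys(prevProps)) {
    if (!(key in nextProps)) node.removeAttribute(key);
  }
  // Set attributes that were added or changed.
  for (const [key, value] of Object.entries(nextProps)) {
    if (prevProps[key] !== value && value != null) {
      node.setAttribute(key, String(value));
    }
  }
}
```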

In React Native, this model is preserved through the "Shadow Tree" and "Host View" architecture. JavaScript maintains a virtual representation of the UI, and a native bridge translates these representations into actual UIKit or Android View commands. Native views are then created, manipulated, and destroyed based on these commands. This architecture works because both UIKit and Android Views expose imperative mutation APIs that React Native can leverage to update the UI.
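Conceptually, the renderer's output is a stream of imperative mutation commands against tagged native views. The sketch below uses illustrative pseudo-commands, not the real bridge or Fabric protocol, but it captures the shape of the contract that UIKit and Android Views can satisfy and that SwiftUI and Compose cannot:

```ts
// Illustrative pseudo-commands (not the actual bridge protocol) of the kind the
// React Native renderer emits against host views. Both UIKit and Android Views
// can satisfy them because they expose persistent, mutable view objects.
type NativeCommand =
  | { kind: 'createView'; tag: number; viewName: string; props: object }
  | { kind: 'updateView'; tag: number; props: object }
  | { kind: 'setChildren'; parentTag: number; childTags: number[] }
  | { kind: 'deleteView'; tag: number };

// A text color change ends up as a targeted mutation of an existing native view:
const commands: NativeCommand[] = [
  { kind: 'createView', tag: 11, viewName: 'RCTText', props: { text: 'Hello' } },
  { kind: 'setChildren', parentTag: 1, childTags: [11] },
  { kind: 'updateView', tag: 11, props: { color: '#ff0000' } },
];
```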

However, modern UI frameworks like SwiftUI and Jetpack Compose operate on fundamentally different principles. These frameworks use a declarative approach where views are lightweight, immutable value types. The framework automatically tracks state dependencies, updates are based on structural identity rather than object references, and the actual UI elements are fully owned and managed by the platform. Crucially, these declarative frameworks have become the standard delivery mechanism for new platform features and capabilities. Apple and Google now prioritize these frameworks for implementing widgets, Dynamic Island interactions, complex shader effects, and advanced spatial interfaces. Platform innovations like VisionOS spatial computing features are primarily exposed through SwiftUI APIs, with legacy frameworks receiving limited support or compatibility layers at best. Similarly, Android's latest Material 3 components and adaptive UI patterns appear first in Compose, with their ViewGroup counterparts often arriving later or with reduced functionality.

In SwiftUI, for example, you never have direct access to the underlying view instances. You can't get a reference to a Text or VStack component to update it imperatively after it's been created. Instead, all updates happen through state changes that trigger re-evaluation of the entire view hierarchy, with the framework intelligently determining what actually needs to change in the underlying UI.

This creates several critical technical challenges when attempting to bridge React's model to these modern frameworks. React Native can't implement imperative property updates for SwiftUI views because there's no way to get a reference to the actual view instance. The layout systems are incompatible, with React Native computing layouts in JavaScript and then applying them to native views, while SwiftUI handles its own layout through an internal system. Component lifecycles, event propagation, and state management models all clash fundamentally between these paradigms.
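For contrast, this is the kind of reference-based access React Native enjoys over UIKit and Android Views today. The snippet is a hypothetical example, but measureInWindow is a standard host-view method, and it only works because a persistent native view instance sits behind the ref, which is exactly what SwiftUI never exposes:

```tsx
// Hypothetical snippet showing the reference-based escape hatches React Native
// has with UIKit/Android Views: grab a handle to the underlying host view and
// query it directly. SwiftUI offers no equivalent per-view handle.
import React, { useRef } from 'react';
import { View } from 'react-native';

export function MeasuredBox() {
  const boxRef = useRef<View>(null);

  const inspect = () => {
    // Ask the native view for its on-screen frame. This is only possible
    // because a persistent native view instance exists behind the ref.
    boxRef.current?.measureInWindow((x, y, width, height) => {
      console.log({ x, y, width, height });
    });
  };

  return <View ref={boxRef} onLayout={inspect} style={{ width: 100, height: 100 }} />;
}
```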

Performance Limitations of Mixing UI Paradigms

When attempting to bridge the gap between imperative UIKit/Android Views and declarative SwiftUI/Compose, the React Native team faces significant performance challenges. Each time you cross a boundary between these systems, such as embedding a SwiftUI view inside a UIKit container or vice versa, several costly operations occur that impact performance.

The first major issue is rendering pipeline boundary crossings. UIKit components are rendered through the traditional Core Animation layer-based approach with manual layout calculations, while SwiftUI components are rendered through SwiftUI's internal rendering system with its own layout pass and diffing algorithm. In a React Native context, these boundary crossings multiply since React Native itself adds another layer of indirection. Each React Native component might need to compute layout in JavaScript, serialize commands across the bridge, create appropriate native views, manage complex parent-child relationships between different view paradigms, and coordinate state updates across multiple rendering systems.

Memory and resource overhead is another significant concern. When mixing view systems, the same logical UI element might need representations in JavaScript, UIKit/Android Views, and SwiftUI/Compose simultaneously. Hosting controllers and compatibility wrappers consume additional memory, and state might be stored in multiple places to facilitate synchronization. This is particularly problematic on memory-constrained devices where a seemingly simple screen might consume significantly more resources than its native equivalent.

Perhaps the most noticeable issue for users is rendering and layout desynchronization. SwiftUI and UIKit have different layout pass timings and behaviors, their animation systems don't automatically coordinate with each other, and they handle display refresh and high frame rates differently. In practice, this manifests as jittery animations when crossing system boundaries, layout glitches during orientation changes, inconsistent gesture response times, and dropped frames in scrolling performance, particularly in virtualized lists. Implementing a smooth 60fps scrolling list with mixed SwiftUI and UIKit components becomes nearly impossible because the two systems optimize rendering differently.

As mobile platforms advance toward more sophisticated visual effects like blur, materials, and complex animations, these problems compound further. SwiftUI and Compose use newer graphics systems that aren't directly compatible with legacy views. Effects like blur, shadows, and masks don't propagate correctly across system boundaries, and GPU acceleration might be lost at these boundaries. For example, applying a blur effect to a container with mixed UIKit and SwiftUI children becomes extremely difficult or impossible, because the blur effect in SwiftUI operates differently than UIVisualEffectView.

You might say that the React Native core team is trying to fix these issues by mimicking web-style API behaviors on mobile. Indeed, with substantial investment, it may be possible to achieve this by building on top of UIKit and Android Views. Yet it's becoming evident that we are gradually moving toward Compose and SwiftUI-first native ecosystems. The canary in the coal mine for this is how VisionOS is a SwiftUI-first platform. Yes, you can use UIKit to port existing apps, but you are severely limited in using spatial interactions. The latest rumors about iOS 19 indicate that we might see the biggest redesign of the OS since iOS 7, and it's safe to say that most UI updates will be powered exclusively by SwiftUI.

What options do you have as a team trying to build a React-style virtual DOM on top of either Compose or SwiftUI? There's nothing for the diffing engine to mutate directly, and there are no persistent node references to manipulate. All view definitions are ephemeral and disposable; the actual rendered UI is owned by the framework behind the scenes. That makes virtual DOM diffing as we know it in React non-viable. The only real option is the official compatibility layers for embedding SwiftUI and Compose views inside UIKit and Android view hierarchies (UIHostingController and ComposeView, respectively). The biggest issue is that these were designed to be used on a per-screen basis rather than per-component. As a result, a screen mixing SwiftUI views, Compose views, and UIKit/Android views will have major performance issues.

Furthermore, the legacy systems and the new declarative systems use distinctly different graphics and composition APIs, making it impossible to apply effects like blur or custom shaders to a view with a mix of SwiftUI and UIKit children. Consequently, even if the React Native core team eventually builds a sufficiently robust abstraction on top of legacy views, the end result will diverge further and further from a true native experience, requiring the React Native community to reimplement from scratch all the components that SwiftUI developers get for free. At that point, the argument that React Native is more native than, say, Flutter will lose its relevance, since both will essentially draw and lay out their own custom components instead of composing the platform's current native primitives.

Finally, the rumors about a VisionOS-like iOS 19 made me curious to evaluate the state of advanced visual effects in React Native. Currently, the go-to way to use blur effects is the expo-blur library. Overall, it works well on iOS but is almost completely unusable on Android, where it remains experimental and carries a significant performance cost. The problem is that blur effects have been part of UIKit since iOS 7, whereas Android only gained a performant blur API (RenderEffect) in Android 12, which sits at around 50% market adoption. React Native apps typically have to support 80-90% of Android devices, making it impossible to raise the minimum supported Android version enough to rely on these effects.
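For reference, usage looks roughly like this. It is a sketch based on the expo-blur docs; the Android-specific experimentalBlurMethod prop reflects my reading of recent SDK versions and may differ in yours:

```tsx
// Minimal expo-blur usage sketch. On iOS this maps onto UIKit's visual effect
// machinery; on Android it relies on an experimental opt-in and carries a real
// performance cost, as described above.
import React from 'react';
import { Text } from 'react-native';
import { BlurView } from 'expo-blur';

export function FrostedHeader() {
  return (
    <BlurView
      intensity={60}
      tint="light"
      // Android blur is opt-in and experimental; prop name per recent expo-blur docs.
      experimentalBlurMethod="dimezisBlurView"
      style={{ padding: 16 }}
    >
      <Text>Frosted header</Text>
    </BlurView>
  );
}
```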

As a result, if iOS makes a shift toward VisionOS-like UIs, React Native developers will have two choices: either have significantly divergent UIs for iOS and Android, or reduce UI fidelity to a common denominator that will substantially lag behind the fidelity of true native experiences. Given the stability and performance bottlenecks that React Native presents to begin with, it will become more difficult to justify using it for new projects. That would leave us with either Flutter (which itself might have limitations on advanced effects, yet no fundamental architectural constraints on fixing them) or adopting SwiftUI and Jetpack Compose, with AI-assisted coding helping to achieve delivery speed comparable to shipping with cross-platform tools but without sacrificing performance and fidelity.

Overall, it's extremely interesting to see how the React Native core team will handle these challenges going forward. It's not hard to imagine developers eventually leaning more on the Flutter-style React Native Skia library for advanced effects within React Native projects. Yet the fundamental split between where native platforms are heading and what React requires from a platform might kill the dream of React powered by truly native primitives underneath. If you're starting a new project, the technical risks described above are an important consideration. If you have an existing project in React Native, buckle up: it's going to be a rough ride.
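For context, that Skia-drawn approach looks roughly like the sketch below (based on @shopify/react-native-skia; component names and props are my best reading of its docs). Everything inside the Canvas is rasterized by Skia itself rather than composed from platform views, which is exactly why it sidesteps the boundary problems above, and also why it stops being native primitives underneath:

```tsx
// Sketch of the Flutter-style approach with @shopify/react-native-skia: the blur
// is computed inside Skia's own rendering pipeline, not by UIKit or Android,
// so it works identically on both platforms but draws nothing with native views.
import React from 'react';
import { Canvas, Group, BlurMask, Circle } from '@shopify/react-native-skia';

export function GlowingDot() {
  return (
    <Canvas style={{ width: 200, height: 200 }}>
      <Group>
        {/* Mask filter applied within Skia before the circle is drawn. */}
        <BlurMask blur={10} style="normal" />
        <Circle cx={100} cy={100} r={60} color="#3b82f6" />
      </Group>
    </Canvas>
  );
}
```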

About me

Elliot Tikhomirov

I am an experienced full-stack software engineer with over 7 years of commercial experience, specialising in .NET, Azure, React, and Flutter development. My expertise spans from architecting enterprise-level applications to implementing cutting-edge AI solutions, always ensuring that technical implementation aligns with business objectives and user needs. As an Azure-certified developer (AZ-204), I bring deep knowledge of cloud services and proficiency in developing cloud-native applications that drive innovation and efficiency.

© 2025 Elliot Tikhomirov
