LG Innotek Next-Gen Under-Display Driver Monitoring Camera at CES 2026

LG Innotek’s new under-display camera module promises to hide driver-facing cameras behind the instrument cluster display while claiming near‑parity with unobstructed cameras — a development that could reshape cabin design, driver privacy debates, and the supplier race to meet tightening regulatory demands for driver monitoring systems (DMS).

Background / Overview

Driver Monitoring Systems (DMS) are moving from optional comfort features to mandated safety systems in many regions. Regulators and NCAP programs have tightened requirements that push automakers to deploy reliable in‑cabin sensing that can detect drowsiness, gaze, and distraction; the European General Safety Regulation (GSR) and related implementing rules now require more advanced DMS capabilities for new vehicles, with the most stringent rules phasing in for all new vehicles on or after July 7, 2026.

At the same time, OEMs and interior designers are prioritizing clean, minimalist cockpits: visible, protruding cameras are perceived as aesthetic compromises and raise privacy concerns for buyers. That design tension — between regulatory/functional needs and interior elegance — is the market opportunity LG Innotek is targeting with its Next‑Generation Under‑Display Camera (UDC), which it says will be shown at CES 2026 (Jan. 6–9, 2026).

This article explains what LG Innotek announced, the technical challenges behind under‑display driver cameras, how the company says it solved them, and what the move means for automakers, suppliers, regulators, and drivers. It evaluates strengths and risks, and outlines what to watch for as the product moves toward OEM qualification.

What LG Innotek announced

LG Innotek describes its product as the industry's “Next‑Generation Under‑Display Camera (UDC)” designed to be mounted behind the instrument cluster display to perform DMS tasks while remaining invisible to occupants. The company emphasizes three headline claims:
  • The UDC is fully hidden behind the dashboard display, enabling a seamless, uninterrupted instrument cluster and cleaner interior design.
  • Image fidelity after LG Innotek’s AI image restoration processing is "at least 99%" compared to an unobstructed camera mounted in front of the driver, addressing a commonly reported 20–30% image degradation observed in earlier UDC implementations. LG Innotek attributes the improvement to a combination of optical engineering, joint development with LG Display, and proprietary AI deblur/denoise algorithms.
  • The UDC is part of a broader vehicle sensing strategy — LG Innotek positions the module within a portfolio that will include camera modules, LiDAR, and radar, backed by strategic moves such as a partnership with, and equity investment in, Aeva for FMCW LiDAR and a small equity stake in Korean 4D radar specialist Smart Radar System.
Those claims are being presented to OEMs and the press ahead of the public CES 2026 unveiling; most technical details beyond the high‑level performance numbers remain company‑provided.

Why under‑display driver cameras are hard

The optics and physics problem

A camera placed behind an active display must capture light that has passed through display layers — pixel structures, subpixel slits, polarizers, conductive traces, adhesive layers and cover glass. The display both reduces overall light transmission and introduces diffraction patterns, color shifts, and mid‑frequency loss in the spatial spectrum of the image. Researchers and patent filings have documented that diffraction and slit‑pattern effects can create frequency gaps that deconvolution cannot restore by simple linear filtering; when information is lost at certain spatial frequencies, conventional inverse filters fail. Machine‑learning methods are therefore often proposed to hallucinate or reconstruct missing mid‑frequency details using learned priors.
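The frequency-gap point can be made concrete with a toy sketch. The 1‑D example below (our own illustration, not a model of LG Innotek's optics) filters a signal through a transfer function with a notch of zeros, then applies a Wiener‑style inverse filter: frequencies the "display" merely attenuated come back, but those it zeroed out are gone for good, which is exactly why learned priors are brought in.

```python
import numpy as np

# Hypothetical 1-D illustration: a display layer acts as a filter H on the
# scene spectrum. Where H ~ 0, those spatial frequencies are erased and no
# linear inverse can recover them -- only learned priors can plausibly fill
# the gap.

rng = np.random.default_rng(0)
scene = rng.standard_normal(256)          # toy 1-D "scene"
S = np.fft.rfft(scene)

H = np.ones_like(S)                       # display transfer function
H[40:60] = 0.0                            # slit/diffraction notch: total loss

captured = np.fft.irfft(H * S, n=256)     # what the under-display sensor sees

# Wiener-style regularized inverse filter
C = np.fft.rfft(captured)
eps = 1e-3
restored = np.fft.irfft(C * np.conj(H) / (np.abs(H) ** 2 + eps), n=256)

R = np.fft.rfft(restored)
outside = np.delete(np.arange(len(S)), np.arange(40, 60))
# Frequencies outside the notch are recovered almost exactly...
print(np.allclose(R[outside], S[outside], atol=1e-1))
# ...but inside the notch there is nothing left to invert.
print(np.max(np.abs(R[40:60])))           # numerically ~0
```

The same argument carries over to 2‑D images: wherever the display's optical transfer function has nulls, restoration must synthesize content from learned statistics rather than invert a filter.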

The practical automotive constraints

Automotive DMS must function across far more extreme and variable lighting than smartphones: sunlight through side windows, backlit faces, low‑lux night driving, rapid head movements, eyeglass reflections, and varying seating positions. Automotive‑grade cameras must also endure wider thermal ranges, vibration, and longer lifetimes. Solving display‑induced degradation for a consumer smartphone (where demo constraints and controlled lighting can be assumed) is one thing; delivering robust, automotive‑grade DMS performance across millions of vehicles is considerably harder.

How LG Innotek says it solved the problem

LG Innotek’s public statements emphasize a two‑pronged approach:
  • Optical and mechanical co‑design with LG Display to optimize the display panel’s pixel architecture and light transmission in the camera’s field of view. That collaboration suggests panel engineering to create a “window” of higher transmittance or modified pixel routing above the camera location, reducing diffraction effects while preserving display uniformity.
  • Proprietary AI image restoration software that applies deblur and denoise algorithms to recover degraded image information caused by the display. The company claims the combined solution recovers image fidelity to at least 99% compared with an unobstructed camera. LG Innotek positions this software as the differentiator that enables invisible DMS without a meaningful performance penalty.
These are plausible technical routes: patents and prior art show that machine learning models trained on paired degraded/clean images are capable of reconstructing missing spectral components to a degree, and co‑design between display makers and camera module suppliers can reduce some diffraction/noise sources. But independent, third‑party validation of the 99% figure — across lighting, seating, and eyewear conditions — was not published in the announcement and remains to be verified during OEM qualification testing.
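The paired-data training idea mentioned above can be sketched in miniature. The example below is our illustration only — production systems use deep networks and LG Innotek's pipeline is proprietary — but it shows the principle: collect (degraded, clean) patch pairs, then fit a restoration operator that maps degraded patches back toward clean ones, here a simple ridge-regularized least-squares filter.

```python
import numpy as np

# Toy sketch of learned restoration from paired data (our assumption of the
# general approach, not LG Innotek's method). A symmetric blur stands in for
# display-induced degradation.

rng = np.random.default_rng(1)
P, N = 9, 5000                        # patch size, number of training pairs

blur = np.array([0.25, 0.5, 0.25])    # stand-in for display-induced blur
clean = rng.standard_normal((N, P + 2))
degraded = np.stack([np.convolve(row, blur, mode="valid") for row in clean])
target = clean[:, 1:-1]               # aligned clean patch centers

# Fit W minimizing ||degraded @ W - target||^2 with ridge regularization
lam = 1e-3
A = degraded.T @ degraded + lam * np.eye(P)
W = np.linalg.solve(A, degraded.T @ target)

# On held-out pairs, the learned operator undoes much of the blur.
test_clean = rng.standard_normal((100, P + 2))
test_deg = np.stack([np.convolve(r, blur, mode="valid") for r in test_clean])
err_before = np.mean((test_deg - test_clean[:, 1:-1]) ** 2)
err_after = np.mean((test_deg @ W - test_clean[:, 1:-1]) ** 2)
print(err_after < err_before)         # learned restoration reduces error
```

Even in this linear toy, some error remains (the blur kernel has a spectral null), which mirrors the caveat in the text: learned restoration improves fidelity but reconstructs some content from priors rather than measurement.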

What this means for OEMs and interior design

Aesthetic and UX implications

  • Hidden DMS cameras allow designers to create uninterrupted instrument displays and cleaner dash surfaces, supporting premium interior aesthetics and the “invisible tech” trend in luxury vehicles. LG Innotek is positioning the UDC specifically for high‑end models where interior differentiation matters.
  • An invisible camera reduces the social and psychological friction of being watched: some buyers cite visible in‑cabin sensors as intrusive. Concealment may improve user acceptance, provided the system is transparent about what it records and how that data is handled.

Integration and systems design

  • OEMs will want the restored frames to be not only visually pleasing but also machine‑accurate for the DMS software stack (gaze estimation, blink detection, head pose). If the AI restoration introduces latency, artifacts, or hallucinated features that bias the DMS classifier, safety performance could degrade despite better‑looking images.
  • DMS is increasingly part of a fused sensing stack. LG Innotek’s LiDAR partnership with Aeva and equity in Smart Radar System imply a roadmap toward sensor fusion — cameras, FMCW LiDAR, and 4D radar — to provide redundancy and cross‑validation for both in‑cabin and exterior perception. That strategy matches industry trends toward multi‑modal sensing for safety‑critical systems.

Market timing and regulatory context

  • Regulatory momentum is real and time‑sensitive. The EU GSR and related Euro NCAP protocols increase the urgency for certified DMS solutions; OEMs selling in Europe will need compliant solutions for new vehicles after July 7, 2026. That deadline elevates demand for ready‑qualified DMS hardware and software that can be integrated with minimal recertification risk.
  • CES 2026 will serve as the public launch platform; OEMs attending or scouting new tech at CES can begin evaluations quickly during the 2026 model year engineering cycles. CES (Jan 6–9, 2026) is the practical rendezvous point for suppliers and automaker teams to preview the module before formal design wins and qualification.

Strengths and strategic advantages

  • Aesthetics + Acceptance: The invisible installation directly answers a real design and consumer‑acceptance problem. For luxury and EV interiors where display real estate and visual minimalism are differentiators, the UDC could be attractive.
  • End‑to‑end supplier positioning: LG Innotek is not just pitching a camera; it’s rolling the product into a broader sensing portfolio (cameras, LiDAR, radar) and is securing partnerships and stakes (Aeva, Smart Radar System) to build a vertically coordinated offering. That makes LG Innotek a more strategic partner for OEMs seeking consolidated supply relationships.
  • Leveraging display know‑how: Co‑development with LG Display is a pragmatic way to reduce display‑related degradation before algorithmic remediation — a classic systems‑engineering approach that increases the odds of consistent field performance.

Risks, unknowns, and caveats

  • Company‑reported performance needs independent validation. The “99% image fidelity” claim and the “20–30% degradation” baseline are statements in LG Innotek’s release and related press coverage; there is no publicly available independent test report or OEM qualification data in the announcement. These figures should be treated as vendor claims until third‑party testing or OEM evaluation data are published. Caveat emptor.
  • Operational robustness across lighting and occupants. Automotive DMS must operate reliably with eyewear, hats, low light, strong side sunlight, and rapid head motion. AI restoration introduces a dimension of algorithmic failure modes (hallucination, mis‑reconstruction) which — if unmitigated — could yield false negatives (missed drowsiness) or false positives (spurious alerts), both of which are dangerous in safety contexts. Testing across edge cases is essential.
  • Latency and compute cost. If restoration is compute‑intensive and performed on a central domain controller rather than at the camera sensor edge, the extra data flow and latency could complicate functional safety and cybersecurity design. OEMs will want details about where processing occurs (ISP, camera SoC, cockpit domain controller), determinism guarantees, and power/thermal budgets for production environments.
  • Privacy, data handling, and regulation. Hiding the camera behind a display may reduce user discomfort but does not absolve data‑privacy obligations. Regulators and consumer advocates increasingly require clear consent, in‑vehicle data isolation, and data‑minimization. OEMs will need architectures that keep raw video local, provide on‑device models, and give occupants visibility/control over recording and retention. The “invisible” nature of the camera could be politically sensitive if consumers feel surveilled without obvious cues.
  • Manufacturing yield and supply chain. Integrating a camera behind a display requires cross‑supplier coordination during panel assembly. Yield losses, repairability issues, and aftermarket replaceability are practical concerns that can delay production ramps or increase warranty exposure.
  • Competition and defensive plays. Established DMS specialists (Seeing Machines, Smart Eye, Valeo, Continental, Tobii, others) already have validated stacks and design‑wins. OEMs may prefer proven, certifiable solutions rather than a novel under‑display approach until it is demonstrably mature under automotive qualification cycles. The industry is also moving toward combined ADAS + DMS “single box” solutions that reduce cost and weight; UDCs must show cost parity or compensate via added value.
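The edge-versus-central latency concern above can be framed as a simple frame-budget check. All stage latencies below are numbers we assume for illustration, not supplier specifications; the point is that a transport hop to a central domain controller eats directly into the per-frame budget.

```python
# Back-of-envelope frame-budget check with illustrative (assumed) latencies:
# a DMS pipeline running at a given frame rate must fit capture, restoration,
# and inference inside one frame period, plus any transport hop if restoration
# runs on a central domain controller instead of the camera SoC.

def frame_budget_ms(fps: float) -> float:
    """Time available per frame, in milliseconds."""
    return 1000.0 / fps

def pipeline_fits(stages_ms: dict, fps: float) -> bool:
    """True if the summed stage latencies fit within one frame period."""
    return sum(stages_ms.values()) <= frame_budget_ms(fps)

# Edge processing: restoration on the camera SoC, no transport hop.
edge = {"capture": 2.0, "restore": 6.0, "inference": 5.0}
# Central processing: add serialization + link transport to the controller.
central = {"capture": 2.0, "transport": 4.0, "restore": 6.0, "inference": 5.0}

print(frame_budget_ms(60))          # ~16.7 ms per frame at 60 fps
print(pipeline_fits(edge, 60))      # True: 13 ms fits the budget
print(pipeline_fits(central, 60))   # False: 17 ms exceeds it
```

Real systems also need worst-case (not average) latency guarantees for functional safety, which is why OEMs will ask about determinism, not just throughput.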

Technical questions OEMs and engineers should ask now

  • Where is the AI restoration executed — camera module SoC (edge) or cockpit domain controller (central)? What are the resource and latency metrics?
  • What is the method and dataset used to compute the “99% image fidelity” claim? Does the test set include sunglasses, hats, infants, side window glare, and nighttime scenarios?
  • How does the system degrade gracefully — e.g., does the module detect when display content or ambient geometry reduces SNR below safety thresholds and then escalate to a fallback (visible camera or alternative sensors)?
  • What are the failure modes and false‑positive/false‑negative rates for DMS inference after restoration, and how do they compare to an unobstructed reference across duty cycles?
  • How is in‑vehicle privacy preserved? Where are frames stored, and is any data broadcast off‑vehicle? Are there on‑device anonymization or ephemeral‑storage policies?
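The fidelity question above can be made concrete: a percentage is only meaningful with a stated metric. The sketch below defines one plausible pixelwise score (our assumption, not LG Innotek's published method) and shows why a near-perfect pixel number by itself says little about downstream DMS accuracy.

```python
import numpy as np

# One hypothetical definition of "image fidelity": 1 minus normalized MSE
# against an unobstructed reference frame. Other definitions (PSNR, SSIM,
# task-level agreement on gaze/blink estimates) can yield different numbers,
# which is why the computation method behind "99%" matters.

def fidelity_mse(restored, reference):
    """1 - (mean squared error normalized by reference signal power)."""
    ref = reference.astype(float)
    err = np.mean((restored - ref) ** 2) / np.mean(ref ** 2)
    return 1.0 - err

rng = np.random.default_rng(2)
reference = rng.uniform(0, 255, size=(64, 64))                 # toy frame
restored = reference + rng.normal(0, 10, size=(64, 64))        # residual noise

print(round(fidelity_mse(restored, reference), 3))   # close to 1.0
# A near-1.0 pixel score does not establish that gaze or blink estimates
# computed from the restored frame match those from the reference.
```

This is why the question asks for the method *and* the dataset: a pixel metric averaged over easy scenes can look excellent while task metrics under glare or eyewear diverge.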

Broader industry implications

  • The UDC announcement arrives at a point where the in‑cabin sensing market is forecast to grow rapidly as regulators force DMS adoption and automakers demand integrated sensing for higher‑level automation. LG Innotek, citing market research, positions the in‑cabin camera module market as expanding rapidly over the next decade; the company’s PR references S&P Global estimates for long‑term market growth. These projections underscore both the commercial opportunity and the competitive pressure to field specialized modules that match OEM design and validation timelines. OEMs will assess whether under‑display modules are best‑of‑breed or a design compromise.
  • Consolidation of sensing suppliers is underway: partnerships with LiDAR (Aeva) and investments in radar (Smart Radar System) reflect a strategic pivot to multi‑modal sensing portfolios. Suppliers that can bundle cameras, LiDAR, and radar — or provide validated sensor fusion stacks — will have a commercial advantage with automakers seeking simplified procurement and integration roadmaps.
  • The technical route LG Innotek is pursuing — display co‑design plus AI restoration — mirrors approaches in consumer electronics where display companies and camera suppliers co‑engineer under‑display selfie cameras. But the automotive bar for reliability and lifecycle is significantly higher than consumer demos, and the industry will treat vendor claims skeptically until OEM test data and functional safety evidence are available.

A realistic rollout timeline (what to expect)

  • Short term (now–mid 2026): Demonstrations, engineering evaluations, and early OEM interest. Public showing at CES 2026 will target visibility and early technical conversations. OEMs with tight European market timelines may evaluate the UDC for 2027–2028 model introductions, but qualification is likely to require several OEM‑specific engineering cycles.
  • Medium term (2026–2028): Prototype validation with pass/fail trials across lighting, camera aging tests, thermal cycling, and functional safety verification. Integration choices (edge vs. central processing) will be decided; suppliers will likely propose reference architectures for OEMs. If cross‑supplier coordination (display + camera + ECU) goes smoothly, limited design wins for premium models are plausible in the 2028–2029 timeframe.
  • Long term (2028+): Wider adoption if the UDC demonstrates stable, verifiable DMS performance. The broad market for in‑cabin sensing is expected to expand, favoring suppliers who can meet cost, reliability, and data governance expectations.

Recommendations for industry stakeholders

  • Automakers: Treat UDCs as candidate options for premium trims; require independent validation, insist on explainable performance metrics (FP/FN rates for drowsiness/gaze), and demand privacy‑by‑design (on‑device inferencing, no cloud uploads) and clear occupant notification mechanisms.
  • Tier‑1s and software integrators: Evaluate UDCs early in the DMS stack to understand how restoration artifacts might affect downstream inference and to plan sensor fusion fallback strategies (IR cameras, radar occupancy sensing).
  • Regulators and NCAP organizations: Continue to emphasize functional test cases under diverse real‑world conditions and require transparent reporting of algorithmic limits to ensure safety.
  • Consumers and privacy advocates: Seek clarity on when in‑cab imagery is stored, how long it is retained, and what control occupants have over data collection and use.

Conclusion

LG Innotek’s Next‑Generation Under‑Display Camera is a bold technical and commercial play that seeks to reconcile two strong automotive trends: the regulatory push for effective driver monitoring and the design imperative for clean, minimalist interiors. The co‑development with LG Display and the use of AI image restoration are sensible technical choices, and the company’s broader sensing partnerships show strategic ambition. However, the most consequential claims — notably the 99% image fidelity figure and the resolution of an alleged ~30% baseline degradation — are company statements that require rigorous, independent validation across the demanding lighting and environmental envelope of real vehicles. OEMs will rightly subject any under‑display DMS to exhaustive testing before awarding production design wins, and functional safety, latency, privacy, and manufacturing yield issues will be critical gatekeepers.
If LG Innotek’s UDC proves robust in OEM qualification, it could accelerate a new interior design language while preserving or improving DMS safety performance. If it underperforms in edge cases or adds complexity to the safety stack, OEMs will either ask for hybrid solutions (a hidden camera plus fallback sensors) or delay adoption until the technology matures. The next six to eighteen months — starting with CES 2026 demonstrations and early OEM tests — will determine whether invisible driver monitoring becomes the norm or remains an interesting niche pursued by a few premium models.
Source: TechPowerUp LG Innotek Unveils Next-Generation Under-Display Camera