The emergence of large language models (LLMs) has created significant demand for specialized hardware that can handle their computational requirements efficiently. European companies have been at the forefront of developing hardware accelerators designed specifically for local LLM inference, offering alternatives to the dominant GPU-based solutions. These efforts aim to deliver faster, more efficient AI computing while keeping data and computation under local control.