EU investors including Bosch, SAP pump $500 million into … – The Stack

Germany's Aleph Alpha has raised more than $500 million in a Series B round backed by Bosch and SAP, among other new investors, as the startup looks to build on its promise to become the leading provider of sovereign generative AI applications in Europe, taking on OpenAI and the hyperscalers.

Aleph Alpha, founded by former Apple AI researcher Jonas Andrulis, has created the Luminous series of large language models (it has promised a 300 billion parameter Luminous World model later this year), which are commercially available to customers now via a Python client.
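For readers curious what that looks like in practice, here is a minimal sketch of a single completion call. It assumes the `aleph-alpha-client` package on PyPI, the `luminous-base` model name, and a placeholder API token; none of these specifics come from the article, and exact class and parameter names may differ between client versions.

```python
# A minimal sketch of calling a Luminous model through the Python client.
# Assumptions (not confirmed by the article): the `aleph-alpha-client`
# PyPI package, the `luminous-base` model name, and a placeholder token.
from aleph_alpha_client import Client, CompletionRequest, Prompt

client = Client(token="YOUR_API_TOKEN")  # placeholder credential

request = CompletionRequest(
    prompt=Prompt.from_text("An example prompt for a Luminous model:"),
    maximum_tokens=64,  # cap the length of the generated completion
)

# The target model is selected per request rather than per client.
response = client.complete(request, model="luminous-base")
print(response.completions[0].completion)
```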

Working with partner Graphcore, it has demonstrated some impressive progress on AI compute efficiency, as well as capabilities with a privacy and explainability focus (see, for example, its AtMan paper from earlier in 2023).

The Series B funding, it said in a press release on Monday, "strengthens the foundation for Aleph Alpha to further advance its proprietary AI research, accelerate development and commercialization of Generative AI for the most complex and critical applications such as in data sensitive industries like healthcare, finance, law, government and security."

At a press conference on Monday, Germany's Minister for Economic Affairs Robert Habeck suggested that the investment played to a strategic priority for Europe as it works to boost its data sovereignty.

"The thought of having our own sovereignty in the AI sector is extremely important. If Europe has the best regulation but no European companies, we haven't won much," Habeck said at the press conference.

"Aleph Alpha will continue to expand its offerings while maintaining independence and flexibility for customers in infrastructure, cloud compatibility, on-premise support and hybrid setups," said CEO Andrulis.

"The ongoing developments will extend interfaces and customization options tailored to business-critical requirements," he added in a release.

The massive funding round came as researchers at Google DeepMind suggested in a widely shared paper that the transformer models powering so much of the past year's AI hype were not, perhaps, as intelligent as many seem to believe: "When presented with tasks or functions which are out-of-domain of their pretraining data, we demonstrate various failure modes of transformers and degradation of their generalization for even simple extrapolation tasks," they wrote in the paper, published on November 3.

"Together our results highlight that the impressive ICL [in-context learning] abilities of high-capacity sequence models may be more closely tied to the coverage of their pretraining data mixtures than inductive biases that create fundamental generalization capabilities," the researchers added.
