
Xomnia joins forces with investor for next growth chapter – Consultancy.eu

Dutch data science consultancy Xomnia has secured growth capital from an investment group, with the ambition of embarking on the firm's next chapter.

Now with around 85 staff, Xomnia has, since its inception ten years ago, established itself as one of the leading consulting firms in the Netherlands dedicated to data science and related fields such as artificial intelligence, machine learning, and advanced analytics in the cloud.

Together with Foreman Capital, an Amsterdam-based private equity firm, Xomnia now aims to fuel its next phase of expansion. This comes on the back of strong growth in recent years; the firm has several times been named one of the fastest-growing companies in the Netherlands by financial publication FD.

"This step is the icing on the cake of Xomnia's 10th anniversary," said Ollie Dapper, who founded the company with William van Lith. "An exciting next phase has now arrived, and we are grateful to everyone who has helped make it possible!"

Jordi Hompes, who came on board in 2022 as chief executive officer, added: "We look forward to continuing to build Xomnia together with Foreman Capital, and to jointly shaping our international growth strategy."

Foreman Capital is not only a shareholder that provides the necessary capital for buy-and-build; the investor also brings the know-how to enable further growth and professionalisation.

Commenting on the trust placed in Xomnia's evolution, Foreman Capital investment director Maxime Rosset said: "We are impressed by Xomnia's position in the Dutch market within data services, a sector with strong international growth prospects that offers ample opportunities for consolidation."

"We are convinced that we will together expand Xomnia internationally over the coming years, both organically and through acquisitions."

Xomnia works for clients across sectors, including the likes of ABN Amro, Albert Heijn, DAF Used Trucks, Enexis, Hema, Tata Steel, and VodafoneZiggo.


Best practices for fusing internal and external data to enhance credit … – Tearsheet

As part of Tearsheet Talks: Lending x Credit x Data, Stripe's Yaakov Erlichman, Head of Capital and Small Business Risk, provides a practical framework for integrating proprietary and third-party data sources in small and medium business lending to drive effective underwriting. He discusses recommendations for sequencing data introduction, balancing model predictive power with applicant friction reduction, optimizing the mix of internal and external data, and managing model degradation.

Yaakov is a seasoned risk management professional responsible for leading the credit strategy at Stripe. He joined the company four years ago to build out the risk infrastructure for Stripe Capital. Prior to Stripe, Yaakov was part of the risk teams at Kabbage and American Express.

Learn strategic and tactical best practices for maximizing the fusion of internal and external data to enhance credit decisioning and risk management for SMB lending. The session covers both conceptual and implementation considerations for reconciling data sources tailored to the unique needs of the SMB market.

Yaakov Erlichman, Stripe: I think that one needs to really think about what data they have access to. That's the most important thing. Money is very fungible, and in the SMB lending space there are a lot of lenders out there, but those lenders are offering products that are all very similar. So, to differentiate yourself in that market, you really need to use data. Data is ultimately what's going to differentiate you, because you're going to be able to offer better pricing, approve more customers, and just create a better user experience. Users are going to apply, they're already going to be pre-approved, because we know so much information about these users, and it's a really good experience for them.

If you're a standalone lender, it's actually quite difficult, because you're not going to be able to differentiate yourself against anybody else -- it's really just going to be the same data sources that everyone has access to, whether from the commercial bureaus or the consumer bureaus. So you really have to think about how you want to differentiate yourself. And it's actually quite difficult.

The Stripe perspective is that if you are an embedded lender, you should really be leaning in, in the early days, to your own data, because that's really going to differentiate you from anyone else out there. You have something about your customers that no one else knows. So let's say you're going to be a little bit inward looking here, you're a payment processor. And you can see that a user is really consistent in their transaction volume. Lean into that. If they're a coffee shop on the block, and you can see every single day that they have the same number of transactions, that's a pretty good signal as to the riskiness of that business. And you might want to exclude any external signals.

I remember a case that we had: a boat rental place in the middle of Vermont, and they were so happy that we were able to offer them funding because we didn't look at their bureau score, we didn't see their 500 FICO score, we just saw their consistent transaction volume. And I think that that really is a differentiating factor, so lean into that internal data, especially when you launch.
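The consistency signal described above can be made concrete. Below is a minimal, illustrative sketch (not Stripe's actual model): it scores a merchant's stability as the coefficient of variation of daily processed volume. The merchant series and the 0.25 cutoff are invented for illustration.

# Illustrative internal-data signal for an embedded lender: how steady is a
# merchant's daily processed volume? Lower coefficient of variation = steadier.
from statistics import mean, stdev

def volume_stability(daily_volumes):
    avg = mean(daily_volumes)
    return float("inf") if avg == 0 else stdev(daily_volumes) / avg

coffee_shop = [410, 395, 420, 405, 398, 412, 401]       # steady daily pattern
seasonal_landscaper = [0, 0, 150, 900, 40, 0, 1200]     # volatile daily pattern

for name, series in [("coffee_shop", coffee_shop), ("seasonal_landscaper", seasonal_landscaper)]:
    cv = volume_stability(series)
    print(f"{name}: CV = {cv:.2f} ->", "consistent" if cv < 0.25 else "volatile")

In practice a signal like this would be blended with volume share and tenure, precisely because, as noted next, a processor may only see a slice of the business.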

It's actually really important because if you happen to be an embedded lender, and I'm probably telling you something that's very obvious, you have to be cautious about how you leverage internal data. We'll use the payment processor example and you're saying, Oh, look at this transaction volume, it's really, really consistent. You might be only picking up a small signal on that business, right? You might only be seeing 5% to 10% of their volume -- maybe they use another payment processor for volume that is very volatile. Or maybe they're really heavily into cash and wires. That would be a different signal.

So there is a cautious balance that you need to strike as you evolve over time, to ensure that you're really understanding that business. The key to all of this data is to fully understand the health of a business. Whether the data is internal or external, the business is a business. The business just wants you to understand that. And unlike consumer, where things are quite clean and everyone has access to the consumer bureaus and most of your trades get reported to the consumer bureaus, commercial is still the Wild Wild West -- there's nobody who's picked up all the signals and put them into one repository. So it gives you an opportunity to differentiate yourself in that space, if you are able to pick up the right signals and weight those signals in an effective way. But it's critical that you understand that, because if you're really leaning in on internal data, you could be erroneous in your decision because you're not fully picking up all the signals.

For the example of the boat rental business, our secret sauce here is that we didn't look at consumer bureau as part of that decisioning logic. We built out our models on our internal data. And again, there was some caution that you're not necessarily seeing the whole picture. We were really confident in the models that we built against our internal data -- we really did a good job of back testing them and having a really strong data science team that was rigorous in their approach in how we treated that internal data. And again, when we launched, we launched with only internal in this case. But over time, we evolved to say that you know what, maybe we could expand our market share if we got additional information -- we kind of eventually evolved similar to what everyone does as a standalone lender to take in banking data, financial data, and other data sources to supplement our internal data.

It's a huge component and really a differentiating factor. Again, money is fungible, but the user experience is a critical part of the journey. Using internal data is a great way to differentiate yourself, because you can come up with much stronger offers to your customer or your prospects. We can go to a user and say, we're 95% sure and even higher that you're approved, basically, for this offer because we are so confident in our models, just click here to take the money. There's no application, there's no information that we're asking to gather from you. And that is a great experience that we can provide.

But, again, that experience can only be provided to some users. Talking to other lenders out there, you can get to a point where you're sourcing data in such a confident way that you can provide that frictionless experience to a user to really say to them, Hey, we know so much about you, before you even show up at our door, here's an offer or here's money, you just click here to accept it. That is really an awesome experience for a customer.

Initially, you're going to have to really lean into heuristics. You're not going to have the data that's going to allow you to build the really rigorous gradient boost models or logistic regression models or even the new modeling techniques that are out there. You're really going to just have to use some industry experience and talk to folks that are really experts in this domain area and really understand okay, what are the parameters that we can put into place soon? What are the basic rules that we can put into place ourselves to keep us out of trouble?

If you're going to be using FICO, what FICO threshold are we going to be comfortable with? And the other thing I would say is to balance that against how much you want to learn. One of my previous employers wanted to know how far into the credit box they could go and still make money. So like, how low of a FICO threshold can you go and still be profitable? And in order to do that, you have to test. You can put some parameters in place. And you can say, Okay, this is how we're going to operate. But you also have a hypothesis like, How much am I willing to invest? And think about it as an investment -- it's really money you're going to lose, but how much money are you really going to invest to be able to get the data that you need to be able to build your models?

That's what the heuristics will allow you to do -- to really build out your data sets that will allow you to build your models. Most companies are not going to be in a position to get the right data on day zero, or negative day 10 or negative day 100, to build out the models that will allow them to launch on day zero. So you really are going to have to use heuristics and think about those heuristics as an entryway into buying data that will allow you to build out the models, which is ultimately your objective. Your objective should be to transition from heuristics into models, because models are much more powerful than individual rules.
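As a hedged sketch of that heuristics-to-models progression (illustrative only, not Stripe's underwriting; the field names, thresholds, and tiny data set are invented, and scikit-learn is assumed to be available):

# Day zero: a rule-based gate keeps the book safe while generating outcomes.
# Later: outcomes from loans approved under those rules train a simple model.
import numpy as np
from sklearn.linear_model import LogisticRegression

def heuristic_decision(applicant):
    return applicant["fico"] >= 620 and applicant["months_in_business"] >= 12

# Accumulated performance data: FICO, months in business, monthly volume ($000s); label 1 = default.
X = np.array([[700, 36, 40],
              [640, 14, 12],
              [625, 12,  8],
              [720, 60, 90],
              [630, 18,  6]])
y = np.array([0, 0, 1, 0, 1])

model = LogisticRegression(max_iter=1000).fit(X, y)
print("Heuristic says:", heuristic_decision({"fico": 650, "months_in_business": 20}))
print("Model default probability:", round(model.predict_proba([[650, 20, 15]])[0][1], 3))

The point of the sketch is the sequencing: the rules exist to buy labeled data, and the model replaces individual rules once enough outcomes have accumulated.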

You're going to need a strong credit risk team that's going to be able to do this process. Somebody who really understands the domain area and is an expert in lending, whatever that lending might be. It could be consumer, commercial, a lot of those skills are transferable across different verticals. But somebody who's able to create that structure, and I think you're really going to need a strong data science team. The successes that we've been able to achieve have really been driven on the backs of our data science team. I think it's also important to call out that a data scientist doesn't just have the label of data scientist and is automatically going to enable your success. I've seen a lot of challenges where companies hire data scientists that are experts in the advertising space, in marketing, or in other areas.

There's one key differentiating factor that I see: time to learn is very important. When you're a Google or Facebook engineer, and you launch a new algorithm on the site, you can get your results instantaneously -- is this going to lift? On Amazon, is the customer going to buy more products, buy more dresses, or buy more shirts based upon how you adjust the screen or adjust the algorithm or whatever it is?

In lending, your time to learn is months, if not years. You deploy, you have a one year working capital term loan, it could take you 12, 15, 18 months until you actually see performance. A lot of data scientists don't appreciate that time to learn. And a lot of product managers don't think about it as well, which is another important call. We think of it like a roller coaster: you launch the car down, you have no brakes, no controls, you're just going to let that go, you don't know where it's going to end up. Having that frame of mind is critical as you think about financial service products.

I think that you really can lean into the product side to differentiate yourself. You can really take the data that you're able to understand or accumulate about your users and transform that into the product. So for example, let's think about like the cash flows of the user. You can really tailor your products around the risk strategy. So first, you could have a seasonal business, right? Our favorite is our landscapers that are very busy from, let's say, April to October. And you could offer them a product. You could say, I can offer them a product during their busy season. Or you can offer them a product during the low season to help sustain them. That is a differentiating factor.

A lot of lenders can do that if they are strategic about how they shape a product to really meet the needs of their customer. Do we want to help that user during their offseason? Or do we want to help them during their busy season? There's reasons to do both. But even having those insights to inform the product strategy is incredibly powerful. And you can message that, as well: hey, we know that you're going into a low season, or hey, we know that you need to buy inventory and ramp up your business before you go into your busy season. We want to offer you some money to be able to do that. That is hugely differentiating, if you're able to get to that level.

Anyone is able to do that, if you're thinking about that to inform product strategy. I think that a lot of product managers just think about risk as like a checkbox. Can I approve the user: yes or no. And it's very narrow-minded. I think that if you expand it, risk can really be a product enabler, to say, how can we scale our products using sound risk strategy? And that's how we operate here -- risk is really a driver of product growth. Because we are thinking through not just can we keep ourselves out of trouble, but like, how do we optimize our returns? How do we optimize our conversions, using risk as a differentiating factor? And data is what ultimately drives that. That's ultimately what this comes down to.

I'm not going to specifically call out individual vendors. But I think there are only a handful of very unique large vendors that are out there. Actually, I will call out a couple: you have the DNBs, the Lexises, the Equifaxes of the world that are really good at what they do. They have a very broad understanding of a lot of users that are out there. And I think it's worthwhile to have conversations with them to see what they offer.

I will caveat, though, in the commercial space, it is really challenging to use a commercial bureau in an effective way because they don't necessarily have access to the granular information. A lot of the information they have is distilled -- it's a little bit rough around the edges, like they're taking information from various ad hoc trade lines and stuff. They don't really paint a complete picture of a user.

That being said, there are some very effective scores that are out there. DNB has a couple of scores that are quite good. I would say the CCS score that they offer is quite good. It's also quite expensive, but the value is there. If you're starting from scratch, I think those are very effective.

I think over time, though, what I've seen is that the Alloys of the world where you're able to take multiple data sources, and they provide a lot of rigorous balancing and underwriting of the data, could also be a really good way to lean into multiple data sources, rather than you having to set up relationships with a lot of vendors.

I haven't seen a lot of very effective newcomers into this space, which has been unfortunate. I don't see a lot of new entrants that are offering differentiating products on the commercial side. There were a couple of companies that were trying to do it, but it's a very difficult thing to do.

Before I forget, I will say SBFE, if you're not familiar with that, the Small Business Financial Exchange, is kind of a secret sauce. It's a very effective consortium of commercial lenders. They span probably 500 lenders and I don't know if a lot of folks know about this. It means you have to sign up for the consortium and participate in it. They are a great source of information, probably one of the best sources of information, to really understand the risk of a user because commercial trade lines get reported to them. And they've done a really good job of managing that information and effectively disseminating it to their membership.

You have to constantly be evaluating your model performance. Models degrade pretty fast. So I would say you should be looking at your models every three to six months, and constantly determining whether they're still performing as expected. And you should be constantly looking at what additional features you can update in your models -- there's constantly new developments, whether they're internal signals or external signals that will really inform how your models are performing.

One of the things that we added in the last year or two is just the recession signals, basically looking at how the overall economy is performing. When you initially launch, it's probably not relevant. If you have a small portfolio, that's probably not going to be meaningful, but as you start to scale, you could really start to see meaningful differentiation from external signals like that. I would say you should be evaluating what features you could be adding every quarter and then you should just be monitoring your models probably every three to six months and deploying new versions of those models, at least twice a year.
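One common way to put numbers on that monitoring cadence (a generic technique, not something the speaker prescribes) is a population stability index (PSI) comparison between the score distribution at model build time and the scores seen recently; values above roughly 0.2 are conventionally treated as meaningful drift. A minimal sketch:

# Illustrative PSI drift check between baseline and recent model scores.
import numpy as np

def psi(expected, actual, bins=10):
    edges = np.linspace(min(expected.min(), actual.min()),
                        max(expected.max(), actual.max()), bins + 1)
    e = np.clip(np.histogram(expected, edges)[0] / len(expected), 1e-6, None)
    a = np.clip(np.histogram(actual, edges)[0] / len(actual), 1e-6, None)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(0)
baseline_scores = rng.beta(2, 5, 10000)     # score distribution at model build
recent_scores = rng.beta(2.6, 4.2, 10000)   # scores observed this quarter

value = psi(baseline_scores, recent_scores)
print(f"PSI = {value:.3f}", "-> investigate / retrain" if value > 0.2 else "-> stable")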


Mitigating Poor Data Quality With Decision Intelligence Solutions – Spiceworks News and Insights

Explore solutions to poor data quality challenges in investigations with insights from Tom Saltsberg, Product Manager at Cognyte. Discover how Decision Intelligence enhances analytical outcomes.

Investigators and intelligence analysts are frequently challenged to make sense of poor-quality data. Poor data quality presents fundamentally unique challenges, in part because "poor" is a relative term and, in most cases, analysts cannot recognize whether data is of poor quality at first glance; they need to assess it beforehand, which is time-consuming and inefficient.

Poor data quality can manifest in many ways and can often be attributed to human error. This could include data with incorrect information, including misspellings and typos. Alternatively, it could manifest as partial or missing data, or data inadvertently duplicated or ingested incorrectly. Data that's correct and otherwise high quality but considered outdated could also contribute to poor data quality, depending on the investigation's objectives. Investigators must also grapple with fraudulent data: data that is not credible and is deliberately misleading, a virulent strain of poor-quality data intended to distract and confuse them. Ironically, fraudulent data will probably seem the most credible, since the actors who prepared it invested time and effort to give it a sense of authenticity.

There are many techniques for cleaning and validating data. Cross-referencing data with other sources is crucial for validating data quality and is easily achievable with small data sets. But large data sets can make it difficult, or in some cases impossible, for analysts to correlate this data manually.
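At scale that cross-referencing is typically automated. The sketch below is purely illustrative (the record sets and identifiers are invented): it normalizes names from two sources and flags records that are missing from one side or that disagree.

# Illustrative cross-referencing of two data sources to surface quality issues.
def normalize(name):
    return " ".join(name.lower().replace(".", "").split())

source_a = {"P-001": "Acme Trading Ltd.", "P-002": "Blue Harbor Shipping", "P-003": "Nordic Freight"}
source_b = {"P-001": "ACME TRADING LTD", "P-002": "Blue Harbour Shipping", "P-004": "Delta Logistics"}

for key in sorted(set(source_a) | set(source_b)):
    a, b = source_a.get(key), source_b.get(key)
    if a is None or b is None:
        print(f"{key}: present in only one source -> needs review")
    elif normalize(a) == normalize(b):
        print(f"{key}: consistent across sources")
    else:
        print(f"{key}: conflict ({a!r} vs {b!r}) -> possible typo or deliberate mismatch")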

One of the main goals for investigations assessing large volumes of data (customs, finance, etc.) is to identify trends and anomalies, such as undervaluations or misreports, amid the deluge via quantitative assessment. In law enforcement investigations, on the other hand, data sets could be considerably smaller and easily organized, with fewer sources to be correlated in a more qualitative assessment.


What happens when there is only a single source of data and cross-referencing is impossible? This is an obstacle both for analysts correlating data manually and for machine learning (ML) algorithms assessing data in volume for statistical insights. There's a misconception that big data can answer these challenges for law enforcement agencies, but big data on its own is not a solution.

Multiple approaches could be used to overcome these data quality challenges, including processing the data on a mass scale using data science and ML. However, data science requires that we understand and carefully deliberate on the features to be calculated by the ML model at the outset of the solution design. Data quality plays a major role in this feature-generation phase when techniques are optimized to understand and complete missing or low-quality data values.
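As one small, hedged illustration of that feature-generation step (not tied to any specific platform; scikit-learn is assumed to be available), missing or low-quality values are often completed with simple imputation before a model ever sees them:

# Illustrative: completing missing declared values with the column median.
import numpy as np
from sklearn.impute import SimpleImputer

declared_values = np.array([[1200.0], [np.nan], [950.0], [np.nan], [15000.0], [1010.0]])
completed = SimpleImputer(strategy="median").fit_transform(declared_values)
print(completed.ravel())   # np.nan entries replaced by the median of observed values

More elaborate approaches (model-based imputation, or quality flags fed to the model as extra features) follow the same pattern: decide explicitly, at design time, how low-quality values are represented.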

Data quality challenges can impact various applications, investigative and otherwise. Customs fraud is a classic example, and it's rampant and exceedingly difficult to track. Customs documentation is often scanned, making extracting and referencing relevant data harder. These documents could also include ad hoc manual notations from port/customs administrators (human error may be introduced here), and language gaps can also be problematic in customs documentation.

There are numerous examples of customs fraud perpetrated or aided by leveraging intentionally misleading (poor-quality) data. This could include shipped goods deliberately misclassified or undervalued for tax evasion purposes. Also, information intended to mislead the authorities could manifest as partial, obfuscated, or fragmented data obscuring the true purpose of certain trade activities. For instance, vehicle parts may be shipped separately, rather than as assembled vehicles, to avoid regulatory constraints or taxation (a technique known as structuring).

Law enforcement is another domain that's rife with data quality issues, and these issues, in some cases, aren't identified until decades after the crimes were investigated and justice meted out by the legal system. High-profile justice campaigns are being mounted, consolidated, and visualized based on flawed forensics data or human error in the case files.

According to the Innocence Project, false or misleading forensic evidence has contributed to 24% of wrongful convictions nationally in the U.S., based on data from the National Registry of Exonerations, which tracks DNA and non-DNA-based exonerations. This includes convictions based on forensic evidence that is unreliable or invalid.

Disinformation (fake news) is a fascinating example of poor data quality. Though the data quality of disinformation is fundamentally bad, considerable care has been taken in many cases to make the data appear good.

Disinformation strategies are often applied in geopolitics and organized crime. They could include decoy and fake data sets designed to interrupt or influence democratic elections or distort intelligence gathering, for example. Disinformation is also unique in that it can be practiced at enormous scale, including content planted in mass media and social media to sway public opinion and votes.

Poor data quality can also amplify known analyst biases. Cognitive bias is a foundational problem affecting criminal investigation in all its forms, and it's exceedingly difficult to compensate for with existing technological methods.

Examples could include anchoring bias (analyst over-reliance on primary information), groupthink bias, and bias that artificially inflates the importance of information that supports the analysts position. In the future, AI technology advancements will hopefully make it possible to reduce or eliminate these biases so that clear thinking can prevail.


Decision intelligence platforms that provide scalable, multi-featured data collection, fusion, and analysis capabilities are naturally beneficial for completing the investigative picture when data is incomplete or inaccurate.

A centralized system is beneficial: it allows analysts to fuse large volumes of data from multiple, siloed data sources, agnostic to the format (this overcomes, for example, the challenges with scanned documents used to perpetrate customs fraud). Data fusion also enables consolidating and visualizing the data for more efficient assessment by creating entities and profiles rather than inspecting every data point independently.

A centralized decision intelligence system also enables data enrichment from multiple angles to provide a more holistic view of the data and the insights reflected therein. Finally, a centralized system automates cross-referencing, overcoming many of the above challenges. The data ingested into a centralized system can be readily cross-referenced, verified, and validated to improve overall data quality.

Centralized decision intelligence ultimately enables analysts to derive more meaningful insights from data, whether the data is high quality or low quality, to help them make more informed decisions and prioritize effectively during investigations.



$2 Million Gift to Baylor University Establishes Medical Humanities … – Baylor University

Contact: Lori Fogleman, Baylor University Media & Public Relations, 254-709-5959. Follow us on Twitter: @BaylorUMedia

WACO, Texas (Oct. 26, 2023) – Baylor University today announced a $2 million gift from Scott and Susan Orr of The Woodlands, Texas, establishing the Scott & Susan Orr Family Endowed Chair in Medical Humanities & Christian Faith to support teaching, mentorship and innovative research in the Medical Humanities Program within the College of Arts & Sciences. This endowed chair brings the total number of new endowed chair and faculty positions funded by the Give Light campaign to 45.

The Orr Family Chair supports the Health and Human Flourishing, Leadership and Ethics initiatives within the University's strategic plan, Illuminate, and qualifies for matching support through the Give Light Campaign's Illuminate Chair Matching Program. The Scott & Susan Orr Family Endowed Chair in Medical Humanities & Christian Faith is the second endowed chair established within the Medical Humanities Program since 2020.

"We are truly grateful for Scott and Susan Orr and for their family's generous support of the faculty of Baylor University," said Baylor President Linda A. Livingstone, Ph.D. "The Orr Chair provides significant resources with which to support our faculty within the Department of Medical Humanities and to foster strategic growth within the Department. We are truly grateful for this family's commitment to Baylor's Christian mission and vision, and we are grateful for the support this provides to our faculty, who invest of their time and talents in our students through transformational teaching and mentoring."

The Orr Family Chair will be housed within the Medical Humanities Program, one of the few of its kind in the country for students who aspire to careers in healthcare and other affiliated professions. The program is intended to pair a foundational science curriculum with courses in history, literature, Christian philosophy and other disciplines to inspire deep discussion and critical thinking, providing a holistic, interdisciplinary approach for many pre-health students. The Orr Family Chair will support the program's Christian identity by fostering in students an appreciation of the importance of the doctor-patient relationship, the spiritual and emotional dimensions of disease and a heightened awareness of the human experience of illness.

The Scott & Susan Orr Family Endowed Chair in Medical Humanities & Christian Faith was established by the Orrs, who became connected to Baylor University through their children, Andrew Orr, B.B.A. '15, M.B.A. '17, and Jennifer (Orr) Bondaruk, B.S.N. '16, as well as Jennifer's husband, Mark Bondaruk, M.Div. '19. Jennifer and Andrew both participated in the Medical Humanities Program, becoming students of Lauren A. Barron, M.D., inaugural Michael E. DeBakey, M.D., Selma DeBakey and Lois DeBakey Chair for Medical Humanities, clinical professor and director of the Medical Humanities Program.

"The Christian mentorship, teaching and compelling research of our faculty is central to the mission of the College of Arts & Sciences' Medical Humanities Program, and we are grateful to Scott and Susan Orr for this endowment," said Lee Nordt, Ph.D., dean of the College of Arts & Sciences. "It will provide transformative educational experiences for our students. We look forward to the additional support the Orr Chair will provide as this program continues to grow and flourish."

The Orrs' motivation for this gift is to recruit and retain faculty leaders who will continue the program's rich tradition of intentional mentorship, fostering deep discussions and inspiring prospective physicians, nurses, healthcare administrators and other allied healthcare professionals through their teaching on the history, impact and role of the Christian faith in healthcare delivery and administration.

"The Medical Humanities Program at Baylor is very special to our family, and we're thrilled to be able to support the program in this enduring way with our gift," Scott Orr said. "We have the utmost confidence in Dr. Barron as she leads and grows the program and are very excited to see the program's impact on students in all areas of healthcare."

"Healthcare and faith touch and impact us all, and with this gift we hope that students for years to come will have the opportunity to use what they've learned through the program to make a practical impact on the lives of patients in need of healthcare," Susan Orr said.

The support through the Illuminate Chair Matching Program for the Scott & Susan Orr Family Endowed Chair in Medical Humanities & Christian Faith will provide greater support for the chair's research and other activities related to academic discovery and instruction. The matching program supports the University's efforts to generate high-impact research and scholarship, focusing especially upon research faculty chairs that support Illuminate's five academic initiatives: Health; Data Sciences; Materials Science; Human Flourishing, Leadership and Ethics; and Baylor in Latin America.

A family's legacy of support

Scott and Susan Orr and their family have a deep history in medicine and healthcare, and they are committed followers of Jesus Christ. With this gift, Scott and Susan hope to bring these two passions together to benefit future generations at Baylor.

Scott and Susan both attended the University of Puget Sound and were married in 1986. Scott went on to obtain a juris doctor at Western State University College of Law. Scott spent his career in the healthcare field, initially working for Medical Evaluation Specialists, becoming its president and general counsel, and then Veterans Evaluation Specialists, as its senior vice president and general counsel. Scott and Susan raised Andrew and Jennifer in El Dorado Hills, California, and when Andrew was in 11th grade, he accompanied Scott on a business trip to Dallas, following which they drove down to Waco to tour Baylor. Andrew decided on the spot that Baylor was where he wanted to attend college, and Jennifer ended up following him two years later. Scott and Susan visited Andrew and Jennifer often over the course of their time at Baylor, and they became connected to the Waco community and the Medical Humanities Program, initially through founding director Michael Attas, M.D., and later through Dr. Barron. They were captivated by Dr. Barron's passion for mentoring students and her strong belief in the role of faith in healthcare.

Following their time at Baylor, Scott and Susan's children have both gone on to careers in the healthcare field. Andrew is the vice president of operations at Health by Design. While at Baylor, Andrew was a member of the Medical Service Organization, American Medical Student Association, Future Healthcare Executives and Alpha Lambda Delta honor society. Andrew and his wife, Callie, are active in their church and community in Boerne, Texas, and they recently welcomed their first child, a son, Rowan MacGregor, to their family. Jennifer Bondaruk is a registered nurse, working in both the ED and ICU, and is completing a master's degree in nursing. While at Baylor, Jennifer was a member of the National Student Nurses Association, Medical Services Organization and the Baylor Student Nurses Association. Jennifer and Mark have a daughter, Maya James, and they enjoy serving their community in Flagstaff, Arizona.

Scott and Susan are retired and enjoy spending time traveling and visiting Maya and Rowan.

Baylor publicly launched the Give Light campaign on Nov. 1, 2018. To date, the campaign has raised $1.4 billion and has seen 96,164 alumni, parents and friends give to the University's priorities, establishing 818 endowed scholarships and 45 endowed faculty positions along the way. For more information or to support Give Light: The Campaign for Baylor, visit the Give Light website.

ABOUT BAYLOR UNIVERSITY

Baylor University is a private Christian University and a nationally ranked Research 1 institution. The University provides a vibrant campus community for more than 20,000 students by blending interdisciplinary research with an international reputation for educational excellence and a faculty commitment to teaching and scholarship. Chartered in 1845 by the Republic of Texas through the efforts of Baptist pioneers, Baylor is the oldest continually operating University in Texas. Located in Waco, Baylor welcomes students from all 50 states and more than 100 countries to study a broad range of degrees among its 12 nationally recognized academic divisions.

ABOUT THE COLLEGE OF ARTS & SCIENCES AT BAYLOR UNIVERSITY

The College of Arts & Sciences is Baylor University's largest academic division, consisting of 25 academic departments in the sciences, humanities, fine arts and social sciences, as well as 11 academic centers and institutes. The more than 5,000 courses taught in the College span topics from art and theatre to religion, philosophy, sociology and the natural sciences. The College's undergraduate Unified Core Curriculum, which routinely receives top grades in national assessments, emphasizes a liberal education characterized by critical thinking, communication, civic engagement and Christian commitment. Arts & Sciences faculty conduct research around the world, and research on the undergraduate and graduate level is prevalent throughout all disciplines. Visit the College of Arts & Sciences website.


Discussing Artificial Intelligence (AI) in Data Science with Damodarrao Thakkalapelli – Data Solutions Architect – The Tribune India

Systems and methods for evaluating, validating, correcting, and loading data based on artificial intelligence (AI) input.

For any critical system that deals with huge volumes of data, the main challenge is always to exchange data accurately and on time to avoid an SLA breach. Importing data from a source file should happen smoothly, without failures. If a data load fails due to a data anomaly within the source feed or due to a network issue, it will impact the SLA and the business. This is where the invention "Fix and reload the rejected data automatically within SLA" provides a solution: it minimizes risk by proactively applying auto-correcting techniques to issues, using algorithms run on historical data and logs.

Description:

An electronic system may be configured to receive data feeds from sources and load the data feeds to data structures. The electronic system may be configured to perform the process of receiving and loading the data feeds in accordance with a service level agreement establishing expected characteristics of the process, such as a speed at which the process is completed, a period by which the process is to be completed, and/or the like.

Existing techniques also do not enable real-time synchronization between different servers. For example, databases in a production server may not be efficiently synchronized with a development server for development and testing purposes. In the absence of real-time synchronization of databases to development servers, any errors in the production server may persist there or may need to be corrected using inefficient trial-and-error techniques.

Migrating databases between servers corresponding to different network database applications may be a resource-intensive process in terms of manpower, time, and computing resources. During the migration process, an enterprise organization may need to maintain production and development servers for each of the network database applications in parallel. Using parallel servers may result in inefficiencies for the enterprise organization. For example, monitoring may need to be done in parallel in both the existing servers and the new servers (e.g., for performance validation of databases migrated to new servers). Further, development processes may need to be completed in the previous server prior to migration of databases to the new server. Separate teams may need to be maintained for each of the servers while the migration is underway and for determining whether the migration to the new servers is successful and beneficial. Further, it may only be possible to determine the performance of the databases in the new server once all the databases have been migrated to the new server.
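As a minimal, hedged sketch of the validate, auto-correct and reload idea summarized above (this is an illustration, not the patented system; the validation rules and correction patterns are invented stand-ins for rules mined from historical reject logs):

# Illustrative validate -> auto-correct -> reload loop for an incoming feed.
import time

def validate(record):
    return isinstance(record.get("amount"), float) and record.get("currency") in {"USD", "EUR"}

def auto_correct(record):
    fixed = dict(record)
    if isinstance(fixed.get("amount"), str):
        fixed["amount"] = float(fixed["amount"].replace(",", ""))   # "1,200.50" -> 1200.5
    if fixed.get("currency") == "US$":
        fixed["currency"] = "USD"                                   # normalize a known bad code
    return fixed

def load_feed(records, sla_seconds=60):
    deadline = time.time() + sla_seconds
    loaded, rejected = [], []
    for rec in records:
        fixed = rec if validate(rec) else auto_correct(rec)
        (loaded if validate(fixed) else rejected).append(fixed)
    # One retry pass for rejected records while the SLA window is still open.
    if rejected and time.time() < deadline:
        retry = [auto_correct(r) for r in rejected]
        loaded += [r for r in retry if validate(r)]
        rejected = [r for r in retry if not validate(r)]
    return loaded, rejected

feed = [{"amount": 10.0, "currency": "USD"},
        {"amount": "1,200.50", "currency": "US$"},
        {"amount": None, "currency": "EUR"}]
print(load_feed(feed))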

About Author:

Damodarrao Thakkalapelli is passionate about solving enterprise data architecture problems and ensuring customers achieve their desired business outcomes. As a solution architect, he designs comprehensive solutions, considering a variety of elements such as hardware, software, network infrastructure, and data management systems.

As a Solution Architect, he assesses many technological possibilities and decides wisely based on compatibility, cost, and best practices in the industry. He oversees the implementation process, directs development teams, and manages technical issues. He also makes recommendations for the technologies that are most appropriate for the organization's long-term strategy. He ensures the implemented solution adheres to the design principles, meets quality standards, and fulfils business requirements.

As an architect, he always investigates, identifies, and assesses risks associated with the solution, such as security vulnerabilities, data privacy concerns, and performance bottlenecks. He develops strategies to mitigate risks and ensure the solution's reliability and robustness. He continuously evaluates the implemented solution, gathers feedback, and identifies areas for improvement. He stays updated with emerging technologies, industry trends, and best practices, incorporating them into future solution designs.

Disclaimer: The above is a sponsored article and the views expressed are those of the sponsor/author and do not represent the stand and views of The Tribune editorial in any manner.



Young leaders learn from Nobel Laureates at Science and … – Lawrence Livermore National Laboratory (.gov)

Early-career staff scientists Kelli Humbird, Chris Young and Brian Giera connected with Nobel Laureates and discussed important global issues ranging from AI to climate change at the 20th annual meeting of the Science and Technology in Society (STS) Forum in Kyoto, Japan.

Lab Director Kim Budil, Acting Chief of Staff Ashley Bahney and Strategic Deterrence Associate Director Mark Herrmann also attended the annual meeting in early October, along with nearly 1,000 scientists and leaders from across the world in the fields of politics, business and technology.

"It is an honor for representatives of Lawrence Livermore National Laboratory, especially our young leaders, to participate in these important discussions about the impacts and implications of science and technology from a long-term perspective, looking far into the future," Budil said.

Budil was a speaker on a panel on fusion energy chaired by Pascal Colombani, former chairman and CEO of the Atomic Energy Agency. The panel also featured Satoshi Konishi, co-founder and chief fusioneer of Kyoto Fusioneering Ltd.; Pietro Barabaschi, director-general of the International Fusion Energy Organization (ITER); and Zensho Yoshida, director general of Japan's National Institute for Fusion Sciences.

Other speakers and panels covered artificial intelligence, preparation for the next pandemic, digital equity, healthy aging, deep-sea exploration and exploitation, human activity in space, climate change and international collaboration. Los Alamos National Laboratory Director Thomas Mason chaired a panel on action for net-zero emission. After each panel, attendees participated in small group discussions of relevant topics.

Giera, Humbird and Young also participated in the Dialogue between Young Leaders and Nobel Laureates, in which they were each able to meet two Nobel Laureates. Bahney, who previously attended the STS Forum as a Young Leader, joined an alumnae group. Young Leaders must be 40 years or younger, have strong leadership roles and have a Ph.D., M.D. or equivalent degree.

"The meeting was a unique experience. Unlike many scientific conferences that are usually focused on a single field of research like plasma physics, this event had a diverse set of attendees. There were folks from all over the world, some researchers, some CEOs, government officials and more, so the conversations on specific topics were quite fascinating with such a wide array of perspectives," said Humbird, a team lead for inertial confinement fusion (ICF) cognitive simulations.

Both Humbird and Giera were inspired by Wolfgang Ketterle, an MIT professor who won the 2001 Nobel Prize in physics for creation of the first gaseous Bose-Einstein condensate, a phase of matter.

"A key takeaway from our conversation with Nobel Laureate Ketterle was to take big risks, but to believe in yourself while doing so," said Giera, director of the Data Science Institute.

Young met with J. Georg Bednorz, winner of the 1987 Nobel Prize in physics for his work at IBM Zurich developing new oxide materials that demonstrated superconductivity at higher temperatures than was previously possible, and Ada E. Yonath, winner of the 2009 Nobel Prize in chemistry for her work studying the structure and function of the ribosome.

"Both described how their work went against the common thinking in their fields at the time and required extensive persistence in the face of failure before eventually producing their respective breakthroughs," said Young, lead designer for the Hybrid-E 1050 Coupling campaign on the National Ignition Facility.

All three of LLNL's Young Leaders say they will carry forward the experience of the STS Forum into their daily work.


In a 1st, AI neural network captures ‘critical aspect of human … – Livescience.com

Neural networks can now "think" more like humans than ever before, scientists show in a new study.

The research, published Wednesday (Oct. 25) in the journal Nature, signals a shift in a decades-long debate in cognitive science, a field that explores what kind of computer would best represent the human mind. Since the 1980s, a subset of cognitive scientists have argued that neural networks, a type of artificial intelligence (AI), aren't viable models of the mind because their architecture fails to capture a key feature of how humans think.

But with training, neural networks can now gain this human-like ability.

"Our work here suggests that this critical aspect of human intelligence can be acquired through practice using a model that's been dismissed for lacking those abilities," study co-author Brenden Lake, an assistant professor of psychology and data science at New York University, told Live Science.


Neural networks somewhat mimic the human brain's structure because their information-processing nodes are linked to one another, and their data processing flows in hierarchical layers. But historically the AI systems haven't behaved like the human mind because they lacked the ability to combine known concepts in new ways, a capacity called "systematic compositionality."

For example, Lake explained, if a standard neural network learns the words "hop," "twice" and "in a circle," it needs to be shown many examples of how those words can be combined into meaningful phrases, such as "hop twice" and "hop in a circle." But if the system is then fed a new word, such as "spin," it would again need to see a bunch of examples to learn how to use it similarly.

In the new study, Lake and study co-author Marco Baroni of Pompeu Fabra University in Barcelona tested both AI models and human volunteers using a made-up language with words like "dax" and "wif." These words either corresponded with colored dots, or with a function that somehow manipulated those dots' order in a sequence. Thus, the word sequences determined the order in which the colored dots appeared.

So given a nonsensical phrase, the AI and humans had to figure out the underlying "grammar rules" that determined which dots went with the words.
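To give a flavor of the kind of task involved, here is a toy interpreter of an invented instruction language; the word-to-meaning assignments below are invented for illustration and are not the mappings used in the study.

# Toy compositional language: primitive words map to colored dots, modifier
# words transform the sequence. Learning the task means inducing rules like
# these from examples, rather than being handed them as in this hard-coded sketch.
PRIMITIVES = {"dax": "RED", "wif": "BLUE", "lug": "GREEN"}

def interpret(phrase):
    dots = []
    for word in phrase.split():
        if word in PRIMITIVES:
            dots.append(PRIMITIVES[word])
        elif word == "twice" and dots:     # repeat the previous dot
            dots.append(dots[-1])
        elif word == "kiki" and dots:      # reverse the sequence built so far
            dots = dots[::-1]
    return dots

print(interpret("dax twice"))       # ['RED', 'RED']
print(interpret("dax wif kiki"))    # ['BLUE', 'RED']

The human participants and the models were, in effect, asked to induce rules like these from a handful of examples and then apply them to novel combinations.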

The human participants produced the correct dot sequences about 80% of the time. When they failed, they made consistent types of errors, such as thinking a word represented a single dot rather than a function that shuffled the whole dot sequence.

After testing seven AI models, Lake and Baroni landed on a method, called meta-learning for compositionality (MLC), that lets a neural network practice applying different sets of rules to the newly learned words, while also giving feedback on whether it applied the rules correctly.


The MLC-trained neural network matched or exceeded the humans' performance on these tests. And when the researchers added data on the humans' common mistakes, the AI model then made the same types of mistakes as people did.

The authors also pitted MLC against two neural network-based models from OpenAI, the company behind ChatGPT, and found both MLC and humans performed far better than OpenAI models on the dots test. MLC also aced additional tasks, which involved interpreting written instructions and the meanings of sentences.

"They got impressive success on that task, on computing the meaning of sentences," said Paul Smolensky, a professor of cognitive science at Johns Hopkins and senior principal researcher at Microsoft Research, who was not involved in the new study. But the model was still limited in its ability to generalize. "It could work on the types of sentences it was trained on, but it couldn't generalize to new types of sentences," Smolensky told Live Science.

Nevertheless, "until this paper, we really haven't succeeded in training a network to be fully compositional," he said. "That's where I think their paper moves things forward," despite its current limitations.

Boosting MLC's ability to show compositional generalization is an important next step, Smolensky added.

"That is the central property that makes us intelligent, so we need to nail that," he said. "This work takes us in that direction but doesn't nail it." (Yet.)


What Is Hybrid Cloud Security? How it Works & Best Practices – eSecurity Planet


Hybrid cloud security is a framework for protecting data and applications in a computing environment that includes both private and public clouds. It combines on-premises and cloud-based resources to satisfy an organization's diversified computing demands while ensuring strong security. This approach to cloud computing enables enterprises to benefit from the scalability and flexibility provided by public clouds while maintaining sensitive data within their own infrastructure.

As more businesses embrace hybrid cloud models to cater to their different computing demands, safeguarding the boundary between these environments has become critically important, making hybrid cloud security a top priority for ensuring protection, compliance, and resilience in an ever-changing digital ecosystem.


Hybrid cloud security combines on-premises controls and practices with cloud-specific solutions, reinforcing data and application protection across environments. It starts with analyzing and categorizing data and progresses to customized security measures, and it generally follows established best practices for both network security and cloud security.

These components work together to establish a complete hybrid cloud security strategy, but the specific components and their configuration will vary depending on the organization's security needs and the cloud services it employs.

A hybrid cloud architecture primarily involves integrating different types of cloud and on-premises technology to fulfill an organization's unique demands. Here are some examples of hybrid cloud security architectures.

An enterprise in this case combines its on-premises data center or infrastructure with a public cloud. Some workloads, apps, or data may be hosted on the organization's own servers, while others may be offloaded to a public cloud provider such as AWS, Azure, or Google Cloud.

Here, businesses can combine a public cloud with a private cloud, which may be housed in a dedicated data center. They use the public cloud for some processes and services, but keep a private cloud for more sensitive data or mission-critical applications.

Businesses may mix various public cloud providers, private clouds, and on-premises technology in more complex setups. This enables them to select the most appropriate environment for each workload, application, or data type.

Data synchronization is critical in hybrid cloud architectures to provide consistency across infrastructures. Connecting private clouds, legacy systems, and public clouds through the internet or private networks guarantees that data and applications flow seamlessly. A single management tool facilitates supervision because managing numerous cloud environments independently can be complicated due to differences in APIs, SLAs, and features from different providers. This provides a centralized interface for effective control and monitoring of hybrid cloud resources.

A hybrid cloud infrastructure gives enterprises a scalable, adaptable, and cost-effective solution that prioritizes data protection, privacy, and disaster recovery. This approach ensures business continuity and adaptation to changing demands by allowing for smooth resource allocation and cost control.

Hybrid clouds offer flexibility for enterprises with a wide range of demands and endpoints. They enable you to effortlessly move between on-premises and cloud servers based on your needs. You may manage your infrastructure at your own speed and respond quickly to changing demands.

It can be expensive to set up and manage on-premises data centers. By transferring resource-intensive activities to the cloud, a hybrid cloud approach can allow for cost-effective solutions. Cloud companies charge depending on consumption, which can lower infrastructure and maintenance costs, particularly for companies trying to meet fluctuating demand. Real-time monitoring and clear payment alternatives help with expenditure control.

Hybrid architecture is extremely scalable, allowing for company expansion by adding or deleting cloud servers as required. Employees may connect to the office system using a variety of devices without the need for extra hardware. Depending on demand, operations can be scaled up or down to optimize expenses.

Large amounts of data may be stored and analyzed in the cloud. To guard against cyber attacks, cloud systems include powerful security features such as encryption, firewalls, authentication, and data backups. Data security is improved by privacy features like number masking and dynamic caller IDs. Hybrid solutions enable you to preserve sensitive data on private clouds while keeping general data on public servers.

Cloud bursting allows workloads to be expanded to a public cloud during demand surges and then scaled down to the original server. This rented resource solution saves money and time while adjusting to changing workloads.

If security, privacy and regional compliance demands are met, storing or backing up critical data on cloud servers improves disaster recovery capability. Multiple backups provide data management even in the face of unforeseen occurrences like natural catastrophes. Because cloud-based operations can be expanded and controlled from anywhere, they provide business continuity in crisis scenarios.

When compared to typical security methods, securing a hybrid cloud environment brings unique challenges, particularly for enterprises with stringent regulatory requirements and established procedures. Some areas of concern include:

It is important to understand the shared responsibility of your company and cloud service providers. Cloud providers protect the infrastructure, but clients must protect their data and applications.

How to address this challenge: To protect data and applications, ensure that providers can satisfy regulatory requirements and incorporate business continuity and disaster recovery strategies in service level agreements (SLAs). And keep tight controls on access, while guarding against other frequent cloud security mistakes.

When issues develop within the infrastructure of a cloud service provider, teamwork is required to resolve them. Issues such as data commingling in multicloud systems, data privacy influencing log analysis, and disparities in defining what constitutes an event can all provide difficulties.

How to address this challenge: To reduce downtime and data exposure, enterprises should define explicit incident response plans, including communication methods, and verify they comply with the cloud provider's policies.

Cloud applications are vulnerable to a variety of security risks, and a range of products address certain areas of this issue, such as software development life cycle security, authentication, compliance, monitoring, and risk management. Managing them separately can be difficult logistically, so look for solutions that incorporate various security roles.

How to address this challenge: Organizations should take a DevSecOps approach to security, including it in the application development lifecycle. Using automated security testing tools and doing frequent code reviews helps to protect the integrity of apps.

Because sensitive data is dispersed across several environments in hybrid cloud security, consistent security procedures and monitoring are required to prevent exposure and breaches.

How to address this challenge: Using a data-centric security approach, such as data encryption, data classification, access restrictions, and data loss prevention solutions, may help protect sensitive information no matter where it is stored.
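As one hedged illustration of a data-centric control (a sketch, not a product recommendation; it assumes the Python cryptography package is installed), sensitive fields can be encrypted at the application layer before a record is replicated to a public cloud tier, so the cloud copy never holds cleartext:

# Illustrative field-level encryption before replicating a record off-premises.
# Key management (KMS/HSM storage, rotation) is deliberately out of scope here.
from cryptography.fernet import Fernet

key = Fernet.generate_key()     # in practice, fetched from a key management service
cipher = Fernet(key)

record = {"customer_id": "C-1042", "national_id": "123-45-6789", "region": "eu-west"}
SENSITIVE_FIELDS = {"national_id"}

protected = {
    field: cipher.encrypt(value.encode()).decode() if field in SENSITIVE_FIELDS else value
    for field, value in record.items()
}
print(protected)                                                     # safe for the public cloud copy
print(cipher.decrypt(protected["national_id"].encode()).decode())    # recoverable where the key lives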

Compliance and auditing are difficult in hybrid cloud environments because varying standards must be followed across multiple clouds, demanding complex monitoring, reporting, and adherence processes.

How to address this challenge: To ease the compliance burden, organizations should establish a centralized compliance and auditing system that uses automated tooling to monitor and report on the compliance status of the entire hybrid cloud environment.
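
The sketch below illustrates the idea of a centralized, automated compliance check: one rule set evaluated against resources pulled from every environment, producing a single report. The rule names and resource attributes are invented for the example.

```python
# Sketch of a centralized compliance audit across hybrid-cloud resources.
from typing import Callable

RULES: dict[str, Callable[[dict], bool]] = {
    "encryption-at-rest": lambda r: r.get("encrypted", False),
    "no-public-buckets":  lambda r: not r.get("public", False),
    "logging-enabled":    lambda r: r.get("logging", False),
}

def audit(resources: list[dict]) -> list[dict]:
    report = []
    for res in resources:
        failed = [name for name, check in RULES.items() if not check(res)]
        report.append({"resource": res.get("id"), "cloud": res.get("cloud"),
                       "compliant": not failed, "failed_rules": failed})
    return report

inventory = [
    {"id": "bucket-1", "cloud": "public-a", "encrypted": True,  "public": False, "logging": True},
    {"id": "vm-42",    "cloud": "private",  "encrypted": False, "public": False, "logging": False},
]
for row in audit(inventory):
    print(row)
```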

While specific configurations differ, adopting the following best practices helps businesses mitigate risks and respond successfully to security challenges.

Encrypting data in transit and then inspecting it keeps sensitive information private during transmission while still allowing potential security risks or breaches to be detected, protecting both confidentiality and visibility.

Continuous configuration monitoring and auditing help detect deviations from defined security standards and policies, keeping the hybrid cloud environment compliant and safe. Monitor and audit settings across all of your clouds and data centers regularly: misconfigurations, frequently the result of human error, are a major source of vulnerabilities, and automation is a useful way to enforce secure setups.
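
A minimal drift-detection sketch follows. The baseline settings are illustrative, and fetch_live_config is a hypothetical hook into whatever inventory or provider APIs you use.

```python
# Sketch of configuration drift detection: compare the live configuration of
# each environment against an approved baseline and flag deviations.
def fetch_live_config(environment: str) -> dict:
    raise NotImplementedError("pull this from your CMDB or provider APIs")

BASELINE = {
    "ssh_password_auth": False,
    "tls_min_version": "1.2",
    "public_snapshots": False,
}

def detect_drift(environment: str) -> list[str]:
    live = fetch_live_config(environment)
    drift = []
    for setting, expected in BASELINE.items():
        actual = live.get(setting)
        if actual != expected:
            drift.append(f"{environment}: {setting} is {actual!r}, expected {expected!r}")
    return drift
```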

Vulnerability scans uncover potential weaknesses in the system, allowing prompt remediation before hostile actors can exploit them. Run scans regularly, and use automated tools that prioritize vulnerabilities by risk so remediation effort is spent where it matters most.
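
One simple way to express risk-based prioritization is to rank findings by CVSS score weighted by asset criticality, as in the sketch below; the CVEs, assets, and weights are illustrative.

```python
# Sketch of risk-based vulnerability prioritization: rank findings by a simple
# product of CVSS score and asset criticality so the highest-risk items are fixed first.
findings = [
    {"cve": "CVE-2023-0001", "cvss": 9.8, "asset": "payment-api",   "criticality": 1.0},
    {"cve": "CVE-2023-0002", "cvss": 6.5, "asset": "internal-wiki", "criticality": 0.3},
    {"cve": "CVE-2023-0003", "cvss": 7.2, "asset": "customer-db",   "criticality": 0.9},
]

def risk_score(f: dict) -> float:
    return f["cvss"] * f["criticality"]

for f in sorted(findings, key=risk_score, reverse=True):
    print(f"{f['cve']}  {f['asset']:14s}  risk={risk_score(f):.1f}")
```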

Applying security updates regularly keeps software and systems current, addressing known vulnerabilities and improving the hybrid cloud infrastructure's security posture. Shortening the gap between patch release and deployment narrows the window of opportunity for attackers.

Zero trust security treats all users and devices as untrusted, requiring strong authentication and access rules to reduce the risk of unauthorized access or lateral movement by attackers. Implement zero-trust principles that prioritize least-privilege access and strong authentication.
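
The sketch below shows the shape of a zero-trust authorization check: every request is evaluated on identity, device posture, and a least-privilege role table, never on where it came from. The request fields and roles are assumptions for the example.

```python
# Sketch of a zero-trust authorization check. Network origin is never consulted;
# identity, device posture, and least-privilege role decide the outcome.
ROLE_PERMISSIONS = {
    "analyst":  {"read:reports"},
    "operator": {"read:reports", "write:configs"},
}

def authorize(request: dict) -> bool:
    if not request.get("mfa_verified"):        # strong authentication required
        return False
    if not request.get("device_compliant"):    # untrusted device, untrusted request
        return False
    allowed = ROLE_PERMISSIONS.get(request.get("role"), set())
    return request.get("action") in allowed    # least privilege

print(authorize({"role": "analyst", "action": "write:configs",
                 "mfa_verified": True, "device_compliant": True}))  # False
```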

Create an effective response strategy for security compromises. In the event of a breach or disaster, a recovery plan specifies how to restore services and data while minimizing downtime and data loss and preserving business continuity. Keeping backup storage separate from the original data source removes a single point of failure and speeds up remediation.

Endpoint security solutions, such as EDR and multi-factor authentication, prevent unauthorized access and data breaches by securing devices and user access points. Even as cloud computing transforms company security, endpoints often remain a weak link, so it is critical to protect data moving through and between these devices.

The hybrid cloud security landscape is continuously expanding, and several major companies now offer comprehensive solutions to protect your data and apps in hybrid environments. Here are three of the top hybrid cloud security solutions to consider.

Acronis Cyber Protect Cloud specializes in providing comprehensive services to safeguard data across various environments, particularly in hybrid cloud setups, making it a good option for organizations seeking to secure and manage their data in complex, multi-cloud, and on-premises environments.

Key Features: Acronis includes AI-based antivirus, anti-malware, and anti-ransomware technologies for proactive threat prevention, as well as fail-safe patching, forensic backup, and continuous data protection.

Services: Data backup and recovery, cybersecurity tools against malware, ransomware, and other threats, and services for data storage and management.

Unique Offering: AI-Based Protection, blockchain technology, and integrated data protection.

Skyhigh's Cloud Native Application Protection Platform offers an all-in-one solution for securing cloud-native applications, built around a risk-based view of application and data context.

Key Features: Skyhigh's CNAPP examines workloads, data, and infrastructure in real time, detecting misconfigurations, software vulnerabilities, and sensitive data. For comprehensive security, it defends against configuration deviations, automates assessments, and supports short-lived workloads with application allow-listing, workload reinforcement, integrity monitoring, and On-Premises Data Loss Prevention (DLP) Scanning.

Services: Offers a unified set of controls based on an integrated platform, customer assistance, and expert guidance.

Unique Offering: Skyhigh (formerly McAfee MVISION) is a pioneering platform that integrates application and data context, combining Cloud Security Posture Management (CSPM) for public cloud infrastructure and Cloud Workload Protection Platform (CWPP) for application protection across virtual machines, compute instances, and containers.

The Trend Micro Cloud One platform has broad support across public cloud providers (AWS, Google Cloud, Azure), VMware-based private clouds, and on-premises storage.

Key Features: Trend Micro offers AI and ML-powered vulnerability analysis, a bug bounty program for zero-day attack readiness, contributions from 15 global research centers, managed detection and response services, protection for cloud-native applications, and versatile integrations via native APIs. Advanced automation enhances vulnerability detection and compliance monitoring.

Services: Managed detection and response, threat analysis, and professional assistance are all available through the platform.

Unique Offering: Provides full coverage, including open source assets, filling a critical cybersecurity gap. Trend Micro's relationship with Snyk offers specific coverage for open source assets, making it a good option for businesses that already rely on open source.

Businesses should explore hybrid clouds if they have dynamic workloads or seasonal swings, want to adopt the cloud gradually, or need flexibility in the face of an uncertain future. Hybrid clouds let businesses adapt at their own pace, offering financial relief and a safety net for those hesitant to embrace full-scale change. Hybrid cloud security, which combines traditional on-premises practices with cloud-specific measures, provides a comprehensive defense strategy, allowing organizations to benefit from cloud computing while safeguarding their data and applications against evolving cyber threats and regulatory compliance issues.

View original post here:
What Is Hybrid Cloud Security? How it Works & Best Practices - eSecurity Planet

Read More..

Microsoft, Alphabet and Amazon: AI instrumental in cloud race as … – Euronews

Cloud services are turning tech companies around after last year's slump.

Big Tech is making a comeback after a difficult year as the sector's heavyweights report their quarterly figures this week, with investment in generative artificial intelligence (AI) paying off amid the cloud services race.

In 2022, tech companies saw massive layoffs and other cost-cutting measures after advertisers and consumers cut back on spending amid an uncertain economic climate.

But the launch of OpenAI's ChatGPT chatbot late last year is helping to turn tech companies around.

Microsoft, which has invested $10 billion (€9.4 billion) in OpenAI, on Tuesday reported revenue growth of almost 13 per cent year-on-year to $56.5 billion (€53 billion).

The tech giant's AI investments helped boost sales in the September quarter, especially in its Azure cloud programme.

"With copilots, we are making the age of AI real for people and businesses everywhere," Satya Nadella, chairman and chief executive officer of Microsoft, said in a statement.

"We are rapidly infusing AI across every layer of the tech stack and for every role and business process to drive productivity gains for our customers.

As for Google's parent company Alphabet, it posted quarterly sales of $76.69 billion (€72 billion), up 11 per cent from the same period a year ago.

But growth in Alphabet's cloud division was close to a three-year low, and cloud revenue came in $20 million (€18 million) below analysts' expectations. The market reacted on Wednesday, with shares in Alphabet falling 8 per cent.

Google has been racing to add generative AI to more of its products, and Alphabet's cloud division has been trying to catch up with Microsoft's Azure and Amazon Web Services (AWS).

On Wednesday, AWS announced a major cloud development, saying the company will launch an independent cloud service in Europe.

The Amazon Web Services European Sovereign Cloud will carve out a space for highly regulated companies and the public sector to store and keep data in the European Union.

It comes as the bloc gets tough on data being kept on non-European company servers and as it pushes for digital sovereignty, the idea that the EU should control its data and technology.

Amazon is due to release its Q3 results on Thursday.

Continue reading here:
Microsoft, Alphabet and Amazon: AI instrumental in cloud race as ... - Euronews

Read More..

‘Shift Left and Stretch Right’ Is the Future of SDVs – EE Times Europe

In the world of embedded systems, software and hardware have traditionally been intricately connected. Given the resource constraints and deadlines, developers are compelled to ensure flawless integration between software and hardware. That, in turn, usually means extensive rounds of on-device testing to ensure interactions between interrupts, device drivers and application-level logic work as intended.

This traditional approach to development in embedded systems is increasingly at odds not just with rapid product development lifecycles but with the demands of service-led business models. A prime example lies in automotive design. OEMs are embracing the concept of the software-defined vehicle (SDV). When a vehicle goes out onto the market today, its capabilities are almost completely fixed, with only critical firmware updates applied during regular garage-service visits. The SDV is designed to be enhanced over its entire lifetime.

Making the SDV possible calls for a platform approach. Much of the software today runs on a broad spectrum of dedicated microcontrollers distributed around the car, arranged in many subsystems. Currently, this software changes infrequently. Looking ahead, to allow future functional upgrades via software, these functions need to be consolidated into a smaller number of high-performance multicore microprocessors. The in-vehicle network delivers real-time messages to control motors and other actuators in the car and transfers sensor inputs to the many tasks that run in each multicore processor complex.

The decoupling of hardware and software allows greater flexibility in application and architecture design and allows for increases in functionality over time (Figure 1). Software defines the experience for the driver by improving safety, cutting energy consumption and improving vehicle reliability. It also becomes a key differentiator for the OEM, which can create new revenue streams with cloud-based services available to drivers after the sale of the vehicle. Another advantage for the OEM is that the decoupling encourages greater software reuse across different vehicles, enabling a single application to support many vehicle designs, with minimal adaptation for each one.

These trends will change the way in which software is created and maintained. One change is to shift left to complete software earlier in the product development lifecycle, even when prototype hardware is not available. The second is to stretch right to support the ability to update the vehicles after they are in the hands of the drivers, using over-the-air (OTA) updates to add functionality to the vehicles throughout their lifetime.

Though shifting left and stretching right may look as though they demand different approaches, their requirements largely coincide if development teams choose the right software development methodology (Figure 2). This methodology is built on top of the concept of continuous integration and continuous, or near-continuous, deployment. It is an approach that has been used successfully in the enterprise space.

Embedded systems, such as SDVs, are turning to many of the same supporting technologies like virtualization and the use of software containers to isolate software modules and abstract them from the underlying hardware. The approach also provides easier integration with the cloud-based processes that will be employed for many of the value-added services OEMs will offer. These services will often fuse the core car capabilities with AI and analytics hosted in the cloud.

The core change for embedded systems is to remove the need for prototyping on physical hardware or at least reduce it to the bare minimum to ensure that assumptions about timing and hardware behavior are realistic.

In the cloud environment, containerization has been an important element in the adoption of continuous integration and deployment methodologies. Containers reduce the hardware dependencies of an application. They achieve this by packaging applications alongside the support libraries and device drivers with which they were tested and by isolating them from the underlying operating system. In the embedded environment, an additional layer of isolation and protection is enabled by virtualization. Under virtualization, a hypervisor maps the I/O messages to the underlying hardware. The hypervisors management of the virtual platform also helps enforce secure isolation between functionally independent tasks running on the same processor complex.

Containerization will help boost flexibility and the ability of OEMs to deploy updates, particularly in parts of the system where OTA updates are likely to be frequent, such as the infotainment modules in the vehicle cabin. However, though they will be more decoupled, hardware interfaces and the dependencies they impose will remain vitally important to the functionality of the car's real-time control and safety systems. Developers will want to see how changes in interrupt rates and the flow of sensor data will affect the way their software responds. The answer lies in the digital twin.

A digital twin is a model of the target that replicates hardware and firmware behavior to the required level of detail needed for testing. The key advantage of the digital twin is that developers do not need to access hardware to perform most of their tests. The twin can run in desktop tools or cloud-based containers either in interactive debug mode or in highly automated regression test suites. The regressions perform a variety of tests that accelerate quality-control checks whenever changes are made. Increasingly, teams are making use of analytics and machine-learning techniques to home in on bugs faster and remove them from the build before they progress too far.
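
A sketch of what such an automated regression might look like is shown below. The TwinSimulation class is a stand-in for whatever virtual-platform API a real toolchain exposes, and the signals, build name, and thresholds are invented for illustration.

```python
# Sketch of an automated regression check against a digital twin.
# TwinSimulation is a hypothetical wrapper around a virtual ECU.
import unittest

class TwinSimulation:
    """Hypothetical wrapper around a virtual processor complex running the latest build."""
    def load_build(self, image: str) -> None: ...
    def inject_sensor_trace(self, trace: list[float]) -> None: ...
    def run(self, milliseconds: int) -> dict:
        # Would drive the simulated hardware model and collect metrics.
        return {"worst_case_latency_ms": 4.2, "torque_error_pct": 0.8}

class BrakingControllerRegression(unittest.TestCase):
    def test_latency_budget_held_under_sensor_burst(self):
        twin = TwinSimulation()
        twin.load_build("controller-build-1234.elf")            # illustrative build name
        twin.inject_sensor_trace([0.0, 0.4, 0.9, 1.0] * 250)    # replayed sensor burst
        metrics = twin.run(milliseconds=500)
        self.assertLessEqual(metrics["worst_case_latency_ms"], 5.0)
        self.assertLessEqual(metrics["torque_error_pct"], 1.0)

if __name__ == "__main__":
    unittest.main()
```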

As each update is made, it can be tested against any other code modules or subsystems that might be affected to check whether the changes lead to unexpected interactions or problems. The digital twin does not entirely replace hardware in the project. Conventional hardware-in-the-loop tests will still be used to check the behavior of the digital twin simulations against real-world conditions. But once divergences in behavior are ironed out, the digital twin can be used extensively to support mid-life updates. The extensive pre-hardware tests, which can be run at speed in the cloud across multiple servers, will give OEMs the confidence to roll out OTA updates with new features as they are made ready.

The accuracy of the models used in the digital twin is important, though fully timing-accurate models are not necessary for many tests. Highly detailed models with full timing information typically run much slower than fast models, which are optimized for analyzing instruction throughput and application logic on the target processor and can run close to real time on cloud-server hardware. Partitioning tests so that only the components or subsystems that need a fully detailed simulation run at that fidelity will optimize test time and streamline the verification process.
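
The partitioning idea can be as simple as tagging each suite with whether it depends on timing and routing it to the appropriate model, as in the sketch below; the suite names are illustrative.

```python
# Sketch of fidelity-aware test partitioning: run most suites on a fast
# functional model and reserve the slow, cycle-accurate model for tests
# that genuinely depend on timing.
TEST_SUITES = [
    {"name": "application-logic", "needs_timing": False},
    {"name": "diagnostics-api",   "needs_timing": False},
    {"name": "interrupt-latency", "needs_timing": True},
    {"name": "bus-arbitration",   "needs_timing": True},
]

def partition(suites: list[dict]) -> dict[str, list[str]]:
    plan = {"fast_model": [], "cycle_accurate_model": []}
    for suite in suites:
        target = "cycle_accurate_model" if suite["needs_timing"] else "fast_model"
        plan[target].append(suite["name"])
    return plan

print(partition(TEST_SUITES))
# -> fast_model: application-logic, diagnostics-api;
#    cycle_accurate_model: interrupt-latency, bus-arbitration
```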

Models can be built by the OEMs and subsystem suppliers. However, the digital twin is an area where partnership with the right semiconductor vendors provides a significant advantage. Vendors have committed to developing models of their silicon platforms for as much as a year before the products are ready to ship to OEMs for assembly into prototypes and end products.

As well as supporting shift left lifecycle acceleration, models provide the ability for OEMs and subsystem providers to learn quickly how architectural innovations can benefit the target applications. An example is the magnetoresistive random-access memory (MRAM) that is set to appear in future automotive SoCs, providing a high-performance alternative to flash and a way of overcoming the problems of using volatile DRAM and SRAM for persistent data. A basic model may treat non-volatile memory like flash and MRAM as equivalent and make no distinction in latency or bandwidth: The models just reflect the non-volatile behavior.

The basic model can then be replaced by one with a higher level of detail that reflects differences in write and read times and other aspects of behavior, for example the ability to access arbitrary locations without caching the data in DRAM, or support for rapid writes that suits frequent changes to algorithm parameters and improves performance. Those differences can be exploited by changes to the code base that take full advantage of the technology where it is available. As a result, by adopting a model-centric approach to development, software teams can help specify future hardware implementations and improve performance over several generations of a vehicle or other system.
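
The sketch below illustrates the modeling step in miniature: a simple non-volatile-memory model parameterized with latency figures, so that a flash-like and an MRAM-like instance can be swapped into a digital twin. The latency numbers are placeholders, not vendor specifications.

```python
# Sketch of swapping a basic non-volatile-memory model for a more detailed one
# in a digital twin. Latency figures are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class NvmModel:
    """Non-volatile memory model; zero latencies give the basic, timing-agnostic case."""
    read_ns: float = 0.0
    write_ns: float = 0.0

    def access_cost(self, op: str, nbytes: int) -> float:
        per_word = self.read_ns if op == "read" else self.write_ns
        return per_word * (nbytes / 4)  # assume 32-bit word accesses

# Detailed models distinguish technologies so software can exploit, say,
# MRAM's faster arbitrary writes without staging data through DRAM.
flash_model = NvmModel(read_ns=50.0, write_ns=900.0)  # assumed figures
mram_model  = NvmModel(read_ns=35.0, write_ns=40.0)   # assumed figures

for name, model in [("flash", flash_model), ("mram", mram_model)]:
    print(name, "write 64B:", model.access_cost("write", 64), "ns")
```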

Once embedded in the development flow, the methodologies underpinning stretch right will enable continuous improvements to product quality and, in turn, service revenue (Figure 3). The flow of data is not just to the vehicle in the form of OTA updates but also in the other direction: OEMs aim to collect sensor data from the running systems and apply it to a variety of machine-learning and analytics systems. The information feeding into the OEM's data lake can be filtered and applied to the digital twin simulation to gauge how well the different platforms are performing in the real world. In the virtual world of the digital twin, developers can make changes to settings and test new software functions to see how they perform against real-world data.

Improvements can then be tested in the regression environment before being deployed in a new OTA update. This closing of the loop between development and deployment will lead to a much faster cycle of product refinement, improving both existing products and future versions as new hardware is developed, allowing a further shift left for the next generation. It is a further demonstration of how a holistic approach to development, encompassing continuous integration and digital twins, can streamline product design and support. The advantages of this new methodology make the dual targets of shift left and stretch right not just possible but inevitable.

Read also:

Making SDVs Quantum-Secure Ready

Brian Carlson and Joppe Bos of NXP Semiconductors explain the benefits of migrating to software-defined vehicle architectures and the future security threats posed by quantum computing technologies.

SDV Safety Calls for Partnerships, Open Source

Industry experts from Codasip, Elektrobit, TTTech Auto and the Eclipse Foundation discuss safety and security of self-driving software-defined vehicles as well as challenges that this concept represents for automakers.

Visit link:
'Shift Left and Stretch Right' Is the Future of SDVs - EE Times Europe

Read More..