
Infusion of Artificial Intelligence in Biology – The Scientist

In the early 1990s, protein biologists set out to solve a challenge that had puzzled them for decades. The protein folding problem centered on the idea that biologists should be able to predict the three-dimensional structure of a protein from its amino acid sequence, but they hadn't been able to do so in practice. Researchers knew that the ability to determine protein structure without relying on tedious experiments would unlock a plethora of applications (better drug targets, easier protein function determination, and optimized industrial enzymes), so they persisted.

In 1994, a group of researchers led by biophysicist John Moult from the University of Maryland started the biennial Critical Assessment of Protein Structure Prediction (CASP) competition as a large-scale experiment to crowdsource solutions. At every event, the brightest minds in protein biology brought forth models that predicted the structures of a few test proteins chosen by the organizers. The model that yielded structures most closely resembling the experimental data won.

David Baker uses deep learning models to create de novo proteins that are better suited to solving modern problems than natural proteins.

Ian C Haydon

For the first several years, scientists relied on physical prediction models for these challenges, recalled David Baker, a protein design specialist at the University of Washington and a CASP competition contributor and advisor. "Proteins are made out of amino acid residues, which are made out of atoms, and you try and model all the interactions between the atoms and how they drive the protein to fold up," Baker explained.

In 2018 at CASP13, the attendees witnessed a breakthrough. Demis Hassabis, cofounder and chief executive officer of DeepMind, an artificial intelligence company, and his team challenged the status quo by using a deep learning-based model to predict protein structure. They trained their model, AlphaFold, on the sequences and structures of about 100,000 known proteins to enable it to output predictions based on pattern recognition.1

AlphaFold won the competition that year, and the field progressed rapidly thereafter. By the next CASP meeting, the DeepMind team had significantly improved their model, and AlphaFold predicted the structures of the majority of test proteins with an accuracy comparable to experimental methods.2 Based on AlphaFold's success, protein experts declared that the 50-year-old protein folding problem was largely solved. AlphaFold inspired researchers to pivot towards AI for their protein folding models; Baker and his team soon launched their open source deep learning-based protein structure predictor RoseTTAFold.3

While these models successfully predicted the structures of almost all existing proteins, Baker was interested in proteins beyond the database, including proteins that did not yet exist.

AI accelerates protein design

Baker has always been interested in tinkering with proteins and especially in designing new ones. "It wasn't too long after our first successes in structure prediction that we started thinking, well, maybe instead of predicting what structure a sequence would fold up to, we could use these methods to make a completely new structure and then find out what sequence could fold to it," he said.


He and his team developed their first de novo protein, an alpha/beta protein called Top7, using physical modeling methods in 2003.4 Over the years, Baker's team and other researchers steadily expanded the list of de novo proteins.5 Now, with AI tools in their arsenal, researchers can design more complex proteins with a higher success rate, said Baker. Indeed, in the past few years, researchers, including Baker's team, have reported several protein design models.6,7 The team behind one of these models, ProGen, used it to design synthetic lysozymes as a proof of concept.8 Experimental tests revealed that the artificial lysozymes showed catalytic efficiencies matching natural ones, demonstrating the prowess of such models in building utilitarian proteins in the lab.

"The proteins in nature evolved under the constraints of natural selection. So, they solve all the problems that were relevant for natural selection during evolution. But now, we can make proteins specifically for 21st century problems. That is what is really exciting about the field," said Baker.

Using advanced machine learning tools, researchers can create artificial proteins with new functions.

Ian C Haydon

Baker's team is tackling several such need-of-the-hour projects. He recently developed a de novo coronavirus vaccine in collaboration with Neil King, who specializes in protein design at the University of Washington.9 His team also works on targeted cancer drugs, enzymes that break down plastic, and proteins to fix carbon dioxide.

There is always more work to be done. Proteins in cells are often part of macromolecular complexes. Current AI models work well for protein folding predictions or for creating a protein with a specific binding site, but they fall short when it comes to designing more complicated complexes, such as molecular motors. "With the current methods, it's not so obvious how to design machines. That's still a research problem," said Baker.

Building bridges: AI models map cells

According to Trey Ideker, a computational biologist and functional genomics researcher at the University of California, San Diego, the AI-driven progress in protein folding was a huge milestone for biologists. "That impact is still being felt," he said. But it solved just a small part of a complex problem.

With a goal of transforming precision medicine, Trey Ideker develops AI algorithms to analyze tumor genomes.

Trey Ideker

Proteins do not work alone; they interact with other proteins in intricate pathways to enable cellular function and structure. A deeper understanding of cell structure and its determinants will help researchers identify perturbations that indicate diseased states. While cell imaging provides a snapshot of cellular architecture, researchers are far from developing real cell maps and models, according to Ideker.

"How do you AlphaFold a cell?" he questioned. "How would you fold an entire cell for every cell in your body?" Ideker intends to find the answers, and he has just the right resources to do so: a collaborative group of like-minded scientists.

As AI tools become more widespread in biology, many researchers have turned to deep learning models in their projects to improve precision medicine. With data at the crux of these models, it is vital to ensure that researchers have complete datasets to maximize their chances of success. To coordinate this progress, the NIH launched the Bridge2AI program, which focuses on generating the key missing datasets needed to train future AI models and take them to the next level. "It's not AI yet; it's the bridge to AI," said Ideker.

One focus project under this initiative is the Cell Maps for AI (CM4AI) program, which aims to build spatiotemporal maps of cells and connect genotype to phenotype to get a complete picture of cell health. The scientists involved in this program will achieve this by working on all aspects of cellular biology: genetic perturbations, cell imaging for morphology detection, and protein interaction studies. Ideker leads the functional genomics subgroup in the CM4AI program.

"I'm actually optimistic we're going to get there relatively soon. But a lot of work remains and needs continued innovations in AI and data measurements," said Ideker.

Cellular image analysis: AI has an accurate eye

Maddison Masaeli and her team at Deepcell apply AI models to identify cell morphology aberrations in diseases.

Deepcell

Inferring cell health from structure and morphology is second nature for Maddison Masaeli, an engineer-scientist and chief executive officer at Deepcell. "The way that cells look has been integral to biology since the discovery of cells," she said. "It goes all the way from getting a sense about how cells are doing in a culture (whether they're healthy and living and thriving) all the way to diagnosing and staging cancer in a pathology or cytology setting."

When Masaeli worked as a postdoctoral researcher for Euan Ashley, a cardiovascular expert at Stanford University, she studied cardiomyopathy models. Her work relied heavily on phenotypic analysis to determine cardiomyocyte maturity and function. "The tools that we had available as scientists were extremely limited, even to the degree that we couldn't even measure a basic volume of cells," she said.

She sought to leverage computer vision and deep learning to help tackle those challenges, and after seeing their success, Masaeli cofounded Deepcell in 2017. She and her team developed an AI-based image analysis platform trained on large datasets of about two billion image data points gathered from cells from different tissues of both healthy people and patients with diseases.

According to Masaeli, their disease-agnostic platform can detect abnormalities in the morphology of any cell type, which enables a wide range of applications in research and medicine. Some diseases have an obvious connection to cell morphology (for example, tumor cells structurally differ from healthy cells), but finding unexpected connections in other diseases excites Masaeli. For example, in one customer study on aging, the model picked up morphological differences between cells from old patients and those from young patients. After the old-patient cells were exposed to drugs being tested to reverse aging, Masaeli noted that the treated cells resembled the morphology of young-patient cells.

"This is just fascinating [to find] the most non-obvious applications that could be very minute changes in morphology that we didn't have tools to evaluate directly," said Masaeli.

Predictive AI in precision medicine

While AI use cases have sprouted across diverse basic research areas, from single cell studies to neural network models that decode language, most researchers have their eyes on the prize: improving human health.

Nardin Nakhla and her team at Simmunome intend to fix drug discovery's leaky workflow using machine-learning models.

Claudia Grégoire

Nardin Nakhla, a neuroscientist and chief technology officer at Simmunome, intends to fix the leaky drug discovery pipeline. "In the pharma industry, 90 percent of drugs fail, and only 10 percent make it all the way to the market. There's a lot of trial and error," said Nakhla.

A lot of work goes into drug screening and determining the right drug, but sometimes a drug doesn't work because the developers picked the wrong target or causal pathway. Nakhla and her team focus on the early stages of the workflow to minimize downstream losses. They trained their models on how biology works at the molecular level so that the models better understand pathways and can identify causal targets. The team can then simulate the downstream influence of a drug on a pathway and estimate its efficacy in stopping disease progression. "The idea is to provide this tool, so instead of [drug developers] trying five times before they get it right, maybe we can get it right from the first or second time," said Nakhla.

In preliminary tests, the team compared the efficacies of drugs tested in 24 oncology clinical trials with prediction data from their simulations. They found that their models predicted drug efficacies with almost 70 percent accuracy. The Simmunome team intends to conduct more tests in the near future to ensure robust predictions in other disease areas.

Recent breakthroughs in machine learning allow scientists to create protein molecules unlike any found in nature.

Ian C Haydon

While Nakhla hopes to streamline conventional drug discovery processes, Ideker envisions a new world in medicine that includes customized patient therapies. A patient with breast cancer, for instance, may possess up to 50 genetic mutations that alter her response to standard medications. Given that genomic signatures differ between patients, researchers and physicians need the right combination of AI models and genomic data to appropriately treat such a complex perturbation of the system, according to Ideker. His team develops algorithms that analyze a patient's genomic mutations to inform the right treatment course.10

"Essentially, what it's doing is determining or making a prediction on which drugs will produce a response in that patient, and which drugs are likely to not produce a response," said Ideker. In the future, as researchers build more sophisticated AI models, Ideker believes that there will be an armada of clinical trials where patients could avail themselves of personalized medicines catered to their genomes, maximizing the treatment response. "Why is it that Netflix is able to give you recommendations for what movies you're going to like to watch tonight, but your clinician can't get you AI guided recommendations for therapies for how you should be treated?" questioned Ideker.

AI advances: proceed with caution

Today, there is no dearth of appreciation for AI in biology from researchers, investors, and the public. That was not always the case. Ideker recalled that being an early bird in this field was frustrating due to the uphill climb of peer acceptance. "If you've correctly identified what the gap is, and you are trying to push the field forward, there's always resistance," he said. "It's been hard, but it should be."

Although Ideker is happy that biologists are finally warming up to AI, he thinks that some may have veered too far. The hype has gotten to a point where researchers cannot start a new venture without mentioning AI, he joked.

"Everybody thinks that now they need to solve their problem one way or another with AI. And sometimes those problems might not be a great fit for AI and deep learning," agreed Masaeli, who experienced a similar skepticism-to-optimism journey. According to her, there is a lot that AI could help achieve in certain areas, but she urged researchers working in fields where large datasets aren't available to evaluate existing tools rather than forcing AI-based approaches.

Whether researchers use AI methodologies or any other techniques, they need to possess a deep understanding of their topic to succeed, according to Baker. "People were surprised that we transitioned so quickly from physically based models to deep learning models," he said. This was only possible because the researchers had worked on protein design for several years, understood the limitations and possibilities that came with the territory, and developed an intuition for the system, he explained. "If you understand the scientific problem, then AI is just another tool."


Anthropics launches artificial intelligence powered Zyler Virtual Try-On for Menswear solution – Retail Technology Innovation…

Alexander Berend, CEO at Anthropics, says: "We believe that Zyler will transform the way men shop for clothes online. By combining advanced artificial intelligence technology with a user-friendly interface, we aim to provide an unparalleled virtual try-on experience."

"This innovative solution not only sets retailers apart in a competitive market but is also a valuable tool for driving customer engagement, satisfaction and loyalty."

Expanding Zyler Virtual Try-On for menswear from an in-store installation (as currently used, for example, by Larusmiani, an Italian bespoke menswear retailer) to an online offering is a natural move, he adds.

In October of last year, Zyler reported on its work on the fashion rental site of UK retailer John Lewis, which is powered by HURR.

A Try it On feature on the site taps the company's technology.

Customers can see how an outfit looks on them online before they rent it. They upload a headshot and sizing information to try the outfit on virtually, from the comfort of their own home.

Key findings: over 30% of sales come from Zyler users; 16% of web visitors engage with Zyler; and an average of 52 outfits are viewed per user.

"These findings exceeded our expectations and demonstrate a huge customer response to our try-on technology," commented Berend.

There is a significant sales contribution from Zyler users, strong engagement among website visitors, and a high number of outfits viewed per Zyler user.

Danielle Gagola, Innovation Lead at John Lewis, said: "Whatever special event they might be attending, at John Lewis we're always looking for ways to help our customers look and feel their best.

"It has been so exciting to offer styling support in a digital environment using the Zyler technology, and the impressive results we've seen from the first few months show it's resonating with our customers too.

"As we move into winter, we're looking forward to even more customers using the Virtual Try-On service to find that perfect Christmas party outfit."


Artificial intelligence: The future is already here, and businesses will have to play catch-up – The Irish Times

"Data is the new oil," said Barry Scannell, an AI law expert with William Fry, "and AI companies are going to be the new refineries."

He was addressing an audience of tech industry professionals in Trinity College at a summit on artificial intelligence.

"What we're doing now will have ripples for the future," Scannell continued, and as he spoke on a panel shared with OpenAI and Logitech, attendees diligently took notes.

Ireland's 300-odd indigenous AI companies, more than half of them based in and around Dublin, and their multinational competitors have seen an explosion in interest in AI over the past 18 months after products such as ChatGPT opened the world's eyes to the potential of the technology.

Tech companies big and small are scrambling to develop the best software to get an edge over rivals, and chip manufacturing companies, such as Nvidia, have seen huge market gains as they struggle to keep up with demand for microchips capable of running AI.

"Generative AI will drive a paradigm shift in our interaction with technology," Google's Sebastian Haire told the Dublin summit, adding that the world was entering a fourth industrial revolution spearheaded by AI. He joked that even his presentation was out of date in the time between designing and delivering it.

If industry representatives are to be believed, virtually every company is either integrating some form of AI into its systems or is beginning an exploratory phase to assess how it can help their staff improve productivity. By 2025, worldwide spend on AI will reach $204 billion (€188 billion), according to Haire. That's next year.

It is expected that trillions will be pumped into the technology in the years to come.

At the Trinity College conference, AI experts mingled, trying each other's branded cupcakes at industry stands, exchanging niceties and whatnot. But below the surface there's a race going on, and one in which nobody wants to fall behind.

"For companies that miss the boat, they're probably going to miss out on some significant productivity gains," said James Croke, business development officer at Version 1.

"To get the most from AI, companies need to get their data right first. They need to migrate to the cloud. It's a challenge that they need to play catch-up on, but when you look at the potential impact of AI for SMEs in particular, it could allow them to leapfrog rivals on productivity. This could be a once-in-a-generation chance for SMEs."

Markham Nolan, a former journalist and co-founder of Noan, which helps companies utilise artificial intelligence, said AI was "a time machine for small businesses".

"Our users save five to 10 hours a week by using AI," he said. "If you get the prompts right, whatever AI creates will be 80-90 per cent of the way to where it needs to be: you just have to do the polishing."

"The AI has learned [our clients'] brand. It has learned to speak for their brand. So rather than having to sit down and write an email from scratch, they just say, I need this email to be about this, to this company, and five seconds later they'll have a fully fleshed-out email in their voice."

"So many businesses hit a hurdle, a point they can't get over because they don't have the revenue. AI is a bridging technology that allows people to add capacity without cost, and to grow to become the companies they have the potential to be," Nolan said.

Speaking to those attending the conference, you get the sense that AI no longer rests its promise upon pie-in-the-sky transformations in the future, but offers deliverable ones in the short term. Still, there are some innovations that remain difficult to picture in Ireland any time soon.


Matthew Nicholson, a researcher at Trinity's Adapt Centre, was showcasing Swedish company Furhat's social robot. Looking a bit like a bust from Will Smith's I, Robot, with an internal projection displaying various facial expressions, it might act like a robot concierge in a hotel room, Nicholson suggested, though for now it seems a little too bizarre to wake up to.

Other AI applications were more obvious in their benefits for humanity. Unicef's AI lead, Dr Irina Mirkina, suggested that in the future AI will help predict cyclical natural disasters, disease outbreaks and how much aid is needed for emergencies at a pace far faster than any human can calculate.

Chris Hokamp, a scientist with Quantexa, sees a future in which there is likely to be "another species of AIs".

Matthew Nicholson, a researcher at Trinity's Adapt Centre, showcasing Swedish company Furhat's social robot. Photograph: Conor Capplis

"AI should remain a tool for humans for as long as possible," he said, adding: "We don't want to give it emotions." There's a curious casualness about sweeping predictions such as Hokamp's. It's an almost unsurprising prediction in these circles nowadays.

He told the conference that humans must ensure that when AI reaches a superhuman level of intelligence, it must act ethically and adhere to regulations. He takes comfort in the knowledge that bad actors are unlikely to unleash a malevolent superintelligence on the world that they themselves would be unable to control.

Regulation was a key theme of the summit and there was much talk of the EU's proposed AI Act, which seeks for the first time to put in place a legal framework within which AI can operate in the bloc.

Onur Korucu, managing partner at GovernID, a privacy firm, said AI must be regulated in the EU "not to stifle innovation but to put a frame around the innovation and democratise the use of AI".

Mark Kelly, the founder of AI Ireland, argued that a framework from which to develop AI would encourage companies to green-light new AI projects. His advice for Irish AI start-ups was "not to go competing with the likes of OpenAI with video tools ... but if [a smaller Irish company] can go down and solve a niche industry-specific issue [you will succeed]".

He spoke of one company that documented 20,000 client questions over the last 20 years and created a language model around them to save time in answering queries. It led to a 25 per cent increase in clients within one year.

Skillnet Ireland's Tracey Donnery said the number of women in AI is small but increasing, and she appeared optimistic about AI's ability to augment and change jobs rather than solely replacing them. "Hopefully it won't be as dramatic as described by the naysayers," she said.

There is, of course, also a dark side to AI. Dan Purcell, founder of Ceartas, an AI-powered company that takes down deepfakes from the internet, told the conference that sextortion is a growing issue, with young men increasingly accessing technology that uses AI to de-clothe women, an activity that also creates a larger data set for the software to improve its function.

On deepfakes, UCD's Dr Brendan Spillane believes the technology poses a serious risk to society, including the integrity of elections. He said states are using deepfakes to sow social distrust and unrest, and that it is becoming more common for states to outsource the service to private companies.

On the same subject, Rigr AI founder Edward Dixon helps law enforcement agencies around the world, including An Garda Síochána, to find the likely location of sensitive media. When police receive media depicting crimes against children, for example, they might receive hundreds of thousands of files.

"Crimes like terrorism are noisy, public and generally have a lot of bystander imagery and accounts," he told The Irish Times. "The kind of crimes we focus on happen quietly."

The company's tools suggest plausible locations based on a photo or video's environment, the languages spoken, and the names mentioned, ultimately saving investigators hundreds of hours and sparing them the potential psychological damage of examining sensitive media.

It is just one more example of how the long-predicted AI revolution now appears to be here. Whether you are worried, enthused or just baffled, it is hard not to feel as though the AI tide is coming in relentlessly.


I created an AI app in 10 minutes and I might never think about artificial intelligence in the same way again – TechRadar

Pretty much anything we can do with AI today might have seemed like magic just a year ago, but MindStudio's platform for creating custom AI apps in a matter of minutes feels like a new level of alchemy.

The six-month-old free platform, which you can find right now under youai.ai, is a visual studio for building AI workflows, assistants, and AI chatbots. In its short lifespan it's already been used, according to CEO Dimitry Shapiro, to build more than 18,000 apps.

Yes, he called them "apps", and if you're struggling to understand how or why anyone might want to build AI applications, just look at OpenAI's relatively new GPT apps (aka GPTs). These let you lock the powerful GPT-3.5 into topic-based thinking that you can package up, share, and sell. Shapiro, however, noted the limits of OpenAI's approach.

He likened GPTs to "bookmarking a prompt" within the GPT sphere. MindStudio, on the other hand, is generative model-agnostic. The system lets you use multiple models within one app.

If adding more model options sounds complicated, I can assure you it's not. MindStudio is the AI development platform for non-developers.

To get you started, the company provides an easy-to-follow 18-minute video tutorial. The system also helps by offering a healthy collection of templates (many of them business-focused), or you can choose a blank template. I followed the guide to recreate the demo AI app (a blog post generator), and my only criticism is that the video is slightly out of date, with some interface elements having been moved or renamed. There are some prompts to note the changes, but the video could still do with a refresh.

Still, I had no trouble creating that first AI blog generator. The key here is that you can get a lot of the work done through a visual interface that lets you add blocks along a workflow and then click on them to customize, add details, and choose which AI model you want to use (the list includes GPT-3.5 Turbo, PaLM 2, Llama 2, and Gemini Pro). While you don't necessarily have to use a particular model for each task in your app, it might be that, for example, you should be using GPT-3.5 for fast chatbots or that PaLM would be better for math; however, MindStudio cannot, at least yet, recommend which model to use and when.


The act of adding training data is also simple. I was able to find web pages of information, download the HTML, and upload it to MindStudio (you can upload up to 150 files on a single app). MindStudio uses the information to inform the AI, but will not be cutting and pasting information from any of those pages into your app responses.

Most of MindStudio's clients are in business, and it does hide some more powerful features (embedding on third-party websites) and models (like GPT 4 Turbo) behind a paywall, but anyone can try their hand at building and sharing AI apps (you get a URL for sharing).

Confident in my newly acquired, if limited, knowledge, I set about building an AI app revolving around mobile photography advice. Granted, I used the framework I'd just learned in the AI blog post generator tutorial, but it still went far better than I expected.

One of the nice things about MindStudio is that it allows for as much or as little coding as you're prepared to do. In my case, I had to reference exactly one variable that the model would use to pull the right response.

There are a lot of smart and dead-simple controls that can even teach you something about how models work. MindStudio lets you set, for instance, the 'Temperature' of your model to control the randomness of its responses. The higher the 'temp', the more unique and creative each response. If you like your model verbose, you can drag another slider to set a response size of up to 3,000 characters.
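
To make the temperature idea concrete, here is a minimal, hypothetical Python sketch of temperature-scaled sampling; it illustrates the general mechanism behind sliders like MindStudio's, not the platform's actual code.

```python
import numpy as np

def sample_with_temperature(logits, temperature=1.0):
    """Sample one index from a set of scores, with the distribution
    sharpened (low temperature) or flattened (high temperature)."""
    rng = np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-6)
    scaled -= scaled.max()  # subtract the max for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return rng.choice(len(probs), p=probs)

logits = [2.0, 1.0, 0.5, 0.1]
print(sample_with_temperature(logits, temperature=0.2))  # near-greedy pick
print(sample_with_temperature(logits, temperature=2.0))  # more varied picks
```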

The free service includes unlimited consumer usage and messages, some basic metrics, and the ability to share your AI via a link (as I've done here). Pro users can pay $23 a month for the more powerful models like GPT-4, less MindStudio branding, and, among other things, site embedding. The $99-a-month tier includes all you get with Pro, but adds the ability to charge for access to your AI app, better analytics, API access, full chat transcripts, and enterprise support.


I can imagine small and medium-sized businesses using MindStudio to build customer engagement and content capture on their sites, and even as a tool for guiding users through their services.

Even at the free level, though, I was surprised at the level of customization MindStudio offers. I could add my own custom icons and art, and even build a landing page.

I wouldn't call my little AI app anything special, but the fact that I could take the germ of an idea and turn it into a bespoke chatbot in 10 minutes is surprising even to me. That I get to choose the right model for each job within an AI app is even better; and that this level of fun and utility is free is the icing on the cake.


We’re already using AI more than we realize – Vox.com

We're living through an inflection point for artificial intelligence: From generated images and video to advanced personal assistants, a new frontier of technologies promises to fundamentally change how we live, work, and play.

And yet for all the buzz and concerns about how AI will change the world, in many ways, it already has.

From spam filters and sentence suggestions in our email inboxes to voice assistants and fitness tracking built into our phones, countless machine learning tools have quietly woven their way into our everyday lives. But when we're surveyed about which everyday technologies use artificial intelligence and which don't, we aren't particularly good at knowing the difference. Does that matter?

You can find this video and all of Vox's videos on YouTube.

This video is sponsored by Microsoft Copilot for Microsoft 365. Microsoft has no editorial influence on our videos, but their support makes videos like this possible.



Apple says it’ll ‘break new ground’ in generative AI here’s what to expect – TechRadar

Tim Cook has spent the last few months preparing the Apple faithful for the company's first foray into generative artificial intelligence (AI), and has repeatedly said the company is on the brink of launching its own AI tool to compete with the likes of ChatGPT. Now, the Apple boss has just dropped another huge hint of what we can expect to see.

Speaking at Apple's annual shareholders meeting (via MacRumors), Cook claimed that the company will "break new ground" in the field of generative AI, and this wasn't some vague pledge that we'll have to wait years to see come to fruition, as Cook said it's going to happen this year.

In early February 2024, the Apple CEO said there was "a huge opportunity" for Apple with generative AI. In his most recent call, he was in a similarly bullish mood when he announced: "We believe it will unlock transformative opportunities for our users."

Of course, Cook didn't go into detail on what exactly those opportunities will be; no doubt he would rather wait until Apple's Worldwide Developers Conference (WWDC) in June to reveal that. But there are plenty of areas where we expect to see generative AI make its mark on Apple's products.

Much like how Microsoft has infused its Copilot AI into its suite of homegrown apps, we expect Apple will do something similar with its own AI model. Apps like Pages and Numbers could get a virtual assistant to make your work easier, while Apple Music might get a virtual DJ similar to what Spotify currently offers. This is all speculation at the moment, but it's not hard to see how Apple might follow its rivals in these areas.

There's a different rumor that has a bit more corroboration, though: AI could soon enhance Siri. Apple's assistant has been lagging behind competitors for years now, and generative AI could give it the shot in the arm it so desperately needs. According to Bloomberg's Mark Gurman (who has previously leaked highly accurate information about Apple's future plans), Siri could get a big upgrade with this year's iOS 18, which might be one of the biggest iOS updates, if not the biggest, in the company's history.

That means Apple has a chance to finally catch up with its rivals when it comes to generative AI, and put right years of poor Siri performance. With just a few months to go until iOS 18 debuts at WWDC, we're waiting with bated breath.


David Mongeau to step down, interim director for country’s only HSI data science school announced – The University of Texas at San Antonio

A nationally recognized leader in the data science and artificial intelligence community, Mongeau brought to UTSA a distinguished record in leading research institutes and training programs, as well as in developing partnerships across government, industry, academia and the philanthropic community.

Under his leadership, the School of Data Science has recorded numerous achievements, including receiving $1.2 million in gift funding for data science, AI and machine learning student training and research programs. In addition to the undergraduate and graduate degree and certificate programs comprising the School of Data Science, school leaders now are developing a new certificate program in data engineering.

In partnership with the Association of Computing Machinery at UTSA and the National Security Agency, the School of Data Science in 2022 launched the annual Rowdy Datathon competition.

In April 2023, the school hosted the inaugural UTSA Draper Data Science Business Plan Competition, which highlights data science applications and student entrepreneurship; the second annual competition will be held at San Pedro I later this spring.

Also in 2023, the school hosted its inaugural Los Datos Conference. The school also now serves as administrative host to the university's annual RowdyHacks competition; more than 500 students from across Texas participated in the 9th annual RowdyHacks at San Pedro I last weekend.

Mongeau has worked to increase the reach and reputation of the School of Data Science beyond San Antonio and to bring the benefits back home. In October 2023, the School of Data Science hosted the annual meeting of the Academic Data Science Alliance, bringing together more than 200 data science practitioners, researchers and educators from across the country to UTSA. The school also invested nearly $400,000 to create opportunities for UTSA students and faculty to pursue projects and participate in national data science and AI experiences at, for example, the University of Chicago, the University of Michigan, the University of Washington, and the U.S. Census Bureau.

Through a collaboration with San Antonio-based start-up Skew the Script, the school has reached 20,000 high school teachers and 400,000 high school students with open-source training in statistics and math, which are core to success in data science and AI.

"I consider myself so fortunate to have been part of the creation of the School of Data Science at UTSA," said Mongeau. "I thank the school's dedicated staff and core faculty for their commitment to the school, which is having an enduring impact on our students: the next generation of diverse data scientists who have embraced the school's vision to make our world more equitable, informed and secure. These Roadrunners are destined to become industry leaders and continue to advance the frontiers of data science and AI."

Immediately prior to joining UTSA, Mongeau served as executive director of the Berkeley Institute for Data Science at the University of California, Berkeley. As executive director, he set the strategic direction for the institute, expanded industry and foundation engagement, and applied data science and AI in health care, climate change, and criminal justice.

Notably, he also initiated three data science fellowship programs and forged partnerships to enhance opportunities for legal immigrants and refugees in data science careers.


How Google Used Your Data to Improve their Music AI – Towards Data Science

MusicLM fine-tuned on user preferences

MusicLM, Google's flagship text-to-music AI, was originally published in early 2023. Even in its basic version, it represented a major breakthrough and caught the music industry by surprise. However, a few weeks ago, MusicLM received a significant update. Here's a side-by-side comparison for two selected prompts:

Prompt: Dance music with a melodic synth line and arpeggiation:

Prompt: a nostalgic tune played by accordion band

This increase in quality can be attributed to a new paper by Google Research titled "MusicRL: Aligning Music Generation to Human Preferences." Apparently, this upgrade was considered so significant that they decided to rename the model. However, under the hood, MusicRL is identical to MusicLM in its key architecture. The only difference: finetuning.

When building an AI model from scratch, it starts with zero knowledge and essentially does random guessing. The model then extracts useful patterns through training on data and starts displaying increasingly intelligent behavior as training progresses. One downside to this approach is that training from scratch requires a lot of data. Finetuning is the idea that an existing model is used and adapted to a new task, or adapted to approach the same task differently. Because the model already has learned the most important patterns, much less data is required.

For example, a powerful open-source LLM like Mistral7B can be trained from scratch by anyone, in principle. However, the amount of data required to produce even remotely useful outputs is gigantic. Instead, companies use the existing Mistral7B model and feed it a small amount of proprietary data to make it solve new tasks, whether that is writing SQL queries or classifying emails.

The key takeaway is that finetuning does not change the fundamental structure of the model. It only adapts its internal logic slightly to perform better on a specific task. Now, lets use this knowledge to understand how Google finetuned MusicLM on user data.
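
As a rough illustration of that idea, here is a minimal PyTorch sketch of one common finetuning recipe: freeze a pretrained backbone and train only a small task head on new data. The model, sizes, and data below are invented for illustration; real finetuning setups vary widely.

```python
import torch
import torch.nn as nn

# Stand-in for a pretrained model that would be loaded from a checkpoint.
backbone = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 256))
head = nn.Linear(256, 2)  # small new head for the downstream task

for param in backbone.parameters():
    param.requires_grad = False  # keep the learned patterns intact

optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Tiny synthetic "proprietary" dataset: 32 examples, 2 classes.
x, y = torch.randn(32, 128), torch.randint(0, 2, (32,))
for _ in range(20):
    loss = loss_fn(head(backbone(x)), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Because only the small head is trained, far less data and compute are needed than when training the whole network from scratch.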

A few months after the MusicLM paper, a public demo was released as part of Google's AI Test Kitchen. There, users could experiment with the text-to-music model for free. However, you might know the saying: if the product is free, YOU are the product. Unsurprisingly, Google is no exception to this rule. When using MusicLM's public demo, you were occasionally confronted with two generated outputs and asked to state which one you preferred. Through this method, Google was able to gather 300,000 user preferences within a couple of months.

As you can see from the screenshot, users were not explicitly informed that their preferences would be used for machine learning. While that may feel unfair, it is important to note that many of our actions on the internet are being used for ML training, whether it is our Google search history, our Instagram likes, or our private Spotify playlists. In comparison to these rather personal and sensitive cases, music preferences on the MusicLM playground seem negligible.

It is good to be aware that user data collection for machine learning is happening all the time and usually without explicit consent. If you are on LinkedIn, you might have been invited to contribute to so-called collaborative articles. Essentially, users are invited to provide tips on questions in their domain of expertise. Here is an example of a collaborative article on how to write a successful folk song (something I didn't know I needed).

Users are incentivized to contribute, earning them a Top Voice badge on the platform. However, my impression is that no one actually reads these articles. This leads me to believe that these thousands of question-answer pairs are being used by Microsoft (owner of LinkedIn) to train an expert AI system on these data. If my suspicion is accurate, I would find this example much more problematic than Google asking users for their favorite track.

But back to MusicLM!

The next question is how Google was able to use this massive collection of user preferences to finetune MusicLM. The secret lies in a technique called Reinforcement Learning from Human Feedback (RLHF), which was one of the key breakthroughs behind ChatGPT in 2022. In RLHF, human preferences are used to train an AI model that learns to imitate human preference decisions, resulting in an artificial human rater. Once this so-called reward model is trained, it can take in any two tracks and predict which one would most likely be preferred by human raters.

With the reward model set up, MusicLM could be finetuned to maximize the predicted user preference of its outputs. This means that the text-to-music model generated thousands of tracks, each track receiving a rating from the reward model. Through the iterative adaptation of the model weights, MusicLM learned to generate music that the artificial human rater likes.
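
Concretely, a reward model for pairwise preferences is often trained with a Bradley-Terry-style objective: score both items and push the preferred one's score higher. Below is an illustrative PyTorch sketch under assumed embedding sizes; it is not Google's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative reward model that scores a track embedding; the sizes
# and architecture here are assumptions, not Google's actual setup.
reward_model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 1))
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-4)

def preference_loss(preferred_emb, rejected_emb):
    """Bradley-Terry-style objective: push the preferred track's score
    above the rejected track's score for each collected comparison."""
    margin = reward_model(preferred_emb) - reward_model(rejected_emb)
    return -F.logsigmoid(margin).mean()

# Stand-ins for embeddings of the two tracks users compared.
preferred, rejected = torch.randn(8, 512), torch.randn(8, 512)
loss = preference_loss(preferred, rejected)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```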

In addition to the finetuning on user preferences, MusicLM was also finetuned on two other criteria (a sketch of the first follows this list):

1. Prompt adherence: MuLan, Google's proprietary text-to-audio embedding model, was used to calculate the similarity between the user prompt and the generated audio. During finetuning, this adherence score was maximized.

2. Audio quality: Google trained another reward model on user data to evaluate the subjective audio quality of its generated outputs. These user data seem to have been collected in separate surveys, not in MusicLM's public demo.
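
The prompt adherence term boils down to a similarity score in a joint text-audio embedding space. Since MuLan is proprietary, the sketch below only shows the generic cosine-similarity form of such a score, with random tensors standing in for real embeddings.

```python
import torch
import torch.nn.functional as F

def adherence_score(text_emb, audio_emb):
    """Cosine similarity between prompt and audio embeddings in a
    shared space; higher means the audio matches the prompt better."""
    return F.cosine_similarity(text_emb, audio_emb, dim=-1)

# Stand-ins for outputs of a joint text-audio embedding model.
text_emb, audio_emb = torch.randn(4, 128), torch.randn(4, 128)
print(adherence_score(text_emb, audio_emb))  # one score per example
```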

The new, finetuned model seems to reliably outperform the old MusicLM; listen to the samples provided on the demo page. Of course, a curated public demo can be deceiving, as the authors are incentivized to showcase examples that make their new model look as good as possible. Hopefully, we will get to test out MusicRL in a public playground soon.

However, the paper also provides a quantitative assessment of subjective quality. For this, Google conducted a study and asked users to compare two tracks generated for the same prompt, giving each track a score from 1 to 5. Using this metric, which goes by the fancy-sounding name Mean Opinion Score (MOS), we can compare not only the number of direct-comparison wins for each model, but also calculate the average rater score (MOS).
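
MOS itself is simple arithmetic: the mean of all the 1-5 ratings a model received. A tiny sketch with made-up ratings:

```python
def mean_opinion_score(ratings):
    """Average of 1-5 ratings given by human raters to one model."""
    return sum(ratings) / len(ratings)

print(mean_opinion_score([4, 5, 3, 4, 4]))  # 4.0
```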

Here, MusicLM represents the original MusicLM model. MusicRL-R was only finetuned for audio quality and prompt adherence. MusicRL-U was finetuned solely on human feedback (the reward model). Finally, MusicRL-RU was finetuned on all three objectives. Unsurprisingly, MusicRL-RU beats all other models in direct comparison as well as on the average ratings.

The paper also reports that MusicRL-RU, the fully finetuned model, beat MusicLM in 87% of direct comparisons. The importance of RLHF can be shown by analyzing the direct comparisons between MusicRL-R and MusicRL-RU. Here, the latter had a 66% win rate, reliably outperforming its competitor.

Although the difference in output quality is noticeable, qualitatively as well as quantitatively, the new MusicLM is still quite far from human-level outputs in most cases. Even on the public demo page, many generated outputs sound rhythmically odd, fail to capture key elements of the prompt, or suffer from unnatural-sounding instruments.

In my opinion, this paper is still significant, as it is the first attempt at using RLHF for music generation. RLHF has been used extensively in text generation for more than one year. But why has this taken so long? I suspect that collecting user feedback and finetuning the model is quite costly. Google likely released the public MusicLM demo with the primary intention of collecting user feedback. This was a smart move and gave them an edge over Meta, which has equally capable models, but no open platform to collect user data on.

All in all, Google has pushed itself ahead of the competition by leveraging proven finetuning methods borrowed from ChatGPT. While even with RLHF, the new MusicLM has still not reached human-level quality, Google can now maintain and update its reward model, improving future generations of text-to-music models with the same finetuning procedure.

It will be interesting to see if and when other competitors like Meta or Stability AI will be catching up. For us as users, all of this is just great news! We get free public demos and more capable models.

For musicians, the pace of the current developments may feel a little threatening, and for good reason. I expect to see human-level text-to-music generation in the next 1-3 years. By that, I mean text-to-music AI that is at least as capable of producing music as ChatGPT was at writing texts when it was released. Musicians must learn about AI and how it can already support them in their everyday work. As the music industry is being disrupted once again, curiosity and flexibility will be the primary keys to success.


4 Emerging Strategies to Advance Big Data Analytics in Healthcare – HealthITAnalytics.com

February 28, 2024 - While the potential for big data analytics in healthcare has been a hot topic in recent years, the possible risks of using these tools have received just as much attention.

Big data analytics technologies have demonstrated their promise in enhancing multiple areas of care, from medical imaging and chronic disease management to population health and precision medicine. These algorithms could increase the efficiency of care delivery, reduce administrative burdens, and accelerate disease diagnosis.

But despite all the good these tools could achieve, the harm these algorithms could cause is nearly as significant.

Concerns about data access and collection, implicit and explicit bias, and issues with patient and provider trust in analytics technologies have hindered the use of these tools in everyday healthcare delivery.

Healthcare researchers and provider organizations are working to solve these issues, facilitating the use of big data analytics in clinical care for better quality and outcomes.


In this primer, HealthITAnalytics will explore how improving data quality, addressing bias, prioritizing data privacy, and building providers' trust in analytics tools can advance the four types of big data analytics in healthcare.

In healthcare, it's widely understood that the success of big data analytics tools depends on the value of the information used to train them. Algorithms trained on inaccurate, poor-quality data can yield erroneous results, leading to inadequate care delivery.

However, obtaining quality training data is complex and time-intensive, leaving many organizations without the resources to build effective models.

Researchers across the industry are working to overcome this challenge.

In 2019, a team from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) developed an automated system to gather more data from images to train machine learning models, synthesizing a massive dataset of distinct training examples.


This approach is beneficial for use cases in which high-quality images are available, but there are too few to develop a robust dataset. The synthesized dataset can be used to improve the training of machine learning models, enabling them to detect anatomical structures in new scans.

This image segmentation approach helps address one of the major data quality issues: insufficient data points.

But what about cases with a wealth of relevant data but varying qualities or data synthetization challenges?

In these cases, it's useful to begin by defining and exploring some common healthcare analytics concepts.

Data quality, as the name suggests, is a way to measure the reliability and accuracy of the data. Addressing quality is critical to healthcare data generation, collection, and processing.


If the data collection process yielded a sufficient number of data points, but there is a question of quality, stakeholders can look at the data's structure and identify whether converting the datasets into a common format is appropriate. This is known as data standardization, and it can help ensure that the data are consistent, which is necessary for effective analysis.

Data cleaning (flagging and addressing data abnormalities) and data normalization (the process of organizing data) can take standardization even further.
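
A toy Python example may help make standardization and cleaning concrete. The sketch below (invented data, pandas) maps two differently formatted sources onto one schema and unit, then flags implausible values for review:

```python
import pandas as pd

# Toy example: two sites report the same vital sign in different formats.
site_a = pd.DataFrame({"patient_id": [1, 2], "temp_f": [98.6, 101.3]})
site_b = pd.DataFrame({"PatientID": [3, 4], "temp_c": [37.0, 39.1]})

# Standardization: map both sources onto one schema and one unit.
site_a["temp_c"] = (site_a.pop("temp_f") - 32) * 5 / 9
site_b = site_b.rename(columns={"PatientID": "patient_id"})
combined = pd.concat([site_a, site_b], ignore_index=True)

# Cleaning: flag physiologically implausible readings for review.
combined["suspect"] = ~combined["temp_c"].between(30.0, 45.0)
print(combined)
```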

Tools like the United States Core Data for Interoperability (USCDI) and USCDI+ can help in cases where a healthcare organization doesn't have enough high-quality data.

In scenarios with a large amount of data, synthesizing the data for analysis creates another potential hurdle.

As seen throughout the COVID-19 pandemic, when data related to the virus became available globally, healthcare leaders faced the challenge of creating high-quality datasets to help researchers answer vital questions about the virus.

In 2020, the White House Office of Science and Technology Policy issued a call to action for experts to synthesize an artificial intelligence (AI) algorithm-friendly COVID-19 dataset to bolster these efforts.

The dataset represents an extensive machine-readable coronavirus literature collection (including over 29,000 articles at the time of creation) designed to help researchers sift through and analyze the data more quickly.

By promoting collaboration among researchers, healthcare institutions, and other stakeholders, initiatives like this can support the efficient synthesis of large-scale, high-quality datasets.

As healthcare organizations become increasingly reliant on analytics algorithms to help them make care decisions, bias is a major hurdle to the safe and effective deployment of these tools.

Tackling algorithmic bias requires stakeholders to be aware of how biases are introduced and reproduced at every stage of algorithm development and deployment. In many algorithms, bias can be baked in almost immediately if the developers rely on biased data.

The US Department of Health and Human Services (HHS) Office of Minority Health (OMH) indicates that a lack of diversity in an algorithm's training data is a significant source of bias. Further, bias can be coded into algorithms based on developers' beliefs or assumptions, including implicit and explicit biases.

If, for example, a developer incorrectly assumes that symptoms of a particular condition are more common or severe in one population than another, the resulting algorithm could be biased and perpetuate health disparities.
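
One simple way to surface this kind of bias after the fact is to audit a model's error rates per demographic group. The following sketch (invented data) computes the true positive rate for each group; large gaps between groups are a warning sign, though a real fairness audit would examine many more metrics:

```python
import numpy as np

def per_group_tpr(y_true, y_pred, group):
    """True positive rate per demographic group; large gaps between
    groups are one simple signal of a biased classifier."""
    rates = {}
    for g in np.unique(group):
        mask = (group == g) & (y_true == 1)  # actual positives in group g
        rates[g] = (y_pred[mask] == 1).mean() if mask.any() else float("nan")
    return rates

y_true = np.array([1, 1, 1, 1, 0, 0, 1, 1])
y_pred = np.array([1, 1, 0, 1, 0, 0, 0, 0])
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B"])
print(per_group_tpr(y_true, y_pred, group))  # {'A': 0.75, 'B': 0.0}
```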

Some have suggested that bringing awareness to potential biases can remedy the issue of algorithmic bias, but research suggests that a more robust approach is required. One study published in the Future Healthcare Journal in 2021 demonstrated that while bias training can help individuals recognize biases in themselves and others, it is not an effective debiasing strategy.

The OMH recommends best practices beyond bias training, encouraging developers to work with diverse stakeholders to ensure that algorithms are adequately developed, validated, and reviewed to maximize utility and minimize harm.

In scenarios where diverse training data for algorithms is unavailable, techniques like synthetic data can help minimize potential biases.

In terms of algorithm deployment and monitoring, the OMH suggests that the tools should be implemented gradually and that users should have a way to provide feedback to the developers for future algorithm improvement.

To this end, developers can work with experts and end-users to understand what clinical measures are important to providers, according to researchers from the University of Massachusetts Amherst.

In recent years, healthcare stakeholders have increasingly developed frameworks and best practices to minimize bias in clinical algorithms.

A panel of experts convened by the Agency for Healthcare Research and Quality (AHRQ) and the National Institute on Minority Health and Health Disparities (NIMHD) published a special communications article in the December 2023 issue of JAMA Network Open outlining five principles to address the impact of algorithm bias on racial and ethnic disparities in healthcare.

The framework guides healthcare stakeholders to mitigate and prevent bias at each stage of an algorithm's life cycle by promoting health equity, ensuring algorithm transparency, earning trust by engaging patients and communities, explicitly identifying fairness issues, and establishing accountability for equity and fairness in outcomes from algorithms.

When trained using high-quality data and deployed in settings that will be monitored and adjusted to minimize biases, algorithms can help address disparities in maternal health, preterm births, and social determinants of health (SDOH).

In algorithm development, data privacy and security are high on the list of concerns. Legal, privacy, and cultural obstacles can keep researchers from accessing the large, diverse data sets needed to train analytics technologies.

Over the years, experts have worked to craft approaches that can balance the need for data access against the need to protect patient privacy.

In 2020, a team from the University of Iowa (UI) set out to develop a solution to this problem. With a $1 million grant from the National Science Foundation (NSF), UI researchers created a machine learning platform to train algorithms with data from around the world.

The tool is a decentralized, asynchronous solution called ImagiQ, and it relies on an ecosystem of machine learning models so that institutions can select models that work best for their populations. Using the platform, organizations can upload and share the models, but not patient data, with each other.

The researchers indicated that traditional machine learning methods require a centralized database where patient data can be directly accessed for use in model training, but these approaches are often limited by practical issues like information security, patient privacy, data ownership, and the burden on health systems tasked with creating and maintaining those centralized databases.

ImagiQ helps overcome some of these challenges, but it is not the only framework to do so.

Researchers from the University of Pittsburgh Swanson School of Engineering were awarded $1.7 million from the National Institutes of Health (NIH) in 2022 to advance their efforts to develop a federated learning (FL)-based approach to achieve fairness in AI-assisted medical screening tools.

FL is a privacy-protection method that enables researchers to train AI models across multiple decentralized devices or servers holding local data samples without exchanging them.

The approach is useful for improving model performance without compromising data privacy, since a model trained on a single institution's data typically does not generalize well to data from other institutions.
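A minimal sketch of the core FL loop, federated averaging, appears below. The hospital datasets, learning rate, and round count are illustrative assumptions; production systems layer secure aggregation and sample-size weighting on top of this basic pattern.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """One site's local training: a few epochs of logistic-regression
    gradient descent starting from the shared global weights."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)   # gradient step
    return w

# Hypothetical private datasets for three hospitals (never pooled).
rng = np.random.default_rng(0)
sites = []
for _ in range(3):
    X = rng.normal(size=(300, 4))
    y = (X @ np.array([1.0, -0.5, 0.25, 0.0]) > 0).astype(float)
    sites.append((X, y))

# Federated averaging: broadcast global weights, train locally,
# then average the returned weights. Only weights cross the wire.
w_global = np.zeros(4)
for _round in range(10):
    local_ws = [local_update(w_global.copy(), X, y) for X, y in sites]
    # Sites are equal-sized here, so a plain mean matches FedAvg's
    # sample-weighted average.
    w_global = np.mean(local_ws, axis=0)
```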

However, FL is not a perfect solution, as experts from the University of Southern California (USC) Viterbi School of Engineering pointed out at the 2023 International Workshop on Health Intelligence. They noted that FL raises multiple concerns, such as the fact that a model can only make predictions based on what it has learned from its training data, along with the hurdles presented by missing data and the data harmonization process.

The research team presented a framework for addressing these challenges, but there are other tools healthcare stakeholders can use to prioritize data privacy, such as confidential computing or blockchain. These tools center on making the data largely inaccessible and resistant to tampering by unauthorized parties.

Alternatives that do not require significant investments in cloud computing or blockchain are also available to stakeholders through privacy-enhancing technologies (PETs), three of which are particularly suited to healthcare use cases.

Algorithmic PETs like encryption, differential privacy, and zero-knowledge proofs protect data privacy by altering how the information is represented while keeping it usable. Often, this involves limiting how easily healthcare data can be altered or traced back to an individual patient.
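For instance, differential privacy, one of the algorithmic PETs named above, is often implemented with the classic Laplace mechanism: noise calibrated to the query's sensitivity and a privacy budget (epsilon) masks any single patient's contribution. The sketch below is a toy illustration; the query, epsilon, and sensitivity are illustrative choices.

```python
import numpy as np

def dp_count(true_count, epsilon, sensitivity=1.0, seed=None):
    """Release a count with epsilon-differential privacy via the
    Laplace mechanism: adding one patient changes the count by at
    most `sensitivity`, so noise of scale sensitivity/epsilon hides
    that patient's presence or absence."""
    rng = np.random.default_rng(seed)
    return true_count + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Hypothetical query: how many patients in a cohort have a diagnosis.
print(dp_count(true_count=1342, epsilon=0.5))  # roughly 1342, plus noise
```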

In contrast, architectural PETs focus on the structure of data or computation environments, rather than how those data are represented, to enable users to exchange information without exchanging any underlying data. Federated learning, secure multi-party computation, and blockchain fall into this PET category.
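To make the architectural category concrete, here is a toy sketch of secure multi-party computation using additive secret sharing: each party splits its private value into random shares that sum to it, so the group can compute a joint total without any party revealing its own number. The counts and share range below are invented for illustration.

```python
import numpy as np

def share(value, n_parties, rng):
    """Split a value into n random additive shares that sum to it;
    no single share reveals anything about the original value."""
    shares = rng.integers(-10**6, 10**6, size=n_parties - 1)
    return np.append(shares, value - shares.sum())

# Three hypothetical hospitals jointly compute a total case count
# without any hospital disclosing its own count.
rng = np.random.default_rng(42)
counts = [120, 75, 210]                        # each hospital's private count
all_shares = np.array([share(c, 3, rng) for c in counts])
# Each party sums the shares it receives (one column); combining the
# partial sums reveals only the aggregate.
partial_sums = all_shares.sum(axis=0)
print(partial_sums.sum())                      # 405, the total
```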

Augmentation PETs, as the name suggests, augment existing data sources or create fully synthetic ones. This approach can help enhance the availability and utility of data used in healthcare analytics projects. Digital twins and generative adversarial networks are commonly used for this purpose.
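As a minimal illustration of the augmentation category, the sketch below fits a multivariate Gaussian to placeholder "real" records and samples fully synthetic rows from it. This is a crude stand-in for GAN- or copula-based generators, but it shows the core idea: downstream analyses see realistic-looking rows rather than real patients. All names and data here are hypothetical.

```python
import numpy as np

def synth_gaussian(X_real, n_rows, seed=0):
    """Fit a multivariate normal to real numeric records and sample
    fully synthetic rows; a simplified proxy for generative models."""
    rng = np.random.default_rng(seed)
    mean = X_real.mean(axis=0)
    cov = np.cov(X_real, rowvar=False)   # feature-by-feature covariance
    return rng.multivariate_normal(mean, cov, size=n_rows)

X_real = np.random.rand(500, 4)          # placeholder for real records
X_synth = synth_gaussian(X_real, n_rows=2000)
print(X_synth.shape)                     # (2000, 4)
```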

But even the most robust data privacy infrastructure cannot compensate for a lack of trust in big data analytics tools.

Just as patients need to trust that analytics algorithms can keep their data safe, providers must trust that these tools can deliver information in a functional, reliable way.

The issue of trustworthy analytics tools has recently taken center stage in conversations about how Americans interact with AI, knowingly and unknowingly, in their daily lives. Healthcare is one of the industries where advanced technologies present the most significant potential for harm, leading the federal government to begin taking steps to guide the deployment and use of algorithms.

In October 2023, President Joe Biden signed the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which outlines safety, security, privacy, equity, and other standards for how industry and government should approach AI innovation.

The order's directives are broad, as they are designed to apply across US industries, but it does include some guidance specific to healthcare. Primarily, the executive order provides a framework for creating standards, laws, and regulations around AI and establishes a roadmap of subsequent actions that government agencies, like HHS, must take to build such a framework.

However, this process will take months, and more robust regulation of healthcare algorithms could take even longer, leading industry stakeholders to develop their own best practices for using analytics technologies in healthcare.

One such effort is the National Academy of Medicine (NAM) Artificial Intelligence Code of Conduct (AICC), a collaboration among healthcare, research, and patient advocacy groups to create a national architecture for responsible AI use in healthcare.

In a 2024 interview with HealthITAnalytics, NAM leadership emphasized that this governance infrastructure is necessary to gain trust and improve healthcare as advanced technologies become more ubiquitous in care settings.

However, governance structures must be paired with education and clinician support to obtain buy-in from providers.

Some of this can start early, as evidenced by recent work within the University of Texas (UT) health system to incorporate AI training into the medical school curriculum. Having staff members dedicated to spearheading analytics initiatives, such as a chief analytics officer, is another approach healthcare organizations can use to make providers feel more comfortable with these tools.

These staff can also work to bolster trust at the enterprise level by focusing on creating a healthcare data culture, gaining provider buy-in from the top down, and having strategies to address concerns about clinician overreliance on analytics technologies.

With healthcare organizations increasingly leveraging big data analytics tools for enhanced insights and streamlined care processes, overcoming data quality, bias, privacy, and security issues and fostering user trust will be critical for successfully using these models in clinical care.

As research evolves around AI, machine learning, and other analytics algorithms, the industry will keep refining these tools for improved patient care.


Computer scientist traces her trajectory from stunt flying to a startup – GeekWire

Computer scientist Cecilia Aragon tells her life story at the Women's Leadership Conference, presented by the Bellevue Chamber. (GeekWire Photo / Alan Boyle)

BELLEVUE, Wash. – Three decades ago, Cecilia Aragon made aviation history as the first Latina to earn a place on the U.S. Unlimited Aerobatic Team. She went on to write a book about it, titled "Flying Free."

Today, she's still flying free, as a professor and data scientist at the University of Washington and as the co-founder of a Seattle startup that aims to commercialize her research.

Aragon recounted her personal journey today during a talk at the Women's Leadership Conference, presented by the Bellevue Chamber. The conference brought nearly 400 attendees to Bellevue's Meydenbauer Center to hear about topics ranging from financial literacy to sports management.

Aragon's aerobatic days began in 1985, when she accepted an invitation from a co-worker to take a ride in his flying club's Piper Cherokee airplane.

"The first thing I thought was, 'I'm the person who's scared of climbing a stepladder. I'm scared of going in an elevator,'" she recalled.

But then she thought of her Chilean-born father. "I heard my father's voice, saying, 'What is stopping you from doing whatever you want?'" she said. She swallowed her fears, climbed into the plane, and was instantly hooked.

"It's so gorgeous to fly out into the water and see the sun glinting up on the water, like a million gold coins," she said. "And when we got down to the ground, I said, 'I want to take flying lessons. I want to be the pilot of my own life.'"

Aragon said she went through three flight instructors, but gradually overcame her fears. "I learned to turn fear into excitement," she said. The excitement reached its peak in 1991 when she was named to the U.S. aerobatic team and went on to win bronze medals at the U.S. national and world aerobatic championships.

That wasn't the only dream that Aragon has turned into reality. After leaving the aerobatic team, she worked as a computer scientist at NASA's Ames Research Center in Silicon Valley, earned her Ph.D. at Berkeley and became a staff scientist at Lawrence Berkeley National Laboratory. Aragon joined UW's faculty in 2010 and is now the director of the university's Human-Centered Data Science Lab.

"I love it," she said. "My students amaze me and excite me every single day."

Aragon's research focuses on how people make sense of vast data sets, using computer algorithms and visualizations. She holds several patents relating to visual representations of travel data, and with the help of UW's CoMotion Labs and Mobility Innovation Center, Aragon and her teammates have turned that data science into a startup called Traffigram.

For the past year, Traffigram's small team has been working in semi-stealth mode to develop software that can analyze multiple travel routes, determine the quickest way to get from Point A to Point B, and present the information in an easy-to-digest format. Aragon is the venture's chief scientist, and her son, Ken Aragon, is co-founder and CEO.

"It's a family business," she told GeekWire. "We've gotten a great response from potential customers so far, and we've raised some money."

So how does creating a startup compare with aerobatic stunt flying?

"I think there are a lot of similarities, because it's very risky," Aragon said. "As they have told me many times, most startup businesses fail. You know, that's just like what they told me with aerobatics: that very few people make the U.S. aerobatic team, and it's probably not going to happen. I said, 'Yeah, but I'm going to enjoy the path I believe in.' So I believe in the mission we have, to make transportation more accessible to everyone."
