
‘The new Excel’: MBA students flock to machine learning course – University of Toronto

With recent instability in some U.S. banks and the crypto winter that began last year, experts say it's more important than ever for finance professionals to understand the innovations and challenges in the sector.

The world is changing quickly, and so too are the skills needed to thrive, says John Hull, a University Professor of finance at the University of Toronto's Rotman School of Management.

Hull is the academic director of the Rotman Financial Innovation Hub (FinHub), which is designed to help fintech practitioners, students and faculty to share insights and equip students with best-in-class knowledge of financial innovation. He created the hub five years ago with Andreas Park, professor of finance at U of T Mississauga, and the late Peter Christoffersen, who was a professor of finance at Rotman.

"We recognized there were lots of things happening in the financial sector that are transformative and different, and we wanted to develop the knowledge base and pass it on to the students so they can compete in this space," says Park, who has a cross-appointment to Rotman.

Each year, students can take courses taught by FinHub-affiliated faculty. That includes Hull and Park, who offer courses on machine learning, blockchain, decentralized finance and financial market trading.

One of the most in-demand MBA electives is machine learning and financial innovation, which introduces students to the tools of machine learning. A similar course is compulsory for students in the master of financial risk management and master of finance programs.

Students are required to learn Python in the course, with Hull calling the programming language "the new Excel" as it becomes a common requirement for many jobs in finance.

"I've met traders in their 40s who go and learn Python because it simplifies their workflow," says Park. "It's all about inferring data and making sense of it, and then predicting future data using machine learning tools. And to do that, you need to learn Python."

The machine learning course is offered to full-time MBA students in March and April of their first year. It's also available as an elective in their second year.

"Many MBA students get involved in machine learning as part of their summer internship, so it's important to give them an opportunity to familiarize themselves with machine learning and Python applications prior to that time," says Hull.

MBA student Cameron Thompson took the course prior to an internship at Boston Consulting Group (BCG) and says the hands-on practice in class was invaluable, with or without an extensive background in computer programming.

"Being familiar with common machine learning terminology from day one on the job was quite useful," says Thompson, who will be returning to BCG full-time following graduation. "The course builds a solid foundation for using data in a strategic way and then adds the machine learning content; it's hard to go anywhere without seeing an application."

In his second year, Thompson pursued an independent FinHub study project sponsored by the Bank of Canada that involved working with researchers from Rotman and the Faculty of Applied Science & Engineering on a natural language processing model.

MBA grad Fengmin Weng, who took the elective course with Hull, says the insights from class prepared her to lead a machine learning project at TD.

"Machine learning is definitely the trend in the financial industry, particularly in the risk management area," says Weng, who came from an accounting background when she pursued the master of financial risk management program.

"It definitely helps us to make better decisions around our strategy," she says. "If you want to develop your career in the risk area, machine learning is your weapon."

Richard Liu, who received his MBA from Rotman three years ago, says the machine learning course was one of the most eye-opening parts of his MBA experience. Today, he says he uses many of the concepts from the course in his work as a financial planner.

"I'm able to recognize when it's more effective to train computers to enhance our work, how to coexist with robo-advisors and how to automate some of our financial planning processes," says Liu.

"Students involved with FinHub courses are equipped with the tools to think critically about the implications and benefits of emerging technologies in the financial sector," says Park, adding that "they're able to enter an organization and use these tools to help improve processes and strategies."

Hull, meanwhile, says students who take the course gain insight into the direction the finance world is heading, namely that "machine learning is becoming more and more important in business."


DeepMind’s AI used to develop tiny ‘syringe’ for injecting gene therapy and tumor-killing drugs – Livescience.com

Scientists have developed a molecular "syringe" that can inject proteins, including cancer-killing drugs and gene therapies, directly into human cells.

And the researchers did it using an artificial intelligence (AI) program made by Google's DeepMind. The AI program, called AlphaFold, previously predicted the structure of nearly every protein known to science.

The team modified a syringe-like protein naturally found in Photorhabdus asymbiotica, a species of bacteria that primarily infects insects. The modified syringe, which was described Wednesday (March 29) in the journal Nature, has not yet been tested in humans, only in lab dishes and live mice.

But experts say, eventually, the syringe could have medical applications.

"The authors show that this approach can be tuned to target specific cells and to deliver customized protein cargoes (payloads)," Charles Ericson (opens in new tab) and Martin Pilhofer (opens in new tab), who study bacterial cell-cell interactions at ETH Zrich in Switzerland and were not involved in the research, wrote in an accompanying commentary (opens in new tab). "These re-engineered injection complexes represent an exciting biotechnological toolbox that could have applications in various biological systems," they wrote.


P. asymbiotica bacteria normally grow inside roundworms called nematodes and use the worms as Trojan horses to invade insect larvae. It works like this: a nematode invades the larva's body and regurgitates P. asymbiotica; the bacteria kill the insect's cells; and the nematode feasts on the dying larva's flesh. Thus, the nematodes and bacteria enjoy a beautiful symbiotic relationship.

To kill the insect cells, P. asymbiotica secretes tiny, spring-loaded syringes, scientifically known as "extracellular contractile injection systems," that carry toxic proteins inside a hollow "needle" with a spike on one end. Small "tails" extend from the base of the syringe (imagine the landing gear of a space probe), and these tails bind to proteins on the surface of insect cells. Once bound, the syringe stabs its needle through the cell membrane to release its cargo.

In previous studies, scientists isolated these syringes from Photorhabdus bacteria and also discovered that some could target mouse cells, not just insect cells. This raised the possibility that such syringes could be modified for use in humans.

To test whether this idea might be feasible, the team first loaded the syringe's hollow tube with proteins of their choosing. Then, they used AlphaFold to better understand how the syringes home in on insect cells, so they could be modified to target human cells instead. They used the AI system to predict the structure of the bottom of the syringe's landing gear, the part that first makes contact with the target cell surface. They then altered this structure so it would latch onto surface proteins found only on human cells.

Without AlphaFold, the researchers would have had to conduct this analysis using advanced microscopy techniques and crystallography, meaning detailed studies of the landing gear's atomic structure, Joseph Kreitz, a doctoral student at the McGovern Institute for Brain Research at MIT and first author of the study, told Live Science in an email.

"This could have taken many months," Kreitz said. "With AlphaFold, we were able to obtain predicted structures of candidate tail fiber designs almost in real-time, significantly accelerating our efforts to reprogram this protein."

The researchers then used their modified syringes to tweak cells' genomes in lab dishes. Specifically, they delivered components of the powerful CRISPR-Cas9 gene editing tool into cells to cut and paste sections of DNA into their genomes. The team also used the syringes to insert tiny DNA-snipping scissors called zinc-finger deaminases into cells.

They also used the system to deliver toxic proteins into cancer cells in lab dishes. And finally, they injected the syringes into live mice and found that their cargo could only be detected in the targeted areas and did not spark a harmful immune reaction. For this last experiment, the team used AlphaFold to design their syringes to specifically target mouse cells.

These experiments demonstrate that the syringes can serve as "programmable protein delivery devices with possible applications in gene therapy, cancer therapy and biocontrol," the authors concluded. In contrast to therapies that deliver genetic instructions, like DNA or RNA, into cells, these protein-carrying syringes could provide "better control over the dose and half-life of a therapeutic inside cells," Kreitz and the study's senior author Feng Zhang told Live Science in an email.

That's because genetic instructions prompt cells to build proteins for themselves, whereas the syringes would come with a premeasured dose of protein. This precise dosing would be useful for treatments involving transcription factors, which tweak a cell's gene activity, and chemotherapy, which has toxic effects at high doses, they said.

The tiny syringes could also potentially be programmed to fight disease-causing bacteria in the body, Ericson and Pilhofer wrote. And in the future, it may be possible for scientists to connect multiple syringes to form multi-barrelled complexes. "These might enable more cargo to be delivered per target cell than with a single injection system," they suggested.

"However, we note that this system is still in its infancy; further efforts will be required to characterize the behavior of this system in vivo before it can be applied in clinical or commercial settings," Kreitz and Zhang told Live Science. The team is now studying how well the syringes diffuse through different tissues and organs, and continuing to examine how the immune system reacts to the new protein delivery system.


Elon Musk and other tech leaders call for pause on ‘dangerous race’ to make A.I. as advanced as humans – CNBC


Elon Musk and dozens of other technology leaders have called on AI labs to pause the development of systems that can compete with human-level intelligence.

In an open letter from the Future of Life Institute, signed by Musk, Apple co-founder Steve Wozniak and 2020 presidential candidate Andrew Yang, AI labs were urged to cease training models more powerful than GPT-4, the latest version of the large language model software developed by U.S. startup OpenAI.

"Contemporary AI systems are now becoming human-competitive at general tasks,and we must ask ourselves:Shouldwe let machines flood our information channels with propaganda and untruth?" the letter read.

"Shouldwe automate away all the jobs, including the fulfilling ones?Shouldwe develop nonhuman minds that might eventually outnumber, outsmart,obsolete and replaceus?Shouldwe risk loss of control of our civilization?"

The letter added, "Such decisions must not be delegated to unelected tech leaders."

The Future of Life Institute is a nonprofit organization based in Cambridge, Massachusetts, that campaigns for the responsible and ethical development of artificial intelligence. Its founders include MIT cosmologist Max Tegmark and Skype co-founder Jaan Tallinn.

The organization has previously gotten the likes of Musk and Google-owned AI lab DeepMind to promise never to develop lethal autonomous weapons systems.

The institute said it was calling on all AI labs to "immediately pause for at least 6 months the training of AI systems more powerful than GPT-4."

GPT-4, which was released earlier this month, is thought to be far more advanced than its predecessor GPT-3.

"If such a pause cannot be enacted quickly, governments should step in and institute a moratorium," it added.

ChatGPT, the viral AI chatbot, has stunned researchers with its ability to produce humanlike responses to user prompts. By January, ChatGPT had amassed 100 million monthly active users only two months into its launch, making it the fastest-growing consumer application in history.

The technology is trained on huge amounts of data from the internet, and has been used to create everything from poetry in the style of William Shakespeare to drafting legal opinions on court cases.

But AI ethicists have also raised concerns with potential abuses of the technology, such as plagiarism and misinformation.

In the Future of Life Institute letter, technology leaders and academics said AI systems with human-competitive intelligence pose "profound risks to society and humanity."

"AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal," they said.

OpenAI was not immediately available for comment when contacted by CNBC.

OpenAI, which is backed by Microsoft, reportedly received a $10 billion investment from the Redmond, Washington technology giant. Microsoft has also integrated the company's GPT natural language processing technology into its Bing search engine to make it more conversational.

Google subsequently announced its own competing conversational AI product for consumers, called Google Bard.

Musk has previously said he thinks AI represents one of the "biggest risks" to civilization.

The Tesla and SpaceX CEO co-founded OpenAI in 2015 with Sam Altman and others, though he left OpenAI's board in 2018 and no longer holds a stake in the company.

He has criticized the organization a number of times recently, saying he believes it is diverging from its original purpose.

Regulators are also racing to get a handle on AI tools as the technology is advancing at a rapid pace. On Wednesday, the U.K. government published a white paper on AI, deferring to different regulators to supervise the use of AI tools in their respective sectors by applying existing laws.



Voice Data: New Machine Learning Smarts are Powering Fast and Feature-Rich Analysis – UC Today

In business, what is said can speak volumes.

Not just the words that are uttered: when, how and by whom are also insights of high value.

When analyzed and understood, they have the power to drive customer satisfaction levels, aid staff training and ensure legal compliance.

Indeed, the capture and retrospective curation of voice data has long been a thing, but software implementation has, to date, taken months and delivered only the most rudimentary intelligence.

However, today's smarter enterprises are now benefitting from altogether more sophisticated solutions that are fast and feature-rich.

Amazon Chime, the all-in-one-place meet, chat and call platform for business, has just transformed voice communications with its Amazon Chime SDK Call Analytics feature that makes it simpler for communication builders and developers to add that functionality into their app and website workloads.

Users benefit from real-time transcription, call categorization, post-call summary, speaker search, and tone-based sentiment via pre-built integrations with Amazon Transcribe and Amazon Transcribe Call Analytics, and natively through the Amazon Chime SDK voice analytics capability.

Insights can be consumed in both real-time and following completion of a call by accessing a data lake. Users can then use pre-built dashboards in Amazon QuickSight or the data visualization tool of their choice to help interpret information and implement learnings.

"Voice remains a hugely important part of any organization's suite of communication channels and is capable of so much more than simply facilitating a conversation," says Sid Rao, GM of Amazon Chime SDK.

"It generates valuable data which, when processed by call analytics, can contribute greatly to the effectiveness and efficiency of enterprises' processes and workflows."

Machine Learning-based call analytics are particularly helpful for companies processing large volumes of call data to monitor customer satisfaction, improve staff training or stay compliant, but implementing such solutions can often take months.

The new call analytics features from Amazon Chime SDK reduce deployment time to a few days.

The insights and call recordings can be used across a variety of use cases such as financial services, insurance, mortgage advisory, expert consultation, and remote troubleshooting for products.

Customers can use the launched feature to improve customer experience, increase efficiency of experts such as wealth management advisors, and reduce compliance costs.

For example, banks can use Amazon Chime SDK call analytics to record and transcribe trader conversations for compliance purposes, generate real-time transcription, and perform speaker attribution using the speaker search feature.

Amazon Chime SDK customer IPC is a leading provider of secure, compliant communications and multi-cloud connectivity solutions for the global financial markets.

Tim Carmody, IPC CTO, said: "In our industry, transcribing and recording trader calls is required for regulatory compliance. With all that recorded call data, machine learning is ideal to monitor calls for compliance and acquire better insights about the trades that are occurring."

"Optional integration of Amazon Chime SDK's call analytics feature into call flows helps our customers' compliance teams to securely monitor and automatically flag trades for non-compliance in real-time, as well as gather new trader insights from call data. Working with AWS, IPC was able to execute this quickly: where 12 months prior it would have taken over a week to implement a machine-learning-powered solution like this, Amazon Chime SDK's call analytics was deployed in just a couple of days."

Businesses can also apply voice tone analysis to customer conversations to assess sentiment around products, services, or experiences.

The Chime SDK Insights console can manage integrations with AWS Machine Learning services such as Amazon Transcribe, Amazon Transcribe Call Analytics and Chime SDK voice insights, including speaker search and voice tone analysis.

Speaker search uses machine learning to take a 10-second voice sample from call audio and return a set of closest matches from a database of voiceprints.
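As a rough illustration of the underlying idea, rather than the Chime SDK's actual API, matching a sampled voiceprint against stored ones can be sketched as a nearest-neighbour search over embeddings. Everything below (speaker names, embedding size, the scoring function) is made up for illustration:

# Toy sketch only: rank stored voiceprint embeddings by cosine similarity to a query sample
set.seed(1)
voiceprints <- matrix(rnorm(5 * 16), nrow = 5,
                      dimnames = list(paste0("speaker_", 1:5), NULL))  # 5 stored 16-dim embeddings
query <- rnorm(16)                                                     # embedding of the 10-second sample

cosine <- function(a, b) sum(a * b) / (sqrt(sum(a * a)) * sqrt(sum(b * b)))
scores <- apply(voiceprints, 1, cosine, b = query)
sort(scores, decreasing = TRUE)[1:3]  # closest matches first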

Voice tone analysis uses Machine Learning to extract sentiment from a speech signal based on a joint analysis of linguistic information (what was said) as well as tonal information (how it was said).

Real-time alerts can be triggered by events such as poor caller sentiment, or key words spoken during a call.

All in all, it's a powerful tool capable of raising the value of voice data to great new heights.

Now THAT'S what we're talking about!

To learn more about how Amazon Chime SDK can help your business digitize and thrive, visit Amazon Chime SDK.


Machine Learning Models Offer Effective Approach for Analyzing … – The Ritz Herald

Data breaches have become a major concern for companies in recent years, as they can result in significant financial and reputational damage. A study by IBM found that the average cost of a data breach is $3.86 million, highlighting the importance of developing effective strategies to prevent them. Dr. Aashis Luitel's research provides a comprehensive approach to analyzing data breach risks using machine learning models. The study emphasizes the need to conduct a detailed analysis of publicly available data breach records to identify trends in data breach characteristics and sources of geographical heterogeneity. Dr. Luitel is a Technical Program Manager at Microsoft's Cloud and Artificial Intelligence and a cybersecurity professorial lecturer at various US universities. He earned a Doctorate from the George Washington University.

Dr. Luitel's research involves developing a series of supervised machine-learning models to predict the probability of data breach incidence, size, and timing. The methodology uses tree-based supervised machine learning methods adapted to high-dimensional sparse panel data and nonparametric and parametric survival analysis techniques. The study results indicate that the proposed modeling framework provides a promising toolbox that directly addresses the timing of repeat data breaches. Analyzing feature importance, partial dependence, and hazard ratios revealed early warning signals of data breach incidence, size, and timing for US organizations.
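The models themselves are not reproduced in the article, but the survival-analysis side of the methodology can be illustrated with a small, entirely synthetic sketch in R: a Cox proportional-hazards fit whose exponentiated coefficients are the hazard ratios mentioned above. The feature names and data below are invented for illustration and are not Dr. Luitel's actual variables.

# Illustrative sketch only (synthetic data): hazard ratios for time-to-breach
library(survival)

set.seed(42)
n <- 500
org <- data.frame(
  records_held = rlnorm(n, meanlog = 10, sdlog = 1),  # invented contextual features
  prior_breach = rbinom(n, 1, 0.2),
  remote_share = runif(n)
)
# Synthetic time-to-breach (days) that depends on the features
rate <- exp(-6 + 0.3 * org$prior_breach + 0.5 * org$remote_share)
org$time  <- rexp(n, rate)
org$event <- rbinom(n, 1, 0.8)  # 1 = breach observed, 0 = right-censored

fit <- coxph(Surv(time, event) ~ log(records_held) + prior_breach + remote_share, data = org)
summary(fit)$conf.int  # hazard ratios with 95% confidence intervals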

Dr. Luitel notes that his research has important implications for security engineers and developers of data security systems. By assessing an organization's susceptibility to data breach risks based on various contextual features, stakeholders can make informed decisions about protecting their organizations from data breaches. Moreover, the methodology proposed in the study can help organizations gain executive management support in implementing security systems, thereby minimizing a data breach's financial and reputational impact.

Dr. Luitel's research is particularly timely given the recent surge in remote work due to the COVID-19 pandemic. The pandemic has led to an increase in cyber-attacks and data breaches, as many organizations have had to quickly shift to remote work without adequate security measures in place. Remote work has opened up new vulnerabilities and risks for organizations, such as unsecured Wi-Fi networks and the use of personal devices for work purposes. As a result, it is more critical than ever to have effective strategies for preventing and managing data breaches.

In addition to the risks posed by remote work, organizations face a constantly evolving threat landscape, with cybercriminals using increasingly sophisticated techniques to breach networks and steal sensitive data. This makes it challenging for security professionals to keep up and identify potential threats before they cause damage.

Dr. Luitel's research provides a promising solution to this challenge by using machine learning models to automate the process of identifying potential data breach risks. By analyzing large amounts of data, the models can detect patterns and trends that may be difficult for humans to discern. This can help organizations gain a more comprehensive understanding of their vulnerabilities and develop more effective security strategies.

Furthermore, the methodology proposed by Dr. Luitel can benefit organizations across a wide range of industries, including healthcare, finance, and retail. Healthcare, in particular, is vulnerable to data breaches due to the sensitive nature of patient information. With the increasing use of electronic health records and other digital tools, healthcare providers must ensure they have robust security measures in place to protect patient data.

In the finance industry, data breaches can have significant financial consequences, potentially damaging consumer trust and resulting in regulatory fines. By using machine learning models to predict the likelihood of a data breach and identify areas of vulnerability, financial institutions can develop more targeted security strategies and minimize the impact of any breaches that do occur.

In retail, data breaches can result in losing valuable customer data, including payment information and personal details. This can damage the retailer's reputation and result in a loss of consumer trust. Using Dr. Luitel's machine learning models, retailers can identify potential risks and develop more effective security measures to protect their customers' data.

Dr. Luitel's research offers a valuable contribution to the field of data security, providing a comprehensive and automated approach to identifying and mitigating data breach risks. With the ever-increasing importance of digital data and the rise of remote work, effective data security measures have become more critical than ever. By using machine learning models to analyze data breach risks, organizations can develop targeted security strategies that minimize the risk of data breaches and protect their reputation and bottom line.

Dr. Luitel's research also highlights the importance of adopting a proactive approach to data security. Rather than waiting for a breach to occur, organizations can use machine learning models to predict potential breaches and implement strategies to prevent them. By analyzing patterns and trends in historical data breaches, organizations can identify potential vulnerabilities and take action to address them before cybercriminals exploit them. Moreover, the methodology proposed by Dr. Luitel's research can help organizations comply with data protection regulations. The General Data Protection Regulation (GDPR) in the European Union and the California Consumer Privacy Act (CCPA) in the United States require organizations to implement appropriate measures to protect personal data. Failure to comply with these regulations can result in significant fines and reputational damage.

By using machine learning models to analyze data breach risks, organizations can demonstrate their compliance with these regulations and ensure the protection of their customers' personal data. In addition, Dr. Luitel's research can aid in developing cyber insurance policies. Insurance companies can use the models to assess an organization's data breach risk and develop customized policies that provide appropriate coverage. By using the models to identify potential risks and vulnerabilities, insurance companies can develop policies that provide more comprehensive coverage, thereby reducing their financial risk.

According to security researchers, Dr. Luitel's research makes a valuable contribution to data security. Organizations can develop effective strategies to prevent breaches and minimize their impact by using machine learning models to analyze data breach risks. With the increasing importance of digital data and the growing threat landscape, the need for robust data security measures has never been more critical. The models proposed in Dr. Luitel's research provide a promising approach to addressing these challenges and safeguarding organizations' sensitive data.


Machine Learning Prediction of S&P 500 Movements using QDA in R – DataDrivenInvestor

Quadratic Discriminant Analysis is a classification method in statistics and machine learning. It is similar to Linear Discriminant Analysis (LDA), but it assumes that the classes have different covariance matrices, whereas LDA assumes that the classes have the same covariance matrix. If you want to learn more about LDA, here is my previous article where I talk about it.

In QDA, the goal is to find a quadratic decision boundary that separates the classes in a given dataset. This boundary is based on the estimated means and covariance matrices of the classes. Moreover, QDA can be used for both binary and multiclass classification problems. It is often used in situations where the classes have nonlinear boundaries or where the classes have different variances.
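Concretely, the rule QDA learns can be written in one line. A new observation $x$ is assigned to the class $k$ with the largest quadratic discriminant score

$$\delta_k(x) = -\tfrac{1}{2}\log\lvert\Sigma_k\rvert - \tfrac{1}{2}(x-\mu_k)^{\top}\Sigma_k^{-1}(x-\mu_k) + \log\pi_k,$$

where $\mu_k$, $\Sigma_k$ and $\pi_k$ are the estimated mean vector, covariance matrix and prior probability of class $k$. Because $\Sigma_k$ is allowed to differ across classes, the set of points where two scores are equal is quadratic in $x$, which is exactly the curved decision boundary described above.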

In R, QDA can be performed using the qda() function in the MASS package. We will use it on the Smarket data, part of the ISLR2 library. The syntax is identical to that of lda(). In the context of the Smarket data, the QDA model is being used to predict whether the stock market will go up or down (represented by the Direction variable) based on the percentage returns for the previous two days (represented by the Lag1 and Lag2 variables). The QDA model estimates the covariance matrices for the up and down classes separately and uses them to calculate the probability of each observation belonging to each class. The observation is then assigned to the class with the highest probability.

train <- (Smarket$Year < 2005)
Smarket.2005 <- Smarket[!train, ]
Direction.2005 <- Smarket$Direction[!train]

We first load the libraries and then split the data into training and test subsets, so that the model can be evaluated on data it was not fit on.

Then we fit a QDA model to the training data (subset = train), using the qda function.
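The fitting call itself is not shown in the excerpt above, but with the MASS and ISLR2 packages loaded it follows the standard qda() syntax; the object name qda.fit is an assumption that is reused in the snippets below.

library(MASS)    # provides qda()
library(ISLR2)   # provides the Smarket data

qda.fit <- qda(Direction ~ Lag1 + Lag2, data = Smarket, subset = train)
qda.fit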

# OUTPUT:
Prior probabilities of groups:
    Down       Up 
0.491984 0.508016 

Group means:
            Lag1        Lag2
Down  0.04279022  0.03389409
Up   -0.03954635 -0.03132544

We only use the Lag1 and Lag2 variables because they are the ones that seem to have the highest explanatory power (we discovered this in a previous article about logistic regression: basically, they are the ones with the smallest p-values). Here is the article if you want to delve deeper into the topic:

The output contains the group means. But it does not contain the coefficients of the linear discriminants, because the QDA classifier involves a quadratic, rather than a linear, function of the predictors.

Next, we make predictions on the test data using the predict function and calculate the confusion matrix and the classification accuracy.
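The prediction code is described but not shown here; reusing qda.fit and the test objects created earlier, a minimal version is:

qda.pred <- predict(qda.fit, Smarket.2005)
table(qda.pred$class, Direction.2005)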

mean(qda.pred$class == Direction.2005)
# OUTPUT:
[1] 0.599

The output of the table function shows the confusion matrix, and the output of the mean function shows the classification accuracy.

Interestingly, the QDA predictions are accurate almost 60% of the time, even though the 2005 data was not used to fit the model. This level of accuracy is quite impressive for stock market data, which is known to be quite hard to model accurately. This suggests that the quadratic form assumed by QDA may capture the true relationship more accurately than the linear forms assumed by LDA and logistic regression. However, I would definitely recommend evaluating this method's performance on a larger test set before betting that this approach will consistently beat the market!

We can create a scatterplot with contours to visualize the decision boundaries for the Linear Discriminant Analysis (LDA) and Quadratic Discriminant Analysis (QDA) models on the Smarket data.
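Before the contours can be overlaid (the contour() calls below use add = TRUE), a base scatterplot of the training observations is needed. The original plotting lines did not survive into this excerpt, so the sketch below is a reconstruction of the two lines described further down; the specific colors are my own choice.

col.ind <- ifelse(Smarket$Direction[train] == "Up", "blue", "red")  # color indicator from Direction
plot(Smarket$Lag1[train], Smarket$Lag2[train], col = col.ind,
     pch = 20, las = 1, xlab = "Lag1", ylab = "Lag2")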

len1 <- 80; len2 <- 80; delta <- 0.1
grid.X1 <- seq(from = min(Smarket$Lag1) - delta, to = max(Smarket$Lag1) + delta, length = len1)
grid.X2 <- seq(from = min(Smarket$Lag2) - delta, to = max(Smarket$Lag2) + delta, length = len2)
dataT <- expand.grid(Lag1 = grid.X1, Lag2 = grid.X2)

# lda.fit is assumed to come from the earlier LDA article, e.g. lda(Direction ~ Lag1 + Lag2, data = Smarket, subset = train)
lda.pred <- predict(lda.fit, dataT)
zp <- lda.pred$posterior[, 2] - lda.pred$posterior[, 1]
contour(grid.X1, grid.X2, matrix(zp, nrow = len1),
        levels = 0, las = 1, drawlabels = FALSE, lwd = 1.5, add = TRUE, col = "violet")

qda.pred <- predict(qda.fit, dataT)
zp <- qda.pred$posterior[, 2] - qda.pred$posterior[, 1]
contour(grid.X1, grid.X2, matrix(zp, nrow = len1),
        levels = 0, las = 1, drawlabels = FALSE, lwd = 1.5, add = TRUE, col = "brown")

The first two lines of code create a color indicator variable for the Direction variable based on whether it is Up or Down in the training data. The plot function is then used to create a scatterplot of the Lag2 variable against the Lag1 variable, with points colored according to the color indicator variable.

The next four lines of code define a grid of points to be used for generating the contours. The expand.grid function creates a data frame with all possible combinations of Lag1 and Lag2 values within the specified grid range.

The subsequent chunks of code use the predict function to generate the predicted class probabilities for each point in the grid, for both the LDA and QDA models. The contour function is then used to create a contour plot of the decision boundaries for each model, with the levels set to 0 to show the decision boundary between the two classes. The LDA contours are colored violet, while the QDA contours are colored brown.

Thank you for reading the article. If you enjoyed it, please consider following me.


The benefits of ‘deep time thinking’ – BBC


Extending the mind into million-year timescales can feel daunting, but as Richard Fisher discovered, there are many benefits to be found by embracing a longer view.


In 1788, three men set off to search a stretch of coast in eastern Scotland, looking for a very special outcrop of rocks. It would reveal that Earth was far, far older than anybody thought.

Leading the party was James Hutton, one of the first geologists. His goal was to show his peers an "unconformity": two juxtaposing rock layers, separated by a sharp line.

If you stumbled on one, you might not recognise its significance, but it proved that aeons of "deep time" had passed before humans emerged on Earth. There was no other way to explain these features.

Hutton's unconformity (yellow) at Siccar Point, which he reasoned must have taken millions of years to form (Credit: Adam Proctor)

For centuries, the Biblical account of time had been dominant in Europe. By one analysis of the generations in the Old Testament, conducted by an archbishop in 1650, the Earth must have been created in 4004 BC.

Hutton, however, would transform that view.

His companions to Siccar Point, east Scotland, in 1788 were astonished. As one of them wrote afterwards: "The mind seemed to grow giddy by looking so far back into the abyss of time. And while we listened with earnestness and admiration to the philosopher who was now unfolding to us the order and series of these wonderful events, we became sensible how much further reason may sometimes go than imagination may venture to follow."

The insight would be one of geology's most transformational contributions to human thought, allowing us to "burst the limits of time", as one eminent scientist later put it. Time, according to Hutton, had "no vestige of a beginning, no prospect of an end".

Looking down at Hutton's Unconformity, on an "Anthropocene coast" (Credit: Adam Proctor)

The discovery of deep time would change how we see the world. Not only did it rewrite the Biblical account of time, it would provide the canvas for the theory of evolution. Later, it would help astronomers to show that the Earth itself was relatively young compared with the age of the Universe.

For the past few years, I have been writing and researching about how to take a longer view. To help me understand the mind-expanding scope of deep time, I recently set out to make three films for the BBC about its discovery and implications, starting with a trip to Hutton's unconformity.

Looking back to the past, how did Hutton's discovery change the world? How might we make sense of deep time's daunting scale in the present? And how should we think about the deep future?

In the first of the films, I traced the steps of Hutton and his companions to the unconformity at Siccar Point.

In the 18th Century, the three men used a boat to get there, but I chose to hike there with David Farrier, a professor of literature and the environment at the University of Edinburgh. As the author of the book Footprints, which is about the "future fossils" we are leaving behind in the Anthropocene, he was the ideal companion. Why? As we'd discover, this particular stretch of coastline is now notable for more than its natural features: it also hosts a nuclear power station and a carbon-intensive cement works, whose own legacies will continue long into the future.

Later, I also met musician Karine Polwart, who during the Covid-19 pandemic was inspired to record a song about Hutton's discovery at Siccar Point.

WATCH:

The man who discovered the abyss of time

In the second film, I wanted to explore how we might make sense of the awe-inspiring scale of deep time today, and crucially, not just with the lens of science alone.

When I reflect on how short my own lifespan is within the million-year chronologies of the Earth, it can feel pretty daunting. From the planet's perspective, our lives are momentary flashes of light on the surface of a lake; briefly bright, but quickly gone. Thinking about deep time can therefore be a sublime experience: astonishing, but tinged with the awareness of your own mortality.

One person who has spent a career thinking about deep time is the artist Katie Paterson. Through her artworks, she makes long-term timescales more accessible, more comprehensible, more human.

In the film, I visited two of her projects: the Future Library in Norway, which contains books that can't be read until 2114, and Requiem, which tells the story of the Earth and humanity through 34 vials of dust, from pre-solar grains to a crushed tree branch from the site of the Hiroshima atomic bomb.

Paterson's work helps to make the long view of deep time a little bit easier to comprehend, as well as providing clarity and urgency about the role that our generation is playing within it.

WATCH:

The art of thinking in 'deep time'

Finally, in the third film, I reflected on our personal, generational connections across long-term time: not just to the past, but the deep future too.

When I daydream about the life that could lie ahead for my daughter Grace, I realise that she stands a pretty good chance of seeing the 22nd Century. Born in 2013, she would be 86 years old when 2100 arrives. If she has grandchildren or great-grandchildren, they might even reach the next century after that.

Through our family ties, we are far closer to seemingly distant dates in time than first appears, and we have a surprising amount in common with one another in terms of our ancestry too. As the film explores, you don't even need to have children to figure in this deep time narrative, and your actions today will reach far further across time than you might realise.

WATCH:

The 22nd Century people living among us

Making these films, I realised that deep time needn't be an impersonal, cold concept, and that there are benefits to be found by embracing a million-year view.

The writer John McPhee, who popularised the term in the 1980s, argued, perhaps pessimistically, that human beings may not be capable of grasping the concept of deep time to its full extent. "The human consciousness may have begun to leap and boil some sunny day in the Pleistocene, but the race by and large has retained the essence of its animal sense of time," he wrote in his influential book Basin and Range. "People think in five generations: two ahead, two behind, with heavy concentration on the one in the middle. Possibly that is tragic, and possibly there is no choice."

McPhee suggested that the units of years, the common currency of humanity's temporal understanding, become ever-less useful and tractable once time becomes very big. "Numbers do not seem to work well with regard to deep time. Any number above a couple of thousand years (50,000, 50 million) will with nearly equal effect awe the imagination to the point of paralysis," he wrote.

However, while it is true that million-year chronologies may be beyond our direct sensory faculties, that doesn't mean we cannot try to extend the mind over thousands, millions or even billions of years. And there could be upsides to doing so: a deep-time view can provide the kind of perspective that we need within the upheaval of the Anthropocene.

As Paterson told me when we met in Edinburgh: "It's a mind-bending concept thinking about things that happened millions, billions of years into the past. And I can understand that some people might find that pretty difficult. Oddly, I never have. I've always just been absolutely delighted by this idea that we've got the capacity to know and understand or imagine what's come before us. I find it really inspiring and eye-opening and moving, and it gives me a kind of rootedness."

*Richard Fisher is the author of The Long View: Why We Need to Transform How the World Sees Time, and a senior journalist for BBC Future. Twitter: @rifish

The Deep Time films were filmed, edited and produced by Adam Proctor at Fortsunlight.



Grammy Award-Winning Artist Chris Stapleton, Emmy Award-Winning Actor Christopher Lloyd Headline Additions to RSA Conference 2023 Keynote Speaker…

BOSTON, March 30, 2023--(BUSINESS WIRE)--RSA Conference, the world's leading cybersecurity conferences and expositions, today announced a number of additional keynote speakers for its upcoming Conference, taking place at the Moscone Center in San Francisco from April 24-27, 2023.

Adding to the initial keynote lineup, RSA Conference welcomes Grammy Award-winning artist Chris Stapleton as a panelist on a session that explores cybersecurity and the music industry. Closing out the week at the Hugh Thompson Show: Quantum Edition, Emmy Award-winning actor Christopher Lloyd, who played eccentric inventor Emmett "Doc" Brown in the Back to the Future trilogy, joins the stage to discuss his Hollywood blockbuster experience along with real quantum computing and cryptography experts.

Additional panelists taking part in The Hugh Thompson Show include:

Shohini Ghose, Professor of Physics and Computing, Wilfrid Laurier University

Paul Kocher, Independent Researcher and Cryptographer

A newly added panel entitled "Face the Music: Cybersecurity and the Music Industry" features:

Hany Farid, Professor, University of California, Berkeley

Katherine Forrest, Partner, Paul, Weiss, Rifkind, Wharton & Garrison LLP

Chris Stapleton, Musician, 8-time Grammy, 15-time CMA and 10-time ACM Award-winner

Herbert Stapleton, Special Agent in Charge, FBI Indianapolis (Moderator)

Additional keynote speakers and sessions at RSA Conference 2023 include:

Michael Alicea, EVP & Chief People Officer, Trellix

Vijay Bolina, CISO, DeepMind

Dr. Diana Burley, Vice Provost for Research, American University

General (Retired) Richard D. Clarke, U.S. Special Operations Command

John Elliott, Principal Consultant, Withoutfire and Pluralsight Author

H.E. Nathaniel Fick, Ambassador-at-Large for Cyberspace and Digital Policy, Bureau of Cyberspace and Digital Policy, U.S. Department of State

Camille Stewart Gloster, Deputy National Cyber Director for Technology and Ecosystem Security, White House Office of the National Cyber Director

H.E. Nathalie Jaarsma, Ambassador-at-Large for Security Policy and Cyber, Ministry of Foreign Affairs of the Netherlands

Laura Koetzle, Vice President & Group Director, Forrester

Juhan Lepassaar, Executive Director, EU Agency for Cybersecurity

Dr. Laurie Locascio, Under Secretary of Commerce for Standards and Technology and Director, NIST

Brad Maiorino, Chief Information Security Officer, Raytheon Technologies

Chris McCurdy, General Manager and Vice President of Worldwide IBM Security Services, IBM Security

Daniel Rohrer, VP of Software Product Security, NVIDIA

Vivian Schiller, Executive Director, Aspen Digital, Aspen Institute

Patti Titus, Chief Information Security Officer & Chief Privacy Officer, Markel Corporation

Tara Wisniewski, Executive Vice President, ISC2

The Cryptographers' Panel: Dr. Whitfield Diffie, Cryptographer and Security Expert, Cryptomathic (Moderator); Clifford Cocks, Former Chief Mathematician, Government Communications Headquarters, United Kingdom; Anne Dames, Distinguished Engineer, IBM Security; Radia Perlman, Fellow, Dell Technologies; Adi Shamir, Borman Professor of Computer Science, The Weizmann Institute, Israel


For more information about the keynote program and to stay up to date with what's happening at RSA Conference 2023, please visit our website at https://www.rsaconference.com/usa.

About RSA Conference

RSA Conference is the premier series of global events and year-round learning for the cybersecurity community. RSAC is where the security industry converges to discuss current and future concerns and have access to the experts, unbiased content and ideas that help enable individuals and companies advance their cybersecurity posture and build stronger and smarter teams. Both in-person and online, RSAC brings the cybersecurity industry together and empowers the collective "we" to stand against cyberthreats around the world. RSAC is the ultimate marketplace for the latest technologies and hands-on educational opportunities that help industry professionals discover how to make their companies more secure while showcasing the most enterprising, influential and thought-provoking thinkers and leaders in cybersecurity today. For the most up-to-date news pertaining to the cybersecurity industry visit http://www.rsaconference.com. Where the world talks security.

View source version on businesswire.com: https://www.businesswire.com/news/home/20230330005468/en/

Contacts

Ben Waring
Director, Global PR & Communications
RSA Conference
RSAConf@shiftcomm.com


Machine learning model helps forecasters improve confidence in storm prediction – Phys.org


When severe weather is brewing and life-threatening hazards like heavy rain, hail or tornadoes are possible, advance warning and accurate predictions are of utmost importance. Colorado State University weather researchers have given storm forecasters a powerful new tool to improve confidence in their forecasts and potentially save lives.

Over the last several years, Russ Schumacher, professor in the Department of Atmospheric Science and Colorado State Climatologist, has led a team developing a sophisticated machine learning model for advancing skillful prediction of hazardous weather across the continental United States. First trained on historical records of excessive rainfall, the model is now smart enough to make accurate predictions of events like tornadoes and hail four to eight days in advance, the crucial sweet spot for forecasters to get information out to the public so they can prepare. The model is called CSU-MLP, or Colorado State University-Machine Learning Probabilities.

Led by research scientist Aaron Hill, who has worked on refining the model for the last two-plus years, the team recently published their medium-range (four to eight days) forecasting ability in the American Meteorological Society journal Weather and Forecasting.

The researchers have now teamed with forecasters at the national Storm Prediction Center in Norman, Oklahoma, to test the model and refine it based on practical considerations from actual weather forecasters. The tool is not a stand-in for the invaluable skill of human forecasters, but rather provides an agnostic, confidence-boosting measure to help forecasters decide whether to issue public warnings about potential weather.

"Our statistical models can benefit operational forecasters as a guidance product, not as a replacement," Hill said.

Israel Jirak is science and operations officer at the Storm Prediction Center and co-author of the paper. He called the collaboration with the CSU team "a very successful research-to-operations project."

CSU Ph.D. student Allie Mazurek discusses the CSU-MLP with forecaster Andrew Moore. (Credit: Provided/Allie Mazurek)

"They have developed probabilistic machine learning-based severe weather guidance that is statistically reliable and skillful while also being practically useful for forecasters," Jirak said. The forecasters in Oklahoma are using the CSU guidance product daily, particularly when they need to issue medium-range severe weather outlooks.

The model is trained on a very large dataset containing about nine years of detailed historical weather observations over the continental U.S. These data are combined with meteorological retrospective forecasts, which are model "re-forecasts" created from outcomes of past weather events. The CSU researchers pulled the environmental factors from those model forecasts and associated them with past events of severe weather like tornadoes and hail. The result is a model that can run in real time with current weather events and produce a probability of those types of hazards with a four- to eight-day lead time, based on current environmental factors like temperature and wind.
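The CSU-MLP code itself is not published in this article, but the basic idea (a random forest trained on environmental factors from past events, then queried for hazard probabilities on new cases) can be sketched in a few lines of R. Everything below, from the variable names to the synthetic data, is invented for illustration and is far simpler than the real system.

# Illustrative sketch only: a probabilistic random forest on synthetic "environmental factor" data
library(randomForest)

set.seed(7)
n <- 2000
env <- data.frame(
  cape     = rgamma(n, shape = 2, rate = 0.002),  # made-up proxies for instability, shear, moisture
  shear    = rnorm(n, 20, 8),
  dewpoint = rnorm(n, 15, 5)
)
p_hazard <- plogis(-6 + 0.001 * env$cape + 0.08 * env$shear + 0.1 * env$dewpoint)
env$severe <- factor(rbinom(n, 1, p_hazard), levels = c(0, 1), labels = c("no", "yes"))

train_idx <- sample(n, 1500)
rf <- randomForest(severe ~ cape + shear + dewpoint, data = env[train_idx, ], ntree = 300)

# Probability of a severe-weather event for new, held-out environments
head(predict(rf, env[-train_idx, ], type = "prob")[, "yes"])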

Ph.D. student Allie Mazurek is working on the project and is seeking to understand which atmospheric data inputs are the most important to the model's predictive capabilities. "If we can better decompose how the model is making its predictions, we can hopefully better diagnose why the model's predictions are good or bad during certain weather setups," she said.

Hill and Mazurek are working to make the model not only more accurate, but also more understandable and transparent for the forecasters using it.

For Hill, it's most gratifying to know that years of work refining the machine learning tool are now making a difference in a public, operational setting.

"I love fundamental research. I love understanding new things about our atmosphere. But having a system that is providing improved warnings and improved messaging around the threat of severe weather is extremely rewarding," Hill said.

More information: Aaron J. Hill et al, A New Paradigm for Medium-Range Severe Weather Forecasts: Probabilistic Random Forest-Based Predictions, Weather and Forecasting (2022). DOI: 10.1175/WAF-D-22-0143.1


Playbook Deep Dive: What Trump’s indictment means – POLITICO

Well, I mean, in terms of the characters, yes, you're right that this is all sort of a throwback to 2016-2018 period.

But, you know, one of the people who's testified twice, I believe, in front of this grand jury and who is central to this whole episode and who I believe has never spoken publicly about it is David Pecker. And so if there's any chance that he ends up testifying at a trial or ends up speaking about his side of the story, I would be very intrigued to hear that.

As you know, as someone who, you know, he was extremely close to Donald Trump and that's how he got involved in this hush money payment to begin with. That's someone I would really like to hear from at some point if there's an opportunity to do that.

But in terms of the sort of the legal questions that are going to come up here, there's quite a number. But I think the biggest one is, you know, I mentioned that the indictment is sealed. We don't know what the counts are yet, but there's a lot of questions about how the district attorney, Alvin Bragg, constructed these charges and whether they will survive in court, because if they are what we think they're going to be, they're a largely untested legal theory.

And Trump's lawyers, of course, will try their hardest to fight them and given that they're untested, there's just a lot of questions about how they'll survive. So that's probably the biggest issue here. But then, of course, we will run into all sorts of questions about the sort of scheduling of legal proceedings and a potential trial for someone who is a presidential candidate. And that is likely to be very, very complicated. So.
