
Datasets and ‘Flipped’ Research Drive Alzheimer’s Advances | UArizona Health Sciences – University of Arizona

Doctors diagnose 10 million new cases of dementia yearly, and that number is projected to triple by 2050. Dementia is most often caused by Alzheimer's disease, and research has made little progress tackling it over more than a century. Now, using cutting-edge bioinformatics and computational biology, University of Arizona Health Sciences researchers have identified promising new candidates for Alzheimer's treatment as well as new strategies for optimizing prevention.

Despite studying Alzheimer's disease since it was first documented in 1906, scientists still have only a limited understanding of it, and there are no proven preventions or treatments for it today.

Tomorrow, however, there might be.

In just five years, Rui Chang, PhD, and Francesca Vitali, PhD, researchers at the UArizona Health Sciences Center for Innovation in Brain Science, have made incredible strides identifying molecular compounds that show significant potential for helping to prevent Alzheimer's disease and even reverse its effects.

Dr. Vitali, associate director of bioinformatics at the Center for Innovation in Brain Science, focused on the preclinical phase of Alzheimer's disease: the window in which the disease progresses undiagnosed, which can span 20 years. In a novel strategy she has named Targeted Risk Alzheimer's disease Prevention (TRAP), Dr. Vitali drew on scientific papers and drug information repositories to identify FDA-approved pharmaceuticals with promise for preventing Alzheimer's disease.

First, Dr. Vitali used natural language processing to mine information from more than 10,000 published medical studies and reports. In simple terms, natural language processing enables computers to "read" vast amounts of text, finding patterns and information that is otherwise obscured by the sheer scale of content. Through that process, she identified more than 300 diseases and conditions linked to higher risk for Alzheimer's disease.

Dr. Vitali then employed data mining across drug information repositories to identify more than 600 approved medicines used to treat those conditions. Through further evaluation, she ultimately focused on 46 drugs for a systems biology analysis that revealed the complex ways they work in the body: unique effects as well as interconnected biological mechanisms.

"Based on these findings, we believe that early interventions that strategically target known risks for developing Alzheimer's could effectively make this a preventable disease by 2025," said Dr. Vitali, who is an assistant research professor in the College of Medicine Tucsons Department of Neurology and a member of the BIO5 Institute.

Her analysis also showed which therapeutics, alone or in combination, might work best for patients with specific genetic profiles: a platform for a precision medicine approach to preventing Alzheimer's disease. And while Dr. Vitali developed the TRAP strategy for Alzheimer's, it can also be applied to other diseases lacking prevention and cures, such as Parkinson's disease, multiple sclerosis and Lou Gehrig's disease, also known as amyotrophic lateral sclerosis or ALS.

In a similar strategy, Dr. Chang, a computational biology investigator at the Center for Innovation in Brain Science, used artificial intelligence (AI) to analyze multiomic data sets: information on DNA, proteins, the microbiome and more. The data was drawn from thousands of postmortem brain tissue samples provided by the Accelerating Medicines Partnership Program for Alzheimer's Disease, a consortium recently established by the National Institutes of Health.

The key innovation in Dr. Chang's research, however, is the network model he created for the extensive ways that genes and proteins influence one another. Dysregulation of a single gene, for example, has ripple effects throughout the body, changing disease pathways. Scientists studying brain tissues see only terminal, downstream gene expressions, with no insight into how those states came about. Because the samples are from people who had lived with Alzheimer's disease for varying lengths of time, collectively they represent rich timelines of disease progression.

"My network model is able to pinpoint the upstream causes of disease pathology," explained Dr. Chang, who is an associate professor of neurology in the College of Medicine Tucson. "Im able to show exactly which gene in the upstream became dysregulated, the network changes it caused and therefore what could be a remedy what gene or genes to perturb to return the whole network to the healthy state."

The analyses identified 6,000 potential targets and more than 3,000 potential compounds for treatment. Those were narrowed to 170 compounds that protect neurons from death or activate the brain's innate defense system to consume amyloid plaques and neurofibrillary tangles, which have been the focus of Alzheimer's disease research for the past two decades.

Ultimately, Dr. Chang's study converged on three treatments that significantly improved the working memory of mice with Alzheimer's impairments. Two of the compounds are substances naturally produced in all mammals and one is derived from plants. All reduce the brain plaques and protein tangles that have been the focus of Alzheimer's disease research for the past two decades.

But while other compounds might also reduce plaques and tangles (the landmark study in this line of inquiry was recently called into question), they have never been shown to improve cognitive deficits. In contrast, Dr. Chang's treated mice improved so dramatically their brain function nearly "caught up" with the control group of mice with no disease.


All three compounds are now on track for clinical trials, and because neurodegenerative diseases seem to have overlapping mechanisms, Dr. Chang believes they could also lead to treatments for other diseases, including Parkinson's and Lewy body dementia. He also believes his methodology could lead to cures for comparably complex challenges like cancer, now recognized as more than 100 related diseases.

Typically, therapeutics begin with discoveries in labs. Pharmaceutical scientists learn that a compound has a biological effect, then try to match that effect to the known causes and symptoms of health conditions.

Researchers like Drs. Chang and Vitali are flipping that model, seeking treatments and cures by first considering known factors underlying diseases. They then use technologies to scan profusions of data for patterns that suggest certain compounds may impact those biological underpinnings.

The approach offers advantages evidenced in their successes: the sheer size of the data sets would take decades to process without these technologies, and working with so much data has the added benefit of establishing greater confidence in findings. The flipped model also accelerates discovery: Dr. Chang's findings were derived in just five years, Dr. Vitali's in two.

Perhaps most importantly, natural language processing, AI and network analyses can accelerate discovery by overcoming the natural biases and blind spots intrinsic to human analysis, revealing connections that are often only logical in hindsight. The 6,000 initial targets surfaced by Dr. Chang, for example, certainly included many that hadn't been considered relevant to Alzheimer's disease.

"Today, there are two camps in medical research," Dr. Chang said. "One is traditional biology, the other is AI and data science, but it's critical that these two camps are collaborating, not competing."

Data scientists can come up with hypotheses, but biologists are essential in validating those hypotheses, Dr. Chang said. And even the most advanced computers are no substitute for insight, intuition and imagination. Biologists bring and inspire new ideas, and no level of computing power today can replace that.

"It all boils down to the fact that AI is data-driven, and biologists are knowledge-driven," Dr. Chang said. "Our research works best when we bring those two approaches together. We need each other, and I hope to see even more collaboration going forward."

Read more here:

Datasets and 'Flipped' Research Drive Alzheimer's Advances | UArizona Health Sciences - University of Arizona


Solana looks bullish at the moment but is it right time to go long – AMBCrypto News

Solana [SOL] kicked off this week in a bearish tone after peaking at $48.38 on 13 August. Fast forward to the present and it is about to fall below $40 if the bears continue their assault.

However, the altcoin just retested a short-term resistance line, which might yield some upside.

SOL has been trading within its current ascending channel since mid-June. It is underscored by support and resistance, and the price just retested the former in the last 24 hours.

The alt also tested the same support line at least four times in June, and twice in July.

It further bounced off just above the same support earlier this month and interacted with the same support a few hours prior to press time.

SOL has already demonstrated a significant drop in selling pressure after interacting with the support line near the $40.20 price level.

This outcome confirms that there is significant buying pressure coming in.

The bearish performance this week was enough to push SOL's Relative Strength Index (RSI) below its 14-day SMA and its 50% level. However, a slight pivot to the upside confirms the return of bullish pressure.

The drop in sell pressure at the support level is already a healthy sign favoring the upside. However, it does not guarantee such an outcome.

The crypto market's volatile and unpredictable nature might turn out in favor of the bears. But some on-chain metrics were leaning in favor of the bulls at press time.

SOL's social dominance increased slightly in the last 24 hours. Hence, the cryptocurrency gained some visibility during this time.

Its Binance funding rate also dropped sharply during this period. Even so, it has also registered a strong bounce back during the same period.

The Binance funding rate's bounce-back confirms a sentiment change, which aligns with the resistance retest.

SOL's recovery can be credited to positive news regarding Solend, its native lending and borrowing DeFi protocol.

Solend recently announced the launch of its permissionless pools.

This development may fuel more demand for SOL, especially in the mid to long-term. Perhaps it might also excite investors in the short term, thus supporting a healthy recovery.

Read the original post:
Solana looks bullish at the moment but is it right time to go long - AMBCrypto News


Startup detects COVID-19 using spit, light, and a computer built to analyze patterns – GeekWire

Pattern Computer's ProSpectral device for detecting COVID-19. (Pattern Computer Photo)

A Seattle-area startup called Pattern Computer is developing a rapid COVID-19 test based on patterns in light from spit, one of several projects moving ahead from the 7-year-old company that designed its own computer from scratch.

The company's Pattern Discovery Engine was created specifically to discover and analyze patterns and excels at the task, said CEO and co-founder Mark Anderson.

Pattern Computer keeps the workings of its system closely guarded, and has not published its AI models in peer-reviewed journals. Outside researchers say it's hard to know what's under the hood.

But its approach has attracted seasoned computer science talent and biotech heavyweights to the startup.

The company's chief technology officer, co-founder Ty Carlson, previously managed the Amazon team that launched products such as the Amazon Echo. Its advisory board includes Leroy Hood, a co-founder of the Institute for Systems Biology, genome pioneer Craig Venter, and serial biotech startup founder George Church, a Harvard professor.

Anderson is founder and CEO of the technology newsletter Strategic News Service, read by Bill Gates, Jeff Bezos, Michael Dell and Elon Musk, he said. He's known as a tech prognosticator and each year brings together an eclectic mix of tech leaders and scientists at his Future in Review conference (he also co-founded the Whale Museum in Friday Harbor, Wash., where he lives).

At his 2015 conference, Anderson hosted a chief technology officer challenge, where participants designed a desktop supercomputer. The resulting system formed the seed for Pattern Computer. Participants included the other company co-founders, entrepreneur Brad Holtz and Michael Riddle, who previously co-founded Autodesk, maker of AutoCAD and other industrial software.

Pattern Computer focuses on biomedicine. But it also addresses problems in materials design, veterinary medicine, finance, mathematics, and aerospace with its partners, such as an analysis of ways to reduce flight delays.

Pattern Computer has raised $26 million to date, all from individual investors, including Venter and Ken Goldman, former chief financial officer of Yahoo and Pattern's consulting CFO. The startup is now seeking to raise $40 million at a valuation of $1.2 billion, said Anderson.

It's a company that is taking an intriguing new approach to manipulate and analyze big data, said Hood. And when the pandemic hit, it thought deeply about the COVID problem, he said.

Gearing up a spit test

Pattern Computer takes a unique approach to COVID-19 testing. The company analyzes patterns of light that pass through and are absorbed by spit.

The test takes only two drops of saliva and reads off a result from the company's ProSpectral device within three seconds. The device harnesses an approach called hyperspectral sensing, which involves the analysis of light across all spectrums.

Instead of measuring the virus directly, the test captures the jumbled immune and metabolic response to disease. There's a fingerprint for that in light, said Anderson.

Company researchers trained and assessed their model using spit from 470 samples roughly equally divided between those that were COVID-19 positive and negative on a PCR test, a conventional way to detect the disease.

Pattern's test could detect 100% of people with the disease, with 8% of individuals without the disease showing a false-positive result, Anderson reported at the Life Sciences Innovation Northwest meeting this April. Shifting the test's parameters enabled some COVID-19 cases to slip through undetected but yielded fewer false-positive results.

The test is also inexpensive: the in-house cost of running it is about 50 cents.

The testing approach is very smart, said Taran Gujral, a systems biologist and associate professor at Fred Hutchinson Cancer Center. Gujral, who does not have a financial or collaboration relationship with the company, said the method also holds promise for detecting other diseases, potentially enabling rapid testing in airports, hospitals, and in the field.

We think it will change diagnostics, said Anderson.

Keeping company secrets

Other outside researchers said they need more information to assess the companys approach to COVID-19 testing.

The company does not divulge whether it captures data using a standard type of spectrophotometer that measures light in biological samples, or another instrument. They are not sharing any information on how the signal is generated, said Dan Fu, an assistant professor of chemistry at the University of Washington.

University of Washington microbiology professor Evgeni Sokurenko, who is developing a rapid test for COVID-19 variants as co-founder of ID Genomics, said it's important to look closely at Pattern Computer's data, in particular the PCR data it used to train and test its models.

PCR tests work by replicating DNA derived from the virus through multiple cycles to detect a signal. Different labs use different cycle numbers for COVID-19 testing, he said. Higher cycle numbers detect lower levels of virus.

The majority of Pattern Computer's COVID-19 positive samples had a cycle threshold below 30, said company researcher Matt Keener, whereas the typical threshold is set higher, at 35-40 cycles, said Sokurenko.

That raises the possibility that the company's models may not be geared to pick up low levels of the virus, and could therefore miss some asymptomatic infections, said Sokurenko.

Keener countered that the company's data are consistent across all PCR thresholds. The results don't show any statistical sensitivity to the PCR value, said Keener. Our accuracy holds no matter what the PCR value for an individual test sample is. In addition, the accuracy of the company's test held true whether the samples came from asymptomatic or symptomatic individuals, he said.

The U.S. Food and Drug Administration will be the final judge of the companys COVID-19 test.

Pattern Computer has applied to the agency for emergency use authorization. It has also identified four other countries for potential launch and has arranged for partners to help it scale up and produce the test.

We're looking forward to being able to discuss more once we are comfortably down the road to regulatory approvals and such, said Keener.

Pattern Computer's other bioscience projects include mining databases on gene activity in cancer cells to identify potential treatments, based on drugs already approved for other diseases, though it's hard to tell how the company's approach to this and other data-mining problems compares to others, said Gujral, who does similar research.

The company has identified two drug combinations that kill breast cancer cell lines in culture, and is moving them through animal testing for hard-to-treat triple-negative tumors. It also is investigating treatments for ovarian cancer and other tumor types.

Speaking at an investor presentation earlier this year, Omid Moghadam, CEO of diagnostics startup Namida Lab, said Pattern's discovery engine substantially increased the predictive accuracy of an experimental test for breast cancer based on tear samples. Moghadam is a Pattern Computer customer and advisor.

And while Pattern has not published its bioscience projects in peer-reviewed journals, their first priority has been to get everything going, which they've really had to put an enormous amount of time into, said Hood. I suspect they will be publishing comparative papers in the future.

The team has been refining its system and mathematical tools over the last several years, with a lean crew of 21 employees. We've been very heads down, said Anderson.

Next generation computing

Pattern Computer is following the path of other groups, from academic labs to tech companies like Alphabet, that are changing how computers are constructed and programmed. The advent of artificial intelligence is spurring a surge of innovation.

AI models need to process vast numbers of calculations simultaneously. And current computing architecture designs are becoming a bottleneck for computing speed, infrastructure cost, and power consumption, said University of Chicago assistant professor of molecular engineering Sihong Wang.

People working on the hardware side have started to develop a completely different type of computing platform that processes information by emulating the operation of neurons in the brain, said Wang, who recently developed a flexible computing chip for wearable health tech, and is not familiar with Pattern Computers system.

Anderson said Pattern Computer's approach is unique. The company created an AI system that is distinct from the neural network approach leveraged by others, he said. This is qualitatively very different from where someone has a neural network and they're pushing it and modifying it, he said.

Pattern Computer's explainable AI enables it to counteract bias that can be baked into more conventional machine learning models by skewed training datasets, said Anderson.

It allows us to see how and why the system was successful in getting high prediction rates, he said. Knowing how and why the system works provides the type of knowledge required to make major pattern discoveries, improve research, and solve real business problems.

Building that new way to make sense of patterns is a challenging problem, said Neeraj Kumar, a senior data scientist at Pacific Northwest National Laboratory.

In a recent preprint with outside researchers, company researchers published their view of how explainable AI could be applied to health data.

The publication does not specify how the company's system works, said Vijay Janapa Reddi, a Harvard associate professor who directs the university's Edge Computing Lab. It is hard to glean much about the startup from the preprint, said Reddi, who was not familiar with the company.

But Kumar has seen enough to be convinced.

Pattern Computer's computational approach is very robust, said Kumar, an author on the paper. And it is the first step for developing an explainable AI by extracting novel patterns in complex data that cannot be discovered using conventional analytical techniques and algorithms, he said.

Meanwhile, the company is turning its attention to securing regulatory approval for its COVID-19 test and planning for scale up.

We've created a different kind of company, said Anderson. We've done it in a different way.

Continue reading here:

Startup detects COVID-19 using spit, light, and a computer built to analyze patterns - GeekWire


Ithaka Announces $2.5 Million Investment to Open Annotation Provider Hypothesis & Say Hello to Anno – LJ INFOdocket

From a Letter Posted by Kevin Guthrie, Ithaka President (via Ithaka.org):

I am excited to share today that we have invested $2.5 million in Anno, the public-benefit corporation that is home to the nonprofit Hypothesis.

As you may know, Hypothesis is a tool that enables people to annotate documents and webpages. Its free browser extension is in use by a million people globally, with a version that integrates with learning management systems now installed at 200 colleges and universities. We see tremendous potential for tools like Hypothesis that are open and interoperable to improve teaching and learning.

In addition to the investment, we are working on a pilot project with Anno to enable the use of Hypothesis with the text-based materials on JSTOR through learning management systems. As an organization with a mission to expand access to knowledge and education, ITHAKA's investment and this collaboration will support the use and study of the materials you and we have worked so hard to produce, preserve and make accessible. I encourage you to read our public announcement as well as Anno's blog post for more details.

As you can tell, we are excited about our relationship with Anno. Their purpose is to build new open, interoperable infrastructure connecting the world's people and ideas over all content on every platform using a new unit of speech, the digital annotation, to enable a world of diverse collaborative services for the benefit of humanity. At a time when learning and understanding from our past, present, and future, and from one another, is so desperately needed, we are eager to play a role in bringing this vision to life.

In the coming weeks, we'll be sharing more about the promise of social annotation and what we are learning through the JSTOR-Hypothesis pilot. Keep an eye out in our upcoming newsletters for a short video showing how the integration will work, including some back-end authentication magic within the learning management system that obviates the need for content to be downloaded locally, ensuring that use of those materials happens on the JSTOR platform, an approach we know is important to both publishers and librarians.

[Clip]

Kevin Guthrie, President, ITHAKA

From a Hypothesis Blog Post by Dan Whaley, Founder/CEO:

In 2019, we and others formed Invest In Open Infrastructure (IOI), an initiative to dramatically increase the amount of funding available to open scholarly infrastructure. We recruited Kaitlin Thaney to that effort, and she has been doing a terrific job laying the foundation for this.

But all this would take time we didn't have.

In response, and to better position us to achieve our long-held mission, we've formed Anno, a public benefit corporation (formally Annotation Unlimited, PBC) that shares the Hypothesis mission as well as its team. We've done this so that we can take investment in a mission-aligned way and scale the Hypothesis service to meet the opportunity in front of us.

Anno is funded by a $14M seed round that includes a $2.5M investment from ITHAKA, the nonprofit provider of JSTOR, a digital library that serves more than 13,000 education institutions around the world, providing access to more than 12 million journal articles, books, images and primary sources in 75 disciplines. Also participating in the round are At.inc, Triage Ventures, Esther Dyson, Mark Pincus and others. ITHAKA's president, Kevin Guthrie, has joined Anno's board as an observer.

[Clip]

Anno will help us scale Hypothesis in the higher education and research markets. We believe there is no better way to bring annotation to the larger world than through institutions of learning and the students, faculty, scientists and scholars that rely on us. The nonprofit Hypothesis Project will focus on advocacy, standards and the development of the larger paradigm of open annotation beyond our implementation.

While our organizational structure might be evolving, our approach remains the same. We're still the same Hypothesis: we'll still develop open source software based on open standards, we'll still champion the same principles we were founded on, and we'll still speak up when we see things that just aren't right. Importantly, a Hypothesis account will remain free for individual users.

For our institutional customers in higher education, nothing will change: not the product, not the support you receive and not the way you do business with us.

Read the Complete Blog Post (view Additional Interview with Esther Dyson)


See the original post here:

Ithaka Announces $2.5 Million Investment to Open Annotation Provider Hypothesis & Say Hello to Anno - LJ INFOdocket


Prediction of mortality risk of health checkup participants using machine learning-based models: the J-SHC study | Scientific Reports – Nature.com

Participants

This study was conducted as part of the ongoing Study on the Design of a Comprehensive Medical System for Chronic Kidney Disease (CKD) Based on Individual Risk Assessment by Specific Health Examination (J-SHC Study). A specific health checkup is conducted annually for all residents aged 40-74 years covered by the National Health Insurance in Japan. In this study, a baseline survey was conducted in 685,889 people (42.7% males, aged 40-74 years) who participated in specific health checkups from 2008 to 2014 in eight regions (Yamagata, Fukushima, Niigata, Ibaraki, Toyonaka, Fukuoka, Miyazaki, and Okinawa prefectures). The details of this study have been described elsewhere11. Of the 685,889 baseline participants, 169,910 were excluded from the study because baseline data on lifestyle information or blood tests were not available. In addition, 399,230 participants with a survival follow-up of fewer than 5 years from the baseline survey were excluded. Therefore, 116,749 participants (42.4% men) with a known 5-year survival or mortality status were included in this study.

This study was conducted in accordance with the Declaration of Helsinki guidelines. This study was approved by the Ethics Committee of Yamagata University (Approval No. 2008103). All data were anonymized before analysis; therefore, the ethics committee of Yamagata University waived the need for informed consent from study participants.

For validating a predictive model, the most desirable approach is a prospective study on unseen data. In this study, data on health checkup dates were available. Therefore, we divided the total data into training and test datasets, based on health checkup dates, to build and test predictive models. The training dataset consisted of 85,361 participants who took part in the study in 2008. The test dataset consisted of 31,388 participants who took part from 2009 to 2014. These datasets were temporally separated, and there were no overlapping participants. This method evaluates the model in a manner similar to a prospective study and has the advantage of demonstrating temporal generalizability. As preprocessing, the most extreme 0.01% of values were clipped and the data were normalized.
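A minimal Python sketch of this temporal split and preprocessing is given below. The column name checkup_year, the exact handling of the 0.01% clipping, and the choice of MinMaxScaler are assumptions for illustration, not details reported by the authors.

import pandas as pd
from sklearn.preprocessing import MinMaxScaler

def temporal_split_and_scale(df, feature_cols):
    # Temporal split: 2008 participants form the training set,
    # 2009-2014 participants form the test set (no overlapping subjects).
    train = df[df["checkup_year"] == 2008].copy()
    test = df[df["checkup_year"].between(2009, 2014)].copy()

    # Clip the extreme 0.01% tails of each feature, with thresholds
    # estimated on the training data only.
    lower = train[feature_cols].quantile(0.0001)
    upper = train[feature_cols].quantile(0.9999)
    train[feature_cols] = train[feature_cols].clip(lower, upper, axis=1)
    test[feature_cols] = test[feature_cols].clip(lower, upper, axis=1)

    # Normalize with parameters learned from the training data only.
    scaler = MinMaxScaler().fit(train[feature_cols])
    train[feature_cols] = scaler.transform(train[feature_cols])
    test[feature_cols] = scaler.transform(test[feature_cols])
    return train, test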

Information on 38 variables was obtained during the baseline survey of the health checkups. When there were highly correlated variables (correlation coefficient greater than 0.75), only one of these variables was included in the analysis. High correlations were found between body weight, abdominal circumference, body mass index, hemoglobin A1c (HbA1c), fasting blood sugar, and AST and alanine aminotransferase (ALT) levels. We then used body weight, HbA1c level, and AST level as explanatory variables. Finally, we used the following 34 variables to build the prediction models: age, sex, height, weight, systolic blood pressure, diastolic blood pressure, urine glucose, urine protein, urine occult blood, uric acid, triglycerides, high-density lipoprotein cholesterol (HDL-C), LDL-C, AST, γ-glutamyl transpeptidase (γ-GTP), estimated glomerular filtration rate (eGFR), HbA1c, smoking, alcohol consumption, medication (for hypertension, diabetes, and dyslipidemia), history of stroke, heart disease, and renal failure, weight gain (more than 10 kg since age 20), exercise (more than 30 min per session, more than 2 days per week), walking (more than 1 h per day), walking speed, eating speed, supper within 2 h of bedtime, skipping breakfast, late-night snacks, and sleep status.
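This kind of correlation screening, keeping only one variable from each highly correlated pair, can be sketched as follows, assuming the candidate variables sit in a pandas DataFrame. The 0.75 threshold comes from the text; everything else is illustrative.

import numpy as np

def drop_highly_correlated(df, threshold=0.75):
    # Keep only one variable from each pair whose absolute correlation
    # exceeds the threshold (e.g., weight is kept while BMI is dropped).
    corr = df.corr().abs()
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    to_drop = [col for col in upper.columns if (upper[col] > threshold).any()]
    return df.drop(columns=to_drop)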

The values of each item in the training dataset for the alive/dead groups were compared using the chi-square test, Student's t-test, and Mann-Whitney U test, and significant differences (P < 0.05) were marked with an asterisk (*) (Supplementary Tables S1 and S2).

We used two machine learning-based methods (gradient boosting decision tree [XGBoost], neural network) and one conventional method (logistic regression) to build the prediction models. All the models were built using Python 3.7. We used the XGBoost library for GBDT, TensorFlow for neural network, and Scikit-learn for logistic regression.
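The three model families can be instantiated with the libraries named above; the sketch below shows one possible setup, with placeholder hyperparameters and layer sizes rather than the tuned values from the study.

import xgboost as xgb
import tensorflow as tf
from sklearn.linear_model import LogisticRegression

def build_models(n_features):
    # Gradient boosting decision tree (XGBoost library).
    gbdt = xgb.XGBClassifier(objective="binary:logistic")

    # Simple feed-forward neural network (TensorFlow/Keras).
    nn = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_features,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    nn.compile(optimizer="adam", loss="binary_crossentropy",
               metrics=[tf.keras.metrics.AUC()])

    # Conventional logistic regression (scikit-learn).
    logreg = LogisticRegression(max_iter=1000)
    return gbdt, nn, logreg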

The data obtained in this study contained missing values. XGBoost can be trained and can predict even with missing values because of its design; however, the neural network and logistic regression models cannot. Therefore, we imputed the missing values using the k-nearest neighbor method (k = 5), and the test data were imputed using an imputer trained only on the training data.
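A short sketch of that imputation step using scikit-learn's KNNImputer; the feature matrices X_train and X_test are assumed to come from the earlier split.

from sklearn.impute import KNNImputer

def impute_missing(X_train, X_test):
    imputer = KNNImputer(n_neighbors=5)               # k = 5 nearest neighbours
    X_train_imputed = imputer.fit_transform(X_train)  # fit on training data only
    X_test_imputed = imputer.transform(X_test)        # reuse the trained imputer
    return X_train_imputed, X_test_imputed
    # Note: XGBoost itself can be trained directly on data with missing values.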

The parameters required for each model were determined for the training data using the RandomizedSearchCV class of the Scikit-learn library and repeating fivefold cross-validation 5000 times.
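An illustrative use of RandomizedSearchCV with fivefold cross-validation for the XGBoost model is shown below; the search distributions are assumptions, and the same pattern applies to the other two models.

from scipy.stats import randint, uniform
from sklearn.model_selection import RandomizedSearchCV
import xgboost as xgb

param_distributions = {
    "max_depth": randint(2, 10),
    "learning_rate": uniform(0.01, 0.3),
    "n_estimators": randint(100, 1000),
    "subsample": uniform(0.5, 0.5),
}
search = RandomizedSearchCV(
    estimator=xgb.XGBClassifier(objective="binary:logistic"),
    param_distributions=param_distributions,
    n_iter=5000,        # 5000 sampled parameter settings
    cv=5,               # fivefold cross-validation
    scoring="roc_auc",
    n_jobs=-1,
)
# best_model = search.fit(X_train_imputed, y_train).best_estimator_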

The performance of each prediction model was evaluated by predicting the test dataset, drawing a ROC curve, and computing the AUC. In addition, the accuracy, precision, recall, F1 score (the harmonic mean of precision and recall), and confusion matrix were calculated for each model. To assess the importance of explanatory variables for the predictive models, we used SHAP and obtained SHAP values that express the influence of each explanatory variable on the output of the model4,12. The workflow diagram of this study is shown in Fig. 5.

Figure 5. Workflow diagram of development and performance evaluation of predictive models.
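For the XGBoost model, the evaluation and SHAP analysis described above might look roughly like the sketch below; the fitted model and test arrays are assumed to come from the earlier steps.

import shap
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             precision_score, recall_score, roc_auc_score)

def evaluate(model, X_test, y_test):
    proba = model.predict_proba(X_test)[:, 1]
    pred = (proba >= 0.5).astype(int)
    print("AUC:      ", roc_auc_score(y_test, proba))
    print("Accuracy: ", accuracy_score(y_test, pred))
    print("Precision:", precision_score(y_test, pred))
    print("Recall:   ", recall_score(y_test, pred))
    print("F1:       ", f1_score(y_test, pred))
    print(confusion_matrix(y_test, pred))

    # SHAP values express how each explanatory variable pushes an
    # individual prediction towards the dead or alive class.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X_test)
    shap.summary_plot(shap_values, X_test)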

More:

Prediction of mortality risk of health checkup participants using machine learning-based models: the J-SHC study | Scientific Reports - Nature.com


IMLS: $5.2 Million Awarded to Strengthen Library Services for Tribal Communities, Native Hawaiians – LJ INFOdocket

From the Institute of Museum and Library Services:

The Institute of Museum and Library Services today announced grants totaling $5,253,000 through three programs designed to support and improve library services of Native American, Native Alaskan, and Native Hawaiian organizations.

With these awards, IMLS recognizes the importance of supporting libraries and cultural centers in First Nations and Tribal communities, said IMLS Director Crosby Kemper. The importance of cultural learning is essential in all communities, but it is critical to embrace and honor the precious and unique heritage of Native communities. These Native American and Native Hawaiian grants expand and enhance literacy programs, language preservation, community storytelling, and digital access.

Native American Library Services Basic Grants support existing library operations and maintain core library services. These non-competitive grants are awarded in equal amounts among eligible applicants. Grants totaling $1,297,411 were awarded to 117 Indian Tribes, Alaska Native villages, and other regional and village corporations.

Native American Library Services Enhancement Grants assist Native American Tribes in improving core library services for their communities. Enhancement Grants are only awarded to applicants that have applied for a Native American Library Services Basic Grant in the same fiscal year.

IMLS received 27 applications requesting $3,470,682 and was able to award $3,096,553 to 23 Tribes in 11 states. This years awarded grants will advance the preservation and revitalization of language and culture, as well as educational programming and digital services.

Native Hawaiian Library Services Grants are available to nonprofit organizations that primarily serve and represent Native Hawaiians so they can enhance existing or implement new library services. IMLS received eight applications requesting $1,187,718 and awarded $859,036 to six organizations serving Native Hawaiians.

Some examples of awarded projects include:

For more information about upcoming grant opportunities, please visit the IMLS website.

Source


Read the rest here:

IMLS: $5.2 Million Awarded to Strengthen Library Services for Tribal Communities, Native Hawaiians - LJ INFOdocket


News from the world of Education: August 19, 2022 – The Hindu

Conference on Smart Computing and Information Security

Marwadi University (MU) will host an International Conference on Advancements in Smart Computing and Information Security (ASCIS) from November 25 to 27. The conference aims to bring together academicians, researchers, and industry practitioners of intelligent computing and information security. ASCIS 2022 has also issued a call for research papers under different tracks, including Artificial Intelligence, Data Science, Smart Computing, Cyber Security and Industry. The deadline to submit is August 31, 2022. For details visit https://ascisconf.org/ or email: registrations@ascisconf.org

Career Development Conclave

iSchoolConnect will host a Career Development Conclave on How technology is shaping career choices of students. This will be held virtually on August 27 and is meant for heads of institutions, heads of placements, training and placement officers, industry-relationship officers, and coordinators of higher-ed institutions. To register visit https://bit.ly/3Axl6ON

MoUs and partnerships

Edverse recently signed an MoU with LM Thapar School of Management, Mohali, to enable students to get fully immersed in a digital classroom environment and interact with education experts.

Sattvik Council of India, in collaboration with IRCTC, recently launched SEED 2.0, a virtual internship programme designed to train and educate students on the concepts of Vegetarian Friendly Tourism and Vegetarian Friendly Railway Services (VFRS).

The Office of Learning Support, Ashoka University, recently organised a summit on inclusion of students with disabilities in higher education in partnership with Changeinkk, I-Stem and Inclusive University Alliance.

ConveGenius Insights recently conducted a state-wide assessment for the Himachal Pradesh Samagra Shiksha department in partnership with Michael and Susan Dell Foundation, to develop a remedial learning strategy after understanding the post-COVID student learning outcomes. Students of classes 4, 6 and 9 were assessed for Maths and Language abilities.

Executive Programme in Global Business Management

IIM Calcutta recently launched the 15th batch of the one-year Executive Programme in Global Business Management with Emeritus, to help learners develop future-ready business skills. For details visit https://bit.ly/3PywOx0

Events and launches

University of Essex recently announced the Essex Preparation Programme (EPP), a special and free six-week online course to help incoming students become university-ready. It is offered at no cost to students, and open to anyone applying for undergraduate study at the institution. If they complete the course and register to study at Essex for the 2022-23 academic year, they will qualify for £250 in financial assistance.

Aakash BYJU'S has launched Education For All, a nation-wide project to offer free NEET and JEE coaching to students from underprivileged families, especially girls. To identify the beneficiary students, Aakash will partner with select NGOs, which can nominate students.

JAIN Online recently launched #JAINOnlineCan, a brand campaign to educate, empower, and encourage informed decision-making among aspirants, when it comes to choosing their programmes.

New Zealand Institute of Skills and Technology announced its new international education strategy during an event held at the New Zealand High Commission, New Delhi. Participants included Dr. Leon de W Fourie, the Chief Executive of Te Pūkenga International, and David Pine, New Zealand's High Commissioner to India.

UPES Hackathon 3.0, a 24-hour coding marathon held by UPES, Dehra Dun, concluded recently. The problem statements from industry and the public sector revolved around themes such as Business Intelligence, Data Mining, Artificial Intelligence, Intelligent Transportation, Computational Optimisation, Cyber Security, Environment and Ecology, Healthcare, EdTech, and so on. Over 200 teams from India and the SAARC nations participated and 28 made it to the finale.

Universal Business School recently organised HR and ESG Symposium 2022, which was attended by many senior Human Resource officers, CEOs and CXOs from companies such as Cadbury, TATA Consultancy Services, BNP Paribas, Deloitte, and more. The institute also welcomed a new batch of 375 students for its UG and PG programmes and hosted an induction ceremony for them.

EduBridge launched two new offerings for learners. Learn and Earn is a payment offering where a learner avails an interest-free payment plan for up to 15 months. The next is the Secure Your Salary offering with Digit Group's Total Protect policy, for learners in financial need. For details, visit http://www.edubridgeindia.com

ARCH College of Design and Business recently organised a fashion show during the Global Educators Fest 2022 in which principals from several institutions walked the ramp. Under the Design Culture Initiative, ARCH partnered with Global Educators Fest 2022 and provided another platform to its students to showcase the design talent and creativity. The college, in association with Women Mentor Forum, also celebrated National Handloom Day 2022.

IMS Ghaziabad recently organised a talk on Marketing Management by Mohan Lal Agarwal, President, Indo-Gulf Management Association, Dubai for Term I students of PGDM Batch 2022-24. This was part of the Global Talk Series.

Celebrating Independence Day

The Academy School hosted an Independent Country, Independent Me initiative in which students pasted stickers of the national flag on homes around the school and conducted a survey on how people perceive India.

Students from multiple branches of Orchids - The International School, celebrated 75th Independence Day by hoisting the national flag and performing to some patriotic songs.

The Canadian International School's celebration included flag hoisting and cultural performances. Students and teachers performed to A.R. Rahman's rendition of Vande Mataram.

K.N. Nehru, Tamil Nadu Minister for Municipal Administration, Urban and Water Supply hoisted a national flag measuring 75x50 feet, hand painted by 75 students of the Fashion Technology Department of Sona College of Technology and Textile Technology Department of Thiagarajar Polytechnic, Salem.

The NSS unit of Saveetha School of Management (SIMATS) recently conducted a competition to mark Independence Day. Debate and essay-writing in English and Tamil, painting, and a quiz on the theme of Independence were conducted.

IIM Bangalore recently hosted several special programmes and activities to mark 75 years of Indian Independence. The discussions were themed, India: Pioneering Past and Bright Future.

Winners of SmartIDEAthon 2022 Challenge

The winners of the SmartIDEAthon 2022 Challenge conducted at Gitam University, Visakhapatnam, were announced recently. Animesh Kumar and Hrithik Jaiswal, students from Netaji Subhash University of Technology, Hajipur, Bihar, came first for their idea of IoT-based solutions for early detection and monitoring of diabetes to prevent foot ulcers. The runners-up were Karthickjothi M and Mothish M, students of Madras Christian College, for their idea to use Assistive Technology to help people with speech and hearing impairments break communication and social barriers. Amirthalakshmi K and Carolin Mary X from Tamil Nadu's Saranathan College of Engineering won a prize for the Best Woman-led Entrepreneurship Idea, for working on a smart wheelchair to help people with mobility disabilities caused by Chronic Obstructive Pulmonary Disease (COPD). The Leben Johnson People's Choice Award went to Pravin Kumar and Nabeel of Rajalakshmi Engineering College, Tamil Nadu, for their solution enabling hands-free control of smart devices for amputees, people with neurological disorders, and people with hand fractures.

Awards

Dr. T. Jayanthi, a student of the Diplomate of National Board (DNB) - Ophthalmology programme at Dr. Agarwal's Eye Hospital, Chennai, was recently awarded the Gold Medal by the National Board of Examinations (NBE) in Medical Sciences, in her specialty, at the 21st convocation ceremony of NBE in Medical Sciences, New Delhi.

Kush Malpani of Cathedral and John Connon School recently won the Gold at the International Economics Olympiad 2022.

Convocations

O.P. Jindal Global University recently held its convocation and awarded UG, PG and doctoral degrees to 3,100 students from various schools and institutes. Justice D.Y. Chandrachud of the Supreme Court of India delivered the Convocation Address. The Rt Hon Patricia Scotland QC, Secretary-General, The Commonwealth also spoke at the event. The university also bestowed the Lifetime Achievement Award and the Global Justice Medal on them.

Dharmendra Pradhan, Union Minister of Education and Skill Development and Entrepreneurship, Government of India, awarded Blockchain-based Digital Degrees to 1,555 students of NIT Rourkela during its 19th Convocation. These certificates will now be accessible from anywhere in the world since they are encrypted and stored digitally.

Corporate and cultural immersion programme for students

MQDC India recently organised a five-day immersion programme for Bennett University students to provide a hands-on experience and hone industry-relevant skills in an academically-stimulating environment.

Monsoon Maladies campaign

SRCC Children's Hospital, managed by Narayana Health, Mumbai, recently launched a campaign to create awareness among students about Monsoon Maladies: Seasonal water and vector-borne diseases.

First cohort of UN India YuWaah Advocates appointed

UN agencies in India marked International Youth Day by appointing the first cohort of UN India YuWaah Advocates and making a pact for youth centrality. Six young individuals from diverse backgrounds and geographies have been chosen to inspire action for the SDGs across high-level decision-making platforms. In turn, they will receive access to information, skilling, and mentorship support, and will work in partnership with external experts across UN agencies and partner networks, celebrity advocates, ambassadors, and digital influencers to amplify and foreground young people's priorities and perspectives towards shaping the 2030 Agenda.

McAfee Cyberbullying Report 2022

A global report titled Cyberbullying in Plain Sight, by McAfee, recently uncovered several new and consequential trends regarding cyberbullying, including the types of bullying being reported, data around perpetrators and victims of online bullying, and the tensions between how parents and children define cyberbullying activity. Some findings: extreme forms of cyberbullying reported besides racism (42%) include trolling (36%), personal attacks (29%), sexual harassment (30%), threat of personal harm (28%) and doxing (23%). India reported prominent acts of cyberbullying such as spreading false rumours at 39%, being excluded from groups and conversations at 35%, and name calling at 34%. Additionally, 45% of Indian children said they hide their cyberbullying experiences from parents, perhaps due to the relative absence of conversation.

Read the original here:

News from the world of Education: August 19, 2022 - The Hindu


Faculty Position, Computer and Network Engineering job with UNITED ARAB EMIRATES UNIVERSITY | 305734 – Times Higher Education

Job Description

The College of Information Technology (CIT) has engaged in an ambitious reorganization effort aiming to harness the prevalence of computing and the rise of artificial intelligence to advance science and technology innovation, for the benefit of society. Under its new structure, the College will serve as the nexus of computing and informatics at the United Arab Emirates University (UAEU). CIT will build on the strength of its current research programs to create new multidisciplinary research initiatives and partnerships, across and beyond the university campus, critical to its long-term stability and growth. CIT will also expand its education portfolio with new multidisciplinary degree programs, including a BSc. in Artificial Intelligence, a BSc. in Data Science, a BSc. in Computational Linguistics jointly with the College of Humanities and Social Sciences, a MSc. in IoT, and a Ph.D. in Informatics and Computing. Also planned is a suite of online Microcredentials in emerging fields of study, including IoT, Cybercrime Law and Digital Forensics, Blockchains, and Cloud Computing.

About the Position:

We seek faculty candidates with strong research record in all areas of Artificial Intelligence and Data Science, with a special emphasis on emerging areas of Artificial Intelligence and Machine Learning and on the theoretical foundations and applications of Data Sciences and AI/ML in a wide range of fields and domain applications, including Smart IoT, Smart Environments and Autonomous and Intelligent Systems. The successful candidates are expected to complement and enhance the current strength of the departments in AI and Data Science related areas, including Deep Learning, Natural Language Processing, Big Data, and Data Mining, and to contribute to the teaching and research in these areas.

Candidate Qualifications:

Candidates must hold a Ph.D. degree in computer science, information science or closely related areas from a recognized university.

Preferred qualifications include:

Faculty rank is commensurate with qualifications and experience. The positions will remain open until filled. The UAEU and CIT are committed to fostering a diverse, inclusive, and equitable environment and culture for students, staff, and faculty.

Application Instructions:

Applications must be submitted online at https://jobs.uaeu.ac.ae/search.jsp (postings under CIT). The instructions to complete an application are available on the website.

A completed application must include:

About the UAEU:

The United Arab Emirates University (UAEU) is the first academic institution in the United Arab Emirates. Founded by the late Sheikh Zayed bin Sultan Al Nahyan in 1976, UAEU is committed to innovation and excellence in research and education. As the countrys flagship university, UAEU aims to create and disseminate fundamental knowledge through cutting-edge research in areas of strategic significance to the nation and the region, promote the spirit of discovery and entrepreneurship, and educate indigenous leaders of the highest caliber.

Minimum Qualification

Candidates must hold a Ph.D. degree in computer science, information science or closely related areas from a recognized university.

Preferred qualifications include:

Preferred Qualification

Strong research record in all areas of Artificial Intelligence and Data Science, with a special emphasis on emerging areas of Artificial Intelligence and Machine Learning and on the theoretical foundations and applications of Data Sciences and AI/ML in a wide range of fields and domain applications, including Smart IoT, Smart Environments and Autonomous and Intelligent Systems. The successful candidates are expected to complement and enhance the current strength of the departments in AI and Data Science related areas, including Deep Learning, Natural Language Processing, Big Data, and Data Mining, and to contribute to the teaching and research in these areas.

Expected Skills/Rank/Experience

Faculty rank is commensurate with qualifications and experience. The positions will remain open until filled.

Special Instructions to Applicant

The review process will continue until the position is filled. A completed application must be submitted electronically at: https://jobs.uaeu.ac.ae/

Division: College of Information Tech. (CIT)
Department: Computer & Network Engineering (CIT)
Job Close Date: open until filled
Job Category: Academic - Faculty

Read more here:

Faculty Position, Computer and Network Engineering job with UNITED ARAB EMIRATES UNIVERSITY | 305734 - Times Higher Education


CNN-XGBoost fusion-based affective state recognition using EEG spectrogram image analysis | Scientific Reports – Nature.com

Figure 1 illustrates the proposed method, which is divided into two segments. On the left, we take a feature fusion-based approach, emphasizing signal processing on the acquired dataset by denoising it with a band-pass filter and extracting the alpha, beta, and theta bands for further processing. Numerous features were extracted from these bands. The feature extraction methods include the Fast Fourier Transform, Discrete Cosine Transform, Poincare, Power Spectral Density, Hjorth parameters, and some statistical features. The Chi-square and Recursive Feature Elimination procedures were used to choose the discriminative features among them. Finally, we utilized classification methods such as Support Vector Machine and Extreme Gradient Boosting to classify all the dimensions of emotion and obtain accuracy scores. On the right, we take a spectrogram image-based 2DCNN-XGBoost fusion approach, where we utilize a band-pass filter to denoise the data in the region of interest for different cognitive states. Following that, we performed the Short-Time Fourier Transform and obtained spectrogram images. We use a two-dimensional Convolutional Neural Network (CNN) to train on the extracted images and a dense neural network layer, from whose trained weights the learned features are obtained. After that, we utilized Extreme Gradient Boosting to classify all of the dimensions of emotion based on the extracted features. Finally, we compared the outcomes of both approaches.

Figure 1. An overview of the proposed method.
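As a rough illustration of the fusion idea (not the authors' exact architecture), the sketch below trains a small 2D CNN on spectrogram images, re-uses its dense layer as a feature extractor, and classifies those features with XGBoost. The layer sizes and image shape are assumptions.

import tensorflow as tf
import xgboost as xgb

def build_cnn(input_shape=(128, 128, 3), n_classes=2):
    cnn = tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu", name="feature_layer"),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    cnn.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                metrics=["accuracy"])
    return cnn

def cnn_xgboost_fusion(trained_cnn, X_img_train, y_train, X_img_test):
    # Use the trained dense layer's activations as features for XGBoost.
    extractor = tf.keras.Model(trained_cnn.input,
                               trained_cnn.get_layer("feature_layer").output)
    f_train = extractor.predict(X_img_train)
    f_test = extractor.predict(X_img_test)
    booster = xgb.XGBClassifier().fit(f_train, y_train)
    return booster.predict(f_test)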

In the proposed method (i.e., Fig. 1), we have used the DREAMER3 dataset. Audio and video stimuli were used to elicit the emotional responses of the participants in this dataset. The dataset uses 18 film clips as stimuli, selected and analyzed by Gabert-Quillen et al.16 to induce emotional sensation. The clips came from several films showing a wide variety of feelings, with two clips each centered on one emotion: amusement, excitement, happiness, calmness, anger, disgust, fear, sadness, and surprise. All of the clips are between 65 and 393 seconds long, giving participants plenty of time to convey their feelings17,18. However, only the last 60 s of each video recording were considered for the subsequent steps of the study. The clips were shown to the participants on a 45-inch television monitor with an attached speaker so that they could hear the soundtrack. The EEG signals were captured with the EMOTIV EPOC, a 16-channel wireless headset. Data from sixteen distinct places were acquired using these channels. The wireless SHIMMER ECG sensor provided additional data. This study, however, focused solely on EEG signals from the DREAMER dataset.

Initially, data collection was performed for 25 participants, but due to some technical problems, data collection from 2 of them was incomplete. As a result, the data from 23 participants were included in the final dataset. The dataset consists of signals from trial and pre-trial periods, with the pre-trial recordings collected as a baseline for each stimulus test. The data dimensions of the EEG signals from the DREAMER dataset are shown in Table 2.

EEG signals usually contain a great deal of noise: the great majority of ocular artifacts occur below 4 Hz, muscular motions occur above 30 Hz, and power line noise occurs between 50 and 60 Hz3. For a better analysis, the noise must be reduced or eliminated. Additionally, to work on a specific area, we must concentrate on the frequency range that carries the stimuli-induced signals. The information linked to the emotion recognition task is contained in a frequency band ranging from 4 to 30 Hz3. We utilized band-pass filtering to acquire sample values ranging from 4 to 30 Hz, removing the noise from the signals and isolating the band of interest.

The band-pass filter is a technique or procedure that accepts frequencies within a specified range while rejecting any frequencies outside the range of interest. It combines a low-pass and a high-pass filter to eliminate frequencies that aren't required. The fundamental goal of such a filter is to limit the signal's bandwidth, allowing us to acquire the signal we need from the frequency range we require while also reducing unwanted noise by blocking frequency regions we won't be using anyway. We used a band-pass filter in both sections of our proposed method. In the feature fusion-based approach, we used this filtering technique to isolate the frequency band between 4 and 30 Hz, which contains the crucial information we require. This helps in the elimination of unwanted noise. We have decided to divide the signals of interest into three further bands: theta, alpha, and beta. These bands were chosen because they are the most commonly used bands for EEG signal analysis. The definition of band borders is somewhat subjective; the ranges that we use in our case are theta between 4 and 8 Hz, alpha between 8 and 13 Hz, and beta between 13 and 20 Hz. For the 2DCNN-XGBoost fusion-based approach, we used this filtering technique to isolate the frequency range between 4 and 30 Hz, which contains the relevant signals, and generated spectrogram images. Here the spectrograms were extracted from the signals using the STFT and transformed into RGB pictures.
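A minimal SciPy sketch of these filtering steps is given below; the 128 Hz sampling rate and the fourth-order Butterworth design are assumptions made for illustration.

import numpy as np
from scipy.signal import butter, filtfilt, stft

FS = 128  # sampling rate in Hz (assumed)

def bandpass(x, low, high, fs=FS, order=4):
    b, a = butter(order, [low, high], btype="band", fs=fs)
    return filtfilt(b, a, x)              # zero-phase band-pass filtering

def split_bands(x):
    x = bandpass(x, 4, 30)                # region of interest, 4-30 Hz
    return {"theta": bandpass(x, 4, 8),
            "alpha": bandpass(x, 8, 13),
            "beta":  bandpass(x, 13, 20)}

def spectrogram(x, fs=FS):
    # Short-Time Fourier Transform; the magnitude can be colour-mapped
    # and saved as an RGB image for the CNN branch.
    f, t, Z = stft(x, fs=fs, nperseg=fs)
    return f, t, np.abs(Z)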

After pre-processing, we have used several feature extraction techniques for our feature fusion-based and the 2DCNN-XGBoost fusion-based approach that we discussed below:

The Fast Fourier Transform (FFT) is among the most useful methods for processing various signals19,20,21,22,23. We used the FFT algorithm to calculate a sequence of Discrete Fourier Transform coefficients. The FFT is widely used because it makes operating in the frequency domain as computationally feasible as operating in the time or space domain, computing the transform in O(N log N) operations, where N is the length of the vector. It works by splitting an N-point time-domain signal into N single-point time-domain signals, then calculating the N frequency spectra corresponding to those N signals, and finally synthesizing the N spectra into a single frequency spectrum.

The equations of FFT are shown below (1), (2):

$$H(p) = \sum_{t=0}^{N-1} r(t)\, W_{N}^{pt},$$

(1)

$$r(t) = \frac{1}{N} \sum_{p=0}^{N-1} H(p)\, W_{N}^{-pt}.$$

(2)

Here $H(p)$ represents the Fourier coefficients of $r(t)$.

(a) A baseline EEG signal in the time domain, (b) a baseline EEG signal in the frequency domain using FFT, (c) a stimulus EEG signal in the time domain, (d) a stimulus EEG signal in the frequency domain using FFT.

We have implemented this FFT to get the coefficients shown in Fig. 2. The mean and maximum features for each band were then computed. Therefore, we get 6 features for each channel across 3 bands, for a total of 84 features distributed across 14 channels.
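The following sketch shows how such band-wise FFT features could be assembled, assuming 14 channels and the three bands above; the array shapes and random placeholder data are illustrative only.

```python
# Hedged sketch of the FFT feature step: the mean and maximum of the FFT
# magnitude spectrum of each band-filtered channel, giving 2 features per band,
# 6 per channel, and 84 across 14 channels.
import numpy as np

FS = 128

def fft_features(x):
    """Return (mean, max) of the FFT magnitude spectrum of one band signal."""
    spectrum = np.abs(np.fft.rfft(x))
    return spectrum.mean(), spectrum.max()

# Placeholder: 14 channels x 3 bands of 60 s band-filtered EEG.
channels = np.random.randn(14, 3, 60 * FS)
features = [f for ch in channels for band in ch for f in fft_features(band)]
# len(features) == 14 channels * 3 bands * 2 == 84
```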

The Discrete Cosine Transform (DCT) expresses a finite set of data points as a sum of cosine functions at varying frequencies and has been used in several studies24,25,26,27,28. The DCT corresponds to the Fourier series coefficients of a periodically and symmetrically extended sequence, and it is one of the most commonly used transforms in signal processing. For such a sequence, the imaginary part is zero in both the time and the frequency domain: the real part of the spectrum is even and the imaginary part is odd. The DCT coefficients are computed with Eq. (3):

$$X_{P} = \sum_{n=0}^{N-1} x_{n} \cos\left[\frac{\pi}{N}\left(n+\frac{1}{2}\right)P\right],$$

(3)

where $x_n$ is the input sequence of $N$ real numbers and $X_P$ are the resulting DCT coefficients.

(a) A baseline EEG signal in time domain, (b) A baseline EEG signal in frequency domain using DCT, (c) A stimuli EEG signal in time domain, (d) A stimuli EEG signal in frequency domain using DCT.

We have implemented DCT to get the coefficients shown in Fig. 3. The mean and maximum features for each band were then computed. Therefore, we get 6 features for each channel across 3 bands, for a total of 84 features distributed across 14 channels.
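A parallel sketch for the DCT features, using SciPy's DCT-II; the placeholder data and the normalization choice are assumptions.

```python
# Hedged sketch of the DCT feature step, mirroring the FFT features: the mean
# and maximum of the DCT-II coefficients (Eq. 3) of each band-filtered channel.
import numpy as np
from scipy.fft import dct

def dct_features(x):
    coeffs = dct(x, type=2, norm="ortho")   # DCT-II
    return coeffs.mean(), coeffs.max()

channels = np.random.randn(14, 3, 60 * 128)  # placeholder band-filtered EEG
features = [f for ch in channels for band in ch for f in dct_features(band)]
# 14 channels * 3 bands * 2 = 84 features
```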

The Hjorth parameters indicate the statistical properties of a signal in the time domain through three measures: Activity, Mobility, and Complexity. These parameters have been calculated in many studies29,30,31,32.

Activity: this parameter describes the power of the signal, i.e., the variance of the time function, and it indicates the surface of the power spectrum in the frequency domain. The notation for activity is given in (4),

$$\operatorname{var}(y(t)).$$

(4)

Mobility: this parameter represents the mean frequency, or the proportion of the standard deviation of the power spectrum. It is defined as the square root of the variance of the first derivative of y(t) divided by the variance of y(t). The notation for mobility is given in (5),

$$\sqrt{\frac{\operatorname{var}(y'(t))}{\operatorname{var}(y(t))}}.$$

(5)

Complexity: this parameter reflects the change in frequency. It compares the signal's similarity to a pure sinusoidal wave, and its value converges to 1 the more similar the signal is. The notation for complexity is given in (6),

$$\frac{\operatorname{mobility}(y'(t))}{\operatorname{mobility}(y(t))}.$$

(6)

For our analysis, we calculated Hjorth's activity, mobility, and complexity parameters as features for each band. Therefore, we get 9 features for each channel across 3 bands, for a total of 126 features distributed across 14 channels.
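The Hjorth parameters of Eqs. (4)-(6) can be computed directly with NumPy, approximating derivatives by first differences, as in the hedged sketch below; the placeholder signal is illustrative.

```python
# Hedged sketch of the Hjorth parameters (Eqs. 4-6) computed with NumPy.
import numpy as np

def hjorth(y):
    dy = np.diff(y)            # first derivative y'(t), by first differences
    ddy = np.diff(dy)          # second derivative y''(t)
    activity = np.var(y)                               # Eq. (4)
    mobility = np.sqrt(np.var(dy) / np.var(y))         # Eq. (5)
    mobility_dy = np.sqrt(np.var(ddy) / np.var(dy))
    complexity = mobility_dy / mobility                 # Eq. (6)
    return activity, mobility, complexity

band_signal = np.random.randn(60 * 128)   # placeholder band-filtered channel
act, mob, comp = hjorth(band_signal)
# 3 parameters * 3 bands = 9 features per channel, 126 across 14 channels
```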

Statistical features summarize a signal with simple mathematical descriptors and are widely used to characterize data; understanding how these statistics describe the data also helps in choosing and applying other data science methods to obtain more accurate and well-structured solutions. Several studies33,34,35 on emotion analysis have used statistical features. The statistical features we extracted are the median, mean, maximum, skewness, and variance. As a result, we get 5 features for each channel, for a total of 70 features distributed across 14 channels.
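A minimal sketch of these five statistical features is given below, with scipy.stats supplying the skewness; the placeholder channel is illustrative.

```python
# Hedged sketch of the five statistical features per channel:
# median, mean, max, skewness, and variance.
import numpy as np
from scipy.stats import skew

def statistical_features(x):
    return [np.median(x), np.mean(x), np.max(x), skew(x), np.var(x)]

channel = np.random.randn(60 * 128)       # placeholder EEG channel
feats = statistical_features(channel)     # 5 features * 14 channels = 70
```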

The Poincare plot, which takes a series of intervals and plots each interval against the following one, is an emerging analysis technique. In clinical settings, the geometry of this plot has been shown to distinguish between healthy and unhealthy subjects. It is also used to visualize and quantify the association between consecutive data points in a time series. Because long-term correlation and memory are present in the dynamics of physiological rhythms, the analysis can also be extended to plot the association between data points several steps apart instead of only between two consecutive points. We used two parameters of this plot in our paper:

SD1: the standard deviation of the distances of the points from axis 1, which defines the width of the ellipse (short-term variability). The descriptor SD1 can be defined as (7):

$$SD1 = \frac{\sqrt{2}}{2}\, SD(P_n - P_{n+1}).$$

(7)

SD2: the standard deviation of the distances of the points from axis 2, which defines the length of the ellipse (long-term variability). The descriptor SD2 can be defined as (8):

$$SD2 = \sqrt{2\, SD(P_n)^2 - \frac{1}{2}\, SD(P_n - P_{n+1})^2}.$$

(8)

We have extracted 2 features which are SD1 and SD2 from each band (theta, alpha, beta). Therefore, we get 6 features for each channel across 3 bands, for a total of 84 features distributed across 14 channels.
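The descriptors of Eqs. (7) and (8) translate directly into NumPy, as in the sketch below; the placeholder signal is illustrative.

```python
# Hedged sketch of the Poincare descriptors SD1 and SD2 (Eqs. 7-8),
# where P_n and P_{n+1} are consecutive samples of a band-filtered sequence.
import numpy as np

def poincare_sd(p):
    diff = p[:-1] - p[1:]                                        # P_n - P_{n+1}
    sd1 = (np.sqrt(2) / 2) * np.std(diff)                        # Eq. (7)
    sd2 = np.sqrt(2 * np.std(p) ** 2 - 0.5 * np.std(diff) ** 2)  # Eq. (8)
    return sd1, sd2

band_signal = np.random.randn(60 * 128)     # placeholder band-filtered channel
sd1, sd2 = poincare_sd(band_signal)
# 2 features * 3 bands = 6 per channel, 84 across 14 channels
```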

The Welch method is a modified segmentation approach used to estimate the average periodogram, and it has been used in papers3,23,36. It is applied to a time series and, for spectral density estimation, it is concerned with reducing the variance of the results. The power spectral density (PSD) tells us in which frequency ranges the variations are strong, which can be very helpful for further study. The Welch estimate of the PSD is described by the following equations (9), (10) of the power spectra.

$$P(f) = \frac{1}{M U}\left|\sum_{n=0}^{M-1} x_{i}(n)\, w(n)\, e^{-j 2 \pi f n}\right|^{2},$$

(9)

$$P_{\text{welch}}(f) = \frac{1}{L} \sum_{i=0}^{L-1} P_{i}(f).$$

(10)

Here, Eq. (9) defines the periodogram of a single windowed segment, and Eq. (10) gives the Welch power spectrum as the average of the periodograms over all segments. We implemented the Welch method to obtain the PSD of the signal, from which the mean power of each band was extracted. As a result, we get 3 features for each channel across 3 bands, for a total of 42 features distributed across 14 channels.
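A hedged sketch of this step with scipy.signal.welch is given below; the segment length (nperseg) and the placeholder signal are assumptions.

```python
# Hedged sketch of the Welch PSD feature: the mean power of each band.
import numpy as np
from scipy.signal import welch

FS = 128
signal = np.random.randn(60 * FS)            # placeholder 4-30 Hz EEG channel
freqs, psd = welch(signal, fs=FS, nperseg=256)

def mean_band_power(freqs, psd, low, high):
    mask = (freqs >= low) & (freqs < high)
    return psd[mask].mean()

powers = [mean_band_power(freqs, psd, *band)
          for band in [(4, 8), (8, 13), (13, 20)]]
# 1 feature * 3 bands = 3 per channel, 42 across 14 channels
```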

A Convolutional Neural Network (CNN) is primarily designed to process images, so the EEG time series is first converted into a time-frequency representation using the Short-Time Fourier Transform (STFT). The CNN then extracts the required information from the input images using multiple convolution and pooling layers and classifies the images using fully connected layers. We computed the STFT of the filtered signals, which range between 4 and 30 Hz, and transformed them into RGB images. Some of the generated images are shown in Fig. 4.

EEG signal spectrograms using STFT with classification (a) high arousal, high valence, and low dominance, (b) low arousal, high valence, and high dominance, (c) high arousal, low valence, and low dominance.

Wavelet transforms and Fourier transforms are commonly used to convert time-series EEG signals into picture representations. To preserve the integrity of the original data, however, the EEG conversion should be done in the time-frequency domain, and the STFT is therefore a suitable method for retaining the most complete characteristics of the EEG signals. We used the STFT in our second, 2DCNN-XGBoost fusion-based process: the spectrograms were extracted from the signals using the STFT given in Eq. (11):

$$Z_{n}\!\left(e^{j \hat{\omega}}\right) = e^{-j\hat{\omega} n}\left[\left(w(n)\, e^{j\hat{\omega} n}\right) * x(n)\right],$$

(11)

where $e^{-j\hat{\omega} n}$ modulates the output of the complex band-pass filter applied to the signal. From this equation we calculated the STFT of the filtered signals.
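In practice the spectrogram images can be produced with a standard STFT routine; the sketch below uses scipy.signal.stft and matplotlib, with the window length, colormap, and output size chosen for illustration rather than taken from the paper.

```python
# Hedged sketch: turning a filtered EEG segment into an RGB spectrogram image.
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import stft

FS = 128
signal = np.random.randn(60 * FS)            # placeholder 4-30 Hz EEG segment
f, t, Z = stft(signal, fs=FS, nperseg=128)   # short-time Fourier transform

fig, ax = plt.subplots(figsize=(2.24, 2.24), dpi=100)   # e.g. a 224x224 px image
ax.pcolormesh(t, f, np.abs(Z), shading="gouraud")       # magnitude spectrogram
ax.axis("off")
fig.savefig("spectrogram.png", bbox_inches="tight", pad_inches=0)
plt.close(fig)
```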

For our feature fusion-based approach, since pre-trial signals are available, we used 4 s of the pre-trial signal as the baseline signal, which gives 512 samples at the 128 Hz sampling rate. The same features extracted for the stimuli were then also extracted from the baseline signals, and the stimuli features were divided by the baseline features so that only the changes induced by the stimulus test remain, as is also done in3.

After extracting all the features and calculating the ratio between stimuli and baseline features, we added the self-assessment ratings of arousal, valence, and dominance. The data set for the feature fusion-based approach then has 414 data points with 630 features each. We scaled the data using MinMax scaling to remove the large variation in our data set: the MinMax estimator scales and translates each feature individually so that it lies between 0 and 1, within the defined range.

The formula for MinMax scale is (12),

$$X_{new} = \frac{X_{i}-\operatorname{Min}(X)}{\operatorname{Max}(X)-\operatorname{Min}(X)}.$$

(12)

Various feature selection techniques are used by researchers to discard features that are not needed and keep only the important ones that play a major role in prediction. In our paper we used two feature elimination methods: Recursive Feature Elimination (i.e., Fig. 5) and the Chi-square test (i.e., Fig. 6).

Procedure of recursive feature elimination (RFE).

Procedure of feature selection using Chi-square.

RFE (i.e., Fig. 5) is a wrapper-type feature selection technique for searching a large span of features. The term recursive refers to the way the method loops backward over the feature set: each predictor receives an importance score, and the least important predictor is eliminated at each pass. Cross-validation is additionally used to find the optimal number of features, ranking the various feature subsets and picking the best selection of features by score. Different subsets of features are thus combined with the target attribute to train different models, and these models are then screened to find the one with maximum accuracy and its corresponding features. In short, we remove a feature if the accuracy stays equal or improves, and restore it if the accuracy drops after elimination. Here we used a step size of 1 to eliminate one feature at a time at each level, which helps remove the worst features early and keep the best ones in order to improve the overall accuracy of the model.
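A minimal sketch of cross-validated RFE with scikit-learn is shown below; the random-forest estimator, fold count, and placeholder data are our assumptions, not the authors' exact setup.

```python
# Hedged sketch of recursive feature elimination with cross-validation on a
# 630-feature data set, eliminating one feature per iteration (step=1).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV

X = np.random.rand(414, 630)                 # placeholder scaled feature matrix
y = np.random.randint(0, 2, size=414)        # placeholder high/low labels

selector = RFECV(
    estimator=RandomForestClassifier(n_estimators=100, random_state=0),
    step=1,                                  # drop one feature at a time
    cv=5,
    scoring="accuracy",
)
selector.fit(X, y)
X_selected = selector.transform(X)           # best-scoring feature subset
```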

The Chi-square test (i.e., Fig. 6) is a filter method that scores features by comparing observed data with expected data on nominal, categorized variables, indicating whether a feature has a significant effect on the target. In this method the expected frequencies are computed from the observed values with respect to a base point and then compared with the observed frequencies.

The Chi-square value is computed by (13):

$$\chi^{2} = \sum_{i=1}^{m} \sum_{j=1}^{k} \frac{\left( A_{ij}-\frac{R_{i} C_{j}}{N}\right)^{2}}{\frac{R_{i} C_{j}}{N}},$$

(13)

where m is the number of intervals, k is the number of classes, N is the total number of patterns, $R_i$ is the number of patterns in the i-th interval, $C_j$ is the number of patterns in the j-th class, and $A_{ij}$ is the number of patterns in the i-th interval that belong to the j-th class.
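For comparison, a minimal chi-square feature-scoring sketch with scikit-learn is given below; the number of retained features k and the placeholder data are illustrative assumptions.

```python
# Hedged sketch of chi-square feature scoring; chi2 requires non-negative
# inputs, which MinMax-scaled features satisfy.
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2

X = np.random.rand(414, 630)                 # placeholder MinMax-scaled features
y = np.random.randint(0, 2, size=414)        # placeholder high/low labels

selector = SelectKBest(score_func=chi2, k=100)   # k is an illustrative choice
X_selected = selector.fit_transform(X, y)
```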

After applying RFE and Chi-square and comparing the achieved accuracies, we observed that Chi-square does not incorporate a machine learning (ML) model, while RFE trains an ML model to decide whether a feature is relevant. Moreover, in our research the Chi-square method failed to choose the subsets of features that provide better results, whereas, because of its exhaustive nature, RFE mostly delivered the best feature subsets. We therefore chose RFE over Chi-square for feature elimination.

In research3 on this data set, the authors calculated the mean and standard deviation of the self-assessment ratings and then divided each dimension into two classes, high or low, with the boundary placed at the midpoint of the 0-5 scale, i.e., 2.5. We adjusted this boundary for our feature fusion-based approach based on our observations. We also calculated the mean and standard deviation of the self-assessment ratings, shown in Table 3, to separate each dimension of emotion into two classes, high (1) and low (0), representing the two emotional categories for each dimension.

Arousal: for our 2DCNN-XGBoost fusion-based approach, ratings > 2.5 are assigned to the excited/alert class (1) and ratings < 2.5 to the uninterested/bored class (0). Of the 5796 data points, 4200 fell in the excited/alert class and 1596 in the uninterested/bored class. For the feature fusion-based approach, we focused on the average ratings for excitement, which correspond to stimuli 5 and 16, with 3.70 ± 0.70 and 3.35 ± 1.07 respectively; for calmness, we considered stimuli 1 and 11, with average ratings of 2.26 ± 0.75 and 1.96 ± 0.82 respectively. Therefore, ratings > 2 are assigned to the excited/alert class and ratings < 2 to the uninterested/bored class. Of the 414 data points, 393 fell in the excited/alert class and 21 in the uninterested/bored class. The parallel coordinate plot for arousal in Fig. 8a shows the impact of different features on the arousal level.

Valence: for our 2DCNN-XGBoost fusion-based approach, ratings > 2.5 are assigned to the happy/elated class and ratings < 2.5 to the unpleasant/stressed class. Of the 5796 data points, 2254 fell in the unpleasant/stressed class and 3542 in the happy/elated class; in the new data set, unpleasant/stressed is stored as 0 and happy/elated as 1. For the feature fusion-based approach, we first concentrated on the average happiness ratings, which correspond to stimuli 7 and 13, with 4.52 ± 0.59 and 4.39 ± 0.66 respectively. Additionally, stimuli (4, 15) and (6, 10) for fear and disgust were considered, with average ratings of 2.04 ± 1.02, 2.48 ± 0.85, 2.70 ± 1.55, and 2.17 ± 1.15 respectively. Hence, ratings > 4 are assigned to the happy/elated class and ratings < 4 to the unpleasant/stressed class. Of the 414 data points, 359 fell in the unpleasant/stressed class and 55 in the happy/elated class; again, unpleasant/stressed is stored as 0 and happy/elated as 1. The parallel coordinate plot for valence in Fig. 8b shows the impact of different features on the valence level.

Dominance: for our 2DCNN-XGBoost fusion-based approach, the same procedure is followed with low and high classes. Here, ratings > 2.5 are assigned to the helpless/without-control class and ratings < 2.5 to the empowered class. Of the 5796 data points, 1330 fell in the helpless/without-control class and 4466 in the empowered class; in the new data set, helpless/without control is stored as 0 and empowered as 1. For the feature fusion-based approach, we targeted stimuli 4, 6, and 8, whose target emotions are fear, disgust, and anger, with mean ratings of 4.13 ± 0.87, 4.04 ± 0.98, and 4.35 ± 0.65 respectively. Therefore, ratings > 4 are assigned to the helpless/without-control class and the rest to the empowered class. Of the 414 data points, 65 fell in the helpless/without-control class and 349 in the empowered class; again, helpless/without control is stored as 0 and empowered as 1. The parallel coordinate plot for dominance in Fig. 8c shows the impact of different features on the dominance level.

The overall class distribution for arousal, valence and dominance is shown in the Fig. 7.

Overall class distribution after conversion to a two-class rating score for arousal, valence and dominance.

Impact factor of features on (a) arousal, (b) valence and (c) dominance using parallel co-ordinate plot.

A Convolutional Neural Network (CNN) is a type of deep neural network used to analyze visual imagery in deep learning. Figure 9 represents the overall two-dimensional Convolutional Neural Network model used in our proposed method (i.e., Fig. 1), which forms our 2DCNN-XGBoost fusion approach. Before using this CNN architecture, we generated spectrum images by filtering the frequency band containing the significant signals between 4 and 30 Hz, computing the Short-Time Fourier Transform of the EEG signals, and converting them to spectrogram images, from which features are then extracted with the 2D Convolutional Neural Network. We train the model with 2D convolutional layers on the obtained spectrogram images and then retrieve the trained features from the training layer with the help of an additional dense layer. We implemented a test bed to evaluate the performance of our proposed method. The proposed model is trained using the Convolutional Neural Network (CNN) described below.

The architecture of the implemented CNN model.

The first layer usually extracts basic features such as horizontal and diagonal edges. This information is passed on to the next layer, which detects more complicated characteristics such as corners and combined edges. As we progress deeper into the network, it becomes capable of recognizing ever more complex features such as objects, faces, and so on. On the final convolution layer, the classification layer generates a series of confidence scores (numbers between 0 and 1) indicating how likely the image is to belong to each class. In our proposed method, we used three Conv2D layers and identified the classes.

The pooling layer is in charge of shrinking the spatial size of the convolved features. By lowering the size, the computing power required to process the data is reduced. Pooling can be divided into two types, average pooling and max pooling; we used max pooling because it gave better results than average pooling. Max pooling takes the maximum pixel value from the region of the image covered by the kernel; it suppresses noisy activations and performs de-noising along with dimensionality reduction. In general, any pooling function can be represented by the following formula (14):

$$q_{j}^{(l+1)} = \operatorname{Pool}\left(q_{1}^{(l)}, \ldots, q_{i}^{(l)}, \ldots, q_{n}^{(l)}\right), \quad q_{i} \in R_{j}^{(l)},$$

(14)

where $R_{j}^{(l)}$ is the j-th pooled region at layer l and Pool() is the pooling function applied over that region.

We added a dropout layer after the pooling layer to reduce overfitting. As the dropout rate is tuned downward, the accuracy continuously improves while the loss decreases. Dropout randomly picks some of the max-pooling outputs and ignores them completely, so they are not transferred to the following layer.

After a set of 2D convolutions, a flatten operation is always required. Flattening turns the data into a one-dimensional array for further processing: the output of the convolutional layers is flattened into a single long feature vector, which is then connected to the final classification stage.

A dense layer gives the neural network a fully connected layer: all of the outputs of the preceding layer are fed to each of its neurons, and each neuron delivers one output to the following layer.

In our proposed method, this CNN architecture employs diverse kernels in the convolution layers to extract high-level features, resulting in different feature maps. At the end of the CNN model there is a fully connected layer whose output generates the predicted class labels of the emotions. According to our proposed method, we added a dense layer with 630 units after the training layer in order to extract that number of features.
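The sketch below gives one plausible Keras realization of such a network: three Conv2D blocks with max pooling, dropout, a flatten step, and a 630-unit dense feature layer feeding the classification output. The exact filter counts, input size, and training settings are assumptions and not the authors' reported configuration.

```python
# Hedged sketch of a 2D CNN of the kind described above; layer sizes and the
# input shape are illustrative assumptions.
from tensorflow import keras
from tensorflow.keras import layers

num_classes = 2   # high/low for one emotion dimension

model = keras.Sequential([
    layers.Input(shape=(224, 224, 3)),            # RGB spectrogram images
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(2),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(2),
    layers.Conv2D(128, 3, activation="relu"),
    layers.MaxPooling2D(2),
    layers.Dropout(0.25),
    layers.Flatten(),
    layers.Dense(630, activation="relu", name="feature_layer"),
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# After training, the 630-dimensional features can be read from "feature_layer":
feature_extractor = keras.Model(model.input,
                                model.get_layer("feature_layer").output)
```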

Extreme Gradient Boosting (XGBoost) is a machine learning algorithm that uses a supervised learning strategy to accurately predict a target variable by combining the predictions of several weaker models. It is a common data mining tool with good speed and performance; the XGBoost model computes about 10 times faster than a Random Forest model. The XGBoost model is built using an additive tree method, which adds a new tree at each step to complement the trees that have already been built; as additional trees are added, the accuracy generally improves. In our proposed model, we applied XGBoost after the CNN: we extracted the features from the CNN's trained layer and then used Extreme Gradient Boosting on those features to classify each dimension of emotion. The following Eqs. (15) and (16) are used in Extreme Gradient Boosting.

$$f(m) \approx f(k)+f^{\prime}(k)(m-k)+\frac{1}{2} f^{\prime\prime}(k)(m-k)^{2},$$

(15)

$$\mathcal{L}^{(t)} \simeq \sum_{i=1}^{n}\left[ l\left( q_{i}, \hat{q}_{i}^{(t-1)}\right) + r_{i} f_{t}\left( m_{i}\right) + \frac{1}{2} s_{i} f_{t}^{2}\left( m_{i}\right) \right] + \Omega\left( f_{t}\right) + C,$$

(16)

where C is a constant, and $r_i$ and $s_i$ are defined as follows:

$$r_{i} = \partial_{\hat{z}_{i}^{(b-1)}}\, l\left( z_{i}, \hat{z}_{i}^{(b-1)}\right),$$

(17)

$$s_{i} = \partial^{2}_{\hat{z}_{i}^{(b-1)}}\, l\left( z_{i}, \hat{z}_{i}^{(b-1)}\right).$$

(18)

After removing all the constants, the specific objective at step b becomes,

$$\sum_{i=1}^{n}\left[ r_{i} f_{t}\left( m_{i}\right) +\frac{1}{2} s_{i} f_{t}^{2}\left( m_{i}\right) \right] +\Omega\left( f_{t}\right).$$

(19)
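To make this final stage concrete, the sketch below fits an XGBoost classifier on CNN-extracted 630-dimensional features for one binary emotion dimension; the hyperparameters and placeholder data are illustrative assumptions, not the authors' tuned values.

```python
# Hedged sketch of the classification stage: XGBoost on CNN-extracted features.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split

X = np.random.rand(5796, 630)                # placeholder CNN-extracted features
y = np.random.randint(0, 2, size=5796)       # placeholder high/low labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

clf = XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1,
                    eval_metric="logloss")
clf.fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))
```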

Go here to see the original:

CNN-XGBoost fusion-based affective state recognition using EEG spectrogram image analysis | Scientific Reports - Nature.com


Gulf region flips bullish on crypto mining, but can it be green? – Al-Monitor

Crypto mining is an electricity-intensive process that requires running computer servers to solve a complex set of algorithms. In other words, mining crypto converts electricity into digital coins that are then sold at market value. For that reason, access to cheap power is a trump card, and the energy-rich Gulf region is a suitable candidate: it is home to some of the world's largest fossil fuel resources and boasts the world's lowest solar tariffs.

After a decade of hesitation, Gulf states have started to warm up to cryptocurrencies. The United Arab Emirates (UAE) and Bahrain, in particular, are looking to attract centralized crypto exchanges they processed more than $14 trillion worth of crypto assets in 2021 and their interest in mining crypto is rising. "There is a push from the UAE government to make greater use of power generation capacities," said Abdulla Al Ameri, an Emirati crypto mining entrepreneur who has been mining for about five years, including in Kazakhstan and Russia. "I expect the UAE crypto mining market to take off in the next two years," he told Al-Monitor. The question is, how green will this be?

Simultaneously, Gulf states have warmed up to renewables, solar in particular, opening the doors for solar-powered crypto mining. "We are working on a hybrid crypto farm in Abu Dhabi powered by solar at day, grid at night," CryptoMiners CEO Nasser El Agha told Al-Monitor. The Dubai-headquartered crypto mining service provider cooperates with an undisclosed British company to launch the Gulf's first company-scale solar-crypto farm by December 2022. It is a proof of concept intended to be ultimately taken to the market, specifically to agricultural farms wishing to generate extra income through crypto mining.

Original post:

Gulf region flips bullish on crypto mining, but can it be green? - Al-Monitor
