Category Archives: Data Mining
HotSpot Therapeutics to Present Preclinical Data from CBL-B Program at 2023 Society for Immunotherapy of Cancer Annual Meeting – Yahoo Finance
BOSTON, Sept. 27, 2023 /PRNewswire/ -- HotSpot Therapeutics, Inc., a biotechnology company pioneering the discovery and development of oral, small molecule allosteric therapies targeting regulatory sites on proteins referred to as "natural hotspots," today announced it will present additional preclinical data from the Company's CBL-B program in a poster presentation at the 2023 Society for Immunotherapy of Cancer (SITC) Annual Meeting, taking place November 1-5, 2023, in San Diego, CA.
(PRNewsfoto/HotSpot Therapeutics)
Presentation details are as follows:
Title: Exploring Proximal Biomarkers of CBL-B Inhibition in Human Peripheral Blood Mononuclear Cells
Session Date and Time: Fri., Nov. 3, 9:00 AM-7:00 PM PT
Location: Exhibit Halls A and B1, San Diego Convention Center
Abstract Number: 55
About HST-1011
HST-1011 is an investigational, orally bioavailable, selective, small molecule allosteric inhibitor of CBL-B, an E3 ubiquitin protein ligase critically involved in immune cell response. Because CBL-B functions as a master regulator of effector cell (T cell and natural killer cell) immunity, its inactivation removes its endogenous negative regulatory functions to substantially enhance anti-tumor immunity. Preclinical data have demonstrated HST-1011's ability to bind to and inhibit a natural hotspot on CBL-B, yielding the activation and propagation of a targeted anti-tumor immune response. Enabled by HotSpot's proprietary Smart Allostery platform, HST-1011 is designed with tight binding, low nanomolar potency, a slow dissociation rate from the target to enable sustained pharmacology, and greater selectivity for CBL-B relative to C-CBL.
About HotSpot Therapeutics, Inc.
HotSpot Therapeutics, Inc. is pioneering a new class of allosteric drugs that target certain naturally occurring pockets on proteins called "natural hotspots." These pockets are decisive in controlling a protein's cellular function and have significant potential for new drug discovery by enabling the systematic design of potent and selective small molecules with novel pharmacology. The Company's proprietary Smart Allostery platform combines computational approaches and AI-driven data mining of large and diverse data sets to uncover hotspots with tailored pharmacology toolkits and bespoke chemistry to drive the rapid discovery of novel hotspot-targeted small molecules. Leveraging this approach, HotSpot is building a broad pipeline of novel allosteric therapies for the treatment of cancer and autoimmune diseases. To learn more, visit www.hotspotthera.com.
Investor & Media Contact: Natalie Wildenradt, nwildenradt@hotspotthera.com
View original content to download multimedia: https://www.prnewswire.com/news-releases/hotspot-therapeutics-to-present-preclinical-data-from-cbl-b-program-at-2023-society-for-immunotherapy-of-cancer-annual-meeting-301939983.html
SOURCE HotSpot Therapeutics
More here:
‘A New Era’: 10Pearls Chats About the Data Science Revolution – Built In Chicago
Steve Jobs saw it coming.
"I think the biggest innovations of the 21st century will be at the intersection of biology and technology," he said. "A new era is beginning."
Data scientists are catalyzing the change by interpreting expansive data sets from wearables and other devices to gain clear understanding and actionable insights.
In healthcare, this emerging field and its use of artificial intelligence can do more than improve outcomes for patients. It also has the power to speed drug development, lower healthcare costs and minimize errors.
Data science is playing a critical role in the rapid rise of healthtech, and those with a strong background in IT, data mining, health information management, statistics or programming have the opportunity to be a part of the data science revolution.
Built In Chicago recently sat down with Jared Bowen, a manager of technology and data at 10Pearls who is standing at the fore of this emerging era. He talks about the pros of working in the field, how to keep pace with changing legislation and the future of healthcare.
Jared Bowen
Manager, Technology and Data
10Pearls is a consulting firm that helps clients digitize, innovate and execute.
What are some of the unique challenges to working in data science in healthtech?
Be nimble and creative with problem-solving. Even though healthcare has a reputation for moving slowly with technology, the industry is constantly changing based on legislation and other outside factors. Data scientists tend to have more autonomy to solve a problem. They aren't as reliant as others on enterprise-wide technology decisions, so they can often use their favorite tools and a full set of historical data to look ahead and solve a new problem.
What are some of the most rewarding aspects of working in data science for a healthtech company?
You know you are making a difference. You get to see your impact, whether it's helping a patient find a specialist quickly, providing a nurse with a user experience to navigate through their routines faster or guiding a patient to a lower-cost drug alternative efficiently.
What misconceptions do you think industry outsiders might have about working in healthtech?
I only started to go deep in healthcare over the last two years. Prior to that, I was industry agnostic and focused more on being a technical expert. A strong technical foundation on how to solve problems, identify patterns and communicate complex ideas will carry you forward in healthcare.
Read more:
'A New Era': 10Pearls Chats About the Data Science Revolution - Built In Chicago
Establishment and validation of a prognosis nomogram for MIMIC-III … – BMC Gastroenterology
A total of 620 patients were enrolled in the study. According to the 7:3 random allocation, the training and validation cohorts consisted of 434 and 186 patients, respectively. All baseline characteristics of the training and validation cohorts are shown in Table 1. The median age of patients was 54.72 years in the training cohort and 54.79 years in the validation cohort. Most patients in the training and validation cohorts were male (63.8% and 65.6%, respectively). The 90-day survival rate was 53.69% in the training cohort and 56.45% in the validation cohort. Baseline information on survivors and deceased patients in the training and validation cohorts is shown in Tables 2 and 3, respectively. Table 2 lists the factors that differed significantly (p < 0.05) between survivors and deceased patients in the training cohort: age, MAP, mean respiratory rate, mean SpO2, mean temperature, cardiac arrhythmias, lactate, albumin, anion gap, total bilirubin, chloride, creatinine, magnesium, potassium, sodium, urea nitrogen, INR, PT, PTT, RDW, WBC, albumin use, furosemide use, PAD, SOFA, MELD, and urine output. Table 3 lists the factors that differed significantly (p < 0.05) between survivors and deceased patients in the validation cohort: MAP, mean SpO2, mean temperature, cardiac arrhythmias, congestive heart failure, ALT, albumin, AST, total bilirubin, creatinine, magnesium, potassium, sodium, urea nitrogen, INR, PT, PTT, RDW, WBC, albumin use, PAD, SOFA, MELD, and urine output.
Univariate Cox regression analysis was performed on all baseline factors initially included in the training cohort, and the results showed 28 potential predictors of 90-day survival, namely age, mean heart rate, MAP, mean temperature, mean SpO2, mean respiratory rate, cardiac arrhythmias, SOFA, MELD, lactate, urine output, albumin, total bilirubin, urea nitrogen, sodium, potassium, magnesium, chloride, INR, RDW, WBC, ALP, PT, PTT, albumin use, PPI, PAD and furosemide. These candidate factors were entered into a multivariate Cox regression analysis, and eight risk factors were identified: age (hazard ratio [HR] = 1.022, 95% confidence interval [CI] = 1.006–1.037, P = 0.006), mean heart rate (HR = 1.013, 95% CI = 1.003–1.023, P = 0.010), SOFA (HR = 1.057, 95% CI = 0.998–1.119, P = 0.059), RDW (HR = 1.056, 95% CI = 0.994–1.122, P = 0.078), albumin use (HR = 1.428, 95% CI = 1.013–2.011, P = 0.042), MAP (HR = 0.982, 95% CI = 0.967–0.998, P = 0.031), mean temperature (HR = 0.731, 95% CI = 0.554–0.996, P = 0.027) and PPI use (HR = 0.702, 95% CI = 0.500–0.985, P = 0.041). The results of the Cox regression analysis are shown in Table 4. The SOFA score and RDW were considered clinically significant for the prognosis of patients with cirrhosis and HE based on previous literature reports [22, 23] and clinical experience, so they were also included in the final prediction model.
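For readers who want to reproduce this kind of two-stage screening, a minimal sketch using the Python lifelines library is shown below. The file name, column names, and the univariate significance cut-off are illustrative assumptions, not the study's actual code or variables.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical extract of the training cohort: one row per patient with follow-up time,
# a 90-day death indicator, and the candidate baseline predictors.
df = pd.read_csv("training_cohort.csv")
candidates = ["age", "mean_heart_rate", "MAP", "mean_temperature", "SOFA", "RDW",
              "albumin_use", "PPI_use"]

# Step 1: univariate Cox screening, keeping predictors with p < 0.05
selected = []
for var in candidates:
    uni = CoxPHFitter().fit(df[[var, "time", "death_90d"]],
                            duration_col="time", event_col="death_90d")
    if uni.summary.loc[var, "p"] < 0.05:
        selected.append(var)

# Step 2: multivariate Cox model on the retained predictors; exp(coef) is the hazard ratio
cph = CoxPHFitter().fit(df[selected + ["time", "death_90d"]],
                        duration_col="time", event_col="death_90d")
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%", "p"]])
```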
Based on the multivariate Cox regression results, a nomogram for the 90-day survival of patients with liver cirrhosis and HE was constructed, as shown in Fig. 2. The nomogram indicated that older age, a higher SOFA score, higher RDW, higher mean heart rate, lower MAP, lower mean temperature, and the use of albumin were risk factors for poor prognosis, while the use of PPI was a protective factor.
Nomogram for predicting the 90-day probability of survival from liver cirrhosis with hepatic encephalopathy. MAP, Mean arterial pressure; SOFA, Sequential organ failure assessment; RDW, Red cell distribution width; PPI.use, Proton pump inhibitors use
The new nomogram was tested against the proportional hazards assumption; the P values of each factor and the overall P value were greater than 0.05, satisfying the proportional hazards requirement. The C-index was then used to evaluate the discrimination of the nomogram and was higher for the nomogram than for the single SOFA model in both the training cohort (0.704 versus 0.615) and the validation cohort (0.695 versus 0.638). In addition, the AUC value of the new nomogram was greater than that of the single SOFA model in both the training and validation cohorts. The ROC results are shown in Fig. 3.
ROC curves for the nomogram and the SOFA model. a: Result of the training cohort; b: Result of the validation cohort
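Continuing the sketch above, the C-index comparison between the full model and a SOFA-only model could be computed roughly as follows (again with assumed variable names).

```python
from lifelines.utils import concordance_index

# concordance_index expects higher scores to mean longer survival, so the partial
# hazards (higher = riskier) are negated.
c_nomogram = concordance_index(df["time"],
                               -cph.predict_partial_hazard(df),
                               df["death_90d"])

cph_sofa = CoxPHFitter().fit(df[["SOFA", "time", "death_90d"]],
                             duration_col="time", event_col="death_90d")
c_sofa = concordance_index(df["time"],
                           -cph_sofa.predict_partial_hazard(df),
                           df["death_90d"])
print(f"C-index: nomogram {c_nomogram:.3f} vs SOFA alone {c_sofa:.3f}")
```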
The NRI value for the 90-day nomogram was 0.560 (95% CI = 0.447–0.792) in the training cohort and 0.364 (95% CI = 0.054–0.756) in the validation cohort. In addition, the 90-day IDI value was 0.119 (P < 0.001) for the training cohort and 0.083 (P < 0.001) for the validation cohort. The NRI and IDI values obtained in this study were greater than zero, indicating that the overall performance of the nomogram was better than that of the SOFA model alone.
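The continuous (category-free) NRI and IDI follow directly from the predicted 90-day risks of the two models. The short function below is only a sketch of those definitions, with hypothetical inputs; the bootstrap confidence intervals reported above are omitted.

```python
import numpy as np

def nri_idi(p_new, p_old, event):
    """Category-free NRI and IDI from the predicted event probabilities of two models."""
    event = np.asarray(event, dtype=bool)
    p_new, p_old = np.asarray(p_new), np.asarray(p_old)
    up, down = p_new > p_old, p_new < p_old
    nri = (up[event].mean() - down[event].mean()) + (down[~event].mean() - up[~event].mean())
    idi = (p_new[event].mean() - p_old[event].mean()) - (p_new[~event].mean() - p_old[~event].mean())
    return nri, idi

# Example with made-up risk predictions for the nomogram (p_nomogram) and SOFA model (p_sofa):
# nri, idi = nri_idi(p_nomogram, p_sofa, df["death_90d"])
```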
Figure 4 shows the calibration curves of the training and validation cohorts for the nomogram. The calibration curve of the nomogram's 90-day predicted probability lay very close to the standard 45-degree diagonal line, and the four plotted points were evenly distributed. These results show that the new nomogram had excellent calibration.
Calibration curves. Calibration curves for the 90-day probability of survival from liver cirrhosis with hepatic encephalopathy depict calibration of nomogram in terms of the agreement between the predicted probabilities and observed outcomes of the training cohort (a) and validation cohort (b)
The DCA curves of the nomogram and the single SOFA model are shown in Fig. 5. The results demonstrated that the 90-day DCA curve of the nomogram yielded a net benefit in both the training and validation cohorts, and the nomogram's DCA curves were consistently higher than those of the single SOFA model.
Decision curve for the new nomogram for 90-day prediction of survival probability in the training cohort (a) and validation cohort (b)
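Decision-curve analysis reduces to computing the net benefit at each threshold probability. A bare-bones sketch is given below; the predicted-risk arrays and the threshold grid are assumptions for illustration.

```python
import numpy as np

def net_benefit(prob, event, threshold):
    """Net benefit = TP/N - FP/N * pt/(1-pt) at a given threshold probability pt."""
    prob, event = np.asarray(prob), np.asarray(event)
    treat = prob >= threshold
    tp = np.sum(treat & (event == 1))
    fp = np.sum(treat & (event == 0))
    n = len(event)
    return tp / n - fp / n * threshold / (1 - threshold)

thresholds = np.linspace(0.01, 0.60, 60)
# nb_nomogram = [net_benefit(p_nomogram, df["death_90d"], t) for t in thresholds]
# nb_sofa = [net_benefit(p_sofa, df["death_90d"], t) for t in thresholds]
```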
The rest is here:
Establishment and validation of a prognosis nomogram for MIMIC-III ... - BMC Gastroenterology
Argentinian oil company to start mining crypto with gas power leftovers – Cointelegraph
A Buenos Aires-headquartered oil company, Tecpetrol, has decided to convert excess gas into energy for cryptocurrency mining.
As reported by local media on Sept. 24, Tecpetrol will launch its first gas-powered crypto mining facility in the Los Toldos II Este region, located north of Vaca Muerta in Argentine Patagonia. The company claims its approach would allow it to advance its crude oil production project and optimize gas utilization, thereby reducing waste.
Related: Stronghold requests permission to burn tires for crypto mining in Pennsylvania
The company is planning to produce at least 35,000 barrels of oil daily at the facility, but, given the absence of infrastructure to consume the gas released in the process, it decided to explore crypto mining as a strategic way to consume it, according to Tecpetrol CEO Ricardo Markous.
Tecpetrol hopes to commence the crypto mining between late October and early November. The primary goals are to reduce environmental impact by avoiding gas emissions and to generate some additional profits. The company has already signed contracts and is collaborating with an unnamed firm that has experience implementing similar strategies in the United States.
A recent paper published by the Institute of Risk Management states that Bitcoin (BTC) mining can reduce global emissions by up to 8% by 2030 by converting the world's wasted methane emissions into less harmful emissions. The report cited a theoretical case showing that using captured methane to power Bitcoin mining operations can reduce the amount of methane vented into the atmosphere.
Magazine: Are DAOs overhyped and unworkable? Lessons from the front lines
Additional reporting: Ray Jimenez Bravo, Mariuscar Goyo
See the article here:
Argentinian oil company to start mining crypto with gas power leftovers - Cointelegraph
Top 5 Courses to Enhance Your Skills as a Computer Science … – Analytics Insight
Top 5 courses to enhance computer science skills for any computer science engineer
You are at the cutting edge of technical progress as a Computer Science Engineer. Your computer science knowledge provides the foundation for comprehending complicated systems and addressing complex challenges. With the emergence of Machine Learning, you now have the opportunity to explore the world of artificial intelligence by developing algorithms that can learn from and make judgments based on data. Cloud computing provides you with the capabilities to create, deploy, and scale applications that reach people all over the world. And, as the importance of cyber security grows, you will play a critical part in defending these systems and data from threats, ensuring the digital world remains safe and secure.
Staying competitive in computer science requires upskilling. Five cutting-edge courses for computer science professionals range from artificial intelligence and cybersecurity to web development, cloud computing, and data science.
AI and machine learning are rapidly expanding technologies with numerous applications. Artificial intelligence (AI) is the ability of machines to replicate human intelligence, whereas machine learning (ML) is a subset of AI that allows machines to learn without being explicitly programmed. AI and ML are being applied in a range of areas, including healthcare, finance, transportation, manufacturing, and retail. Medical diagnosis, fraud detection, self-driving cars, product suggestions, chatbots, and content moderation are some of the specialized uses of AI and ML. The breadth of AI and machine learning is constantly expanding, and it is likely to have a significant impact on how we live and work in the future.
Cybersecurity is the practice of defending systems, networks, and data from digital threats. Ethical hacking is the activity of testing procedures and networks for vulnerabilities safely and legally. The field of cybersecurity and ethical hacking is vast and ever-changing. Hackers are constantly discovering new techniques to exploit weaknesses, and enterprises must stay ahead of the curve. Penetration testing, social engineering, malware analysis, incident response, security architecture, risk management, and other fields of cybersecurity and ethical hacking are examples.
The process of developing both the front-end and back-end of a web application is known as full-stack web development. This means that full-stack developers are responsible for the application's user interface (UI), user experience (UX), and underlying code. The scope of full-stack web development is broad, encompassing a diverse set of technologies and expertise. HTML, CSS, JavaScript, PHP, MySQL, Node.js, React, Angular, Django, Ruby on Rails, and other technologies are commonly utilized in full-stack web development.
Cloud computing is the supply of computer services over the Internet (the cloud), including servers, storage, databases, networking, software, analytics, and intelligence. DevOps is a collection of methods that integrates software development (Dev) and IT operations (Ops) to shorten the systems development life cycle and ensure high-quality continuous delivery. The breadth of cloud computing and DevOps is wide and expanding all the time. Businesses of all sizes are turning to cloud computing to better their agility, scalability, and cost-efficiency. DevOps is being utilized to increase software delivery speed, reliability, and security. Infrastructure as a Service (IaaS), Platform as a Service (PaaS), Software as a Service (SaaS), Continuous integration and continuous delivery (CI/CD), Containerization, Microservices, Agile development, and Security are some of the specialized aspects of cloud computing and DevOps.
Data science is a field that extracts knowledge and insights from data using scientific methods, procedures, algorithms, and systems. Big data analytics is a subset of data science that deals with massive and complex dataset analysis. The breadth of data science and big data analytics is enormous and expanding all the time. Businesses of all sizes are utilizing these domains to improve decision-making, optimize operations, and invent new goods and services. Data mining, machine learning, natural language processing, predictive analytics, text mining, visual analytics, and social media analytics are some of the specialized disciplines of data science and big data analytics.
View original post here:
Top 5 Courses to Enhance Your Skills as a Computer Science ... - Analytics Insight
Why Palantir (PLTR) Stock Is Trading Up Today By Stock Story – Investing.com
What Happened: Shares of data-mining and analytics company Palantir (NYSE:PLTR) jumped 5.23% in the morning session after the announcement that it secured a $250 million firm-fixed-price contract earmarked for research and development efforts in artificial intelligence (AI) and machine learning (ML).
The Army Contracting Command, based in Aberdeen Proving Ground, Maryland, will oversee the contracting activity, with the contract expected to be completed by September 25, 2026.
Is now the time to buy Palantir? Find out by reading the original article on StockStory.
What is the market telling us: Palantir's shares are very volatile and have had 43 moves greater than 5% over the last year. In that context, today's move indicates that the market considers this news meaningful, but not something that would fundamentally change its perception of the business.
The previous big move we wrote about was 27 days ago, when the company dropped 5.27% on the news that Morgan Stanley analyst Keith Weiss downgraded the stock's rating from Equal-Weight (Hold) to Underweight (Sell) with a $9 price target. The price target implied a potential 45% downside from where shares were traded when the downgrade was announced. This action was driven by concerns surrounding the near-term outlook of Palantir's AI products, and a valuation premium that poses an unfavorable risk-reward. Weiss added that visibility into the new Artificial Intelligence Platform (AIP) remained low, raising worries regarding the government segment's ability to offset a potential decline in revenues, as estimates have baked in a growth acceleration in the second half of the year.
It is worth recalling that, during the Q2'2023 earnings call, management provided some insights into the progress made so far on the new AI platform, stating, "We have no problem monetizing. We make $2.9 million per commercial customer across the world. We will figure out how to monetize it. First, we're teaching the market what it is. We're getting people on Board."
Weiss's downgrade shows that Morgan Stanley thinks there is too much good news priced into the stock with regards to Palantir's AI prospects without enough concrete evidence. This is a major flashpoint in the markets today as investors debate how large the AI opportunity will be, when it will be realized, and who the true winners and beneficiaries are.
Palantir is up 133% since the beginning of the year, but at $14.87 per share it is still trading 25.6% below its 52-week high of $19.99 from July 2023. Investors who bought $1,000 worth of Palantir's shares at the IPO in September 2020 would now be looking at an investment worth $1,570.
More here:
Why Palantir (PLTR) Stock Is Trading Up Today By Stock Story - Investing.com
Unlock potential growth opportunities with the BSN Information … – Premium Times
The BSN Information Management Mastering Programme is a specialised educational curriculum designed to equip professionals with the knowledge and skills needed to effectively manage and utilise information within organisations.
Why should you choose the BSN Information Management Mastering Programme?
Information Management Principles: Gain a solid understanding of the fundamental principles and concepts of information management.
Information Technology Infrastructure: Learn about the components and architecture of information technology infrastructure, including networks, databases, and others.
Data Management and Analytics: Develop skills in managing and analyzing data, including data modelling, database design, data integration and others.
Information Systems Development: Learn the processes and methodologies involved in developing information systems.
Business Intelligence and Decision Support: Explore techniques and tools for business intelligence and decision support, such as data mining, data warehousing and others.
Remember, with dedication, perseverance, and a passion for learning, you have the power to achieve great things in the field of Information Management.
For more information, kindly click on the link below: https://forms.office.com/r/VKkFKG3BXZ
Read the rest here:
Unlock potential growth opportunities with the BSN Information ... - Premium Times
Reproducibility in pharmacometrics applied in a phase III trial of … – Nature.com
Transfer of clinical trial data
Instead of the conventional approach of receiving data for pharmacometric analysis after trial completion, an iterative data transfer and reproducible data handling workflow was developed by consensus between the clinical, data management, and pharmacometric teams who collaborated in this clinical trial. Data QC and review responsibilities were shared between the clinical, data management, and pharmacometric teams. Figure 2 shows participation in the trial from start of enrolment to final visit. The first data transfer took place once the data management procedures had been developed, as early as at 16% enrolment. In total, 41 data transfers occurred, on average every 1.8 weeks. The Screening and enrolment database was locked within three weeks after enrolment was completed. Trial participation showed a slight decrease between the last participant in and the first end-of-study visit (first participant out, April 2021) due to withdrawal and loss to follow-up or death (n=11 and n=2, respectively, in that interval). The full database was locked within 5 working days after trial completion, after which unblinding took place.
Overview of the data management throughout the clinical trial. Number of participants on trial over time is shown in purple solid line, database locks (n=2) are shown in black dashed lines, scheduled data review (n=41) are shown as magenta top rug plot, first reported diagnosed COVID-19 case in South Africa is shown as grey bottom axis mark for reference.
The frequent interim data QC by the clinical, data management, and pharmacometric teams was a time-saving investment. All records were subject to checking after entry into the eCRF, and the combined clinical QC and data management review found that a correction was needed in 20.9% of the records when the eCRF was compared with the paper source document. A total of 201 queries, accounting for 10.7% of total records, were raised by the pharmacometrics team and resolved while the trial was ongoing. The last data check after trial completion resulted in only 4 additional queries, which were resolved in two days, after which the data could be locked. In addition to saving time after study completion, addressing queries while the study is still ongoing was also found to be advantageous because an incorrect measurement (e.g. weight) can still be re-measured and recorded. Pharmacometric analysis (magenta hexagon in Fig. 1) could commence practically immediately after trial completion because of this streamlined review process (other magenta, purple, and black boxes in Fig. 1). Best practices and examples of the data review are described below.
Data review was challenging because of the large size of the database. The full database consisted of 20,457 records. Figure 3 shows the database architecture, including the number of records per master database. The four master databases, Enrolment/screening, Events, Lab results, and Follow-up, contained 24, 13, 12, and 10 datasets in .dat format, respectively (Supplementary Table I). Each dataset came with a metadata file in .stsd format reporting each variable, its possible values, and units. All records were linked through the participant's ID (n=1000) in the integrated database.
Number of eCRFs submitted per master database. Dataset architecture consisted of 4 master databases (Screening/enrolment, Events, Lab results, and Follow-up) for which the number of records is shown.
Most important in the pharmacometric data review was the understanding of the clinical meaning of the data entries. For example, COVID-19 was defined as a symptomatic disease with confirmed SARS-CoV-2 infection. As such, a COVID-19 event with a health status score of 0, or a polymerase chain reaction (PCR) confirmed asymptomatic SARS-CoV-2 infections with a health status of 1 or higher, would result in a query directed to the clinical team on how to interpret these results. The records would subsequently be corrected in the next data transfer for the health status score to reflect the event definition. Another example was post viral syndrome, i.e. long COVID. A record without a preceding COVID-19 event would also result in a query.
The consistency review between the different databases and datasets mostly focussed on the Events master database. The weekly health status score was captured in two different datasets; in the original Events dataset for the first observation(s) and thereafter in the Follow-up dataset. In the integrated datasets, these weekly health status scores were merged and checked for consistency. Where different health status scores for a single week were reported, or where the number of weekly scores did not equal the number of weeks an event was ongoing, a query was opened. Each event had a unique event number, so duplicate event numbers were flagged to the data management team. Consistency between Follow-up and Events master databases was important because participants self-reported COVID-19 events during the follow-up contact, which would result in a record in the Events master database when symptomatic. Consistency of dates between the Lab, Follow-up, and Events master databases was checked to prevent ongoing events after trial completion.
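As an illustration only (the trial's actual variable names and database layout are not reproduced in this excerpt), the kind of automated consistency checks described above can be expressed in a few lines of pandas:

```python
import pandas as pd

events = pd.read_csv("events.csv")      # hypothetical extract of the Events master database
followup = pd.read_csv("followup.csv")  # hypothetical extract of the Follow-up master database

# Duplicate event numbers within a participant are flagged to data management
dup_events = events[events.duplicated(["participant_id", "event_number"], keep=False)]

# COVID-19 was defined as symptomatic confirmed infection, so a health status of 0 raises a query
covid_queries = events[(events["event_term"] == "COVID-19") & (events["health_status"] == 0)]

# Weekly health status scores captured in both databases must agree for the same event-week
merged = events.merge(followup, on=["participant_id", "event_number", "week"],
                      suffixes=("_ev", "_fu"))
score_queries = merged[merged["health_status_ev"] != merged["health_status_fu"]]
```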
Records were checked for missing or not applicable (NA) values. Additionally, dates (negative timepoints, the same record with different dates), MedDRA event descriptions, and spelling were checked. Spelling was a noteworthy issue: COVID-19 was recorded with 63 different spelling alternatives, including COVID-19, COVID 19, COVID, COVID-19 infection, COVID-19 pneumonia, and COVID-19 respiratory tract infections, while post COVID viral syndrome was recorded in 10 different alternatives. The MedDRA term was therefore utilized initially, but unfortunately it also contained two alternative spellings for both. Based on this insight, the MedDRA numerical codes were incorporated into the data processing.
The initial analysis workflow evolved over time as new information and methods arose during the pandemic that were unknown at database setup. Post viral syndrome after COVID-19, also coined long COVID34,35,36, was one example, first reported on trial in August 2020. Discussions on long COVID developed around two points. First, the link between COVID-19 and long COVID needed to be established by assigning those events the same event number. Second, long COVID could very well last longer than the maximum 12 weeks for which the eCRF was equipped, so an additional data field was incorporated to record health status scores after week 12. Measurement of SARS-CoV-2 antibodies was approved by South African regulators in August 2020, and the first results were reported to the site in October 2020. This led to discussions around participants who were SARS-CoV-2 antibody positive at baseline, participants who were SARS-CoV-2 antibody negative after confirmed COVID-19, and how to interpret reversal of seroconversion from positive to negative. Globally, SARS-CoV-2 specific vaccinations were first approved in December 2020 but became available in South Africa only in February 2021. Understandably, health care workers were among the first to be vaccinated with specific COVID-19 vaccines, which needed to be recorded in the database for appropriate censoring in the pharmacometric analyses. Regarding the handling of events ongoing after the final (week 52) study visit, consensus was reached to allow ongoing events after the final visit if the event was an important endpoint of the trial, for example COVID-19 events or respiratory tract infections in general that were symptomatic at the final visit. Acute events would be followed up until resolution of symptoms, while chronic events like post viral syndrome would not.
Interoperability between members of the pharmacometric team was essential to divide the work within the short timelines. The pharmacometric processing script was stored in a private Github repository where multiple coders could work simultaneously. Through Github, changes to parts of the script by team members could be reviewed and incorporated into an updated version, all while tracking these changes and being able to revert to an earlier version in case of debugging. Additionally, the file structure was standardized between pharmacometricians, so only the path to the working directory needed to be changed, relative to which all other files were read or written. The path to the working directory was set automatically at the start of the script based on an if-statement using the system info of the user's machine (Fig. 4). Interoperability was also improved by using clear, transparent, and well-commented code. The Tidyverse packages, including the magrittr pipe operator (%>%), allowed for more readable and interpretable code25,27. Interoperability between the data management and pharmacometric teams was ensured by naming standards for the four master databases.
Interoperability through standardized file structure and automatic extraction of the working directory using the system info. The ifelse() statement can be expanded with nested ifelse() statements for more collaborators.
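The figure shows this pattern in R; purely for illustration, the same idea in Python (with made-up user names and paths, not the trial's actual configuration) would look like:

```python
import getpass
import os

# Map each collaborator's login name to their local working directory (made-up examples)
USER_PATHS = {
    "alice": "/Users/alice/projects/trial-pharmacometrics",
    "bob": "C:/Users/bob/trial-pharmacometrics",
}

# Fall back to the current directory for unknown users; all other paths are relative to this
workdir = USER_PATHS.get(getpass.getuser(), os.getcwd())
os.chdir(workdir)
```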
The pharmacometric team prepared the data reports for the DSMB to review the safety and efficacy of the ongoing trial. Because of the time-sensitive nature of the vaccination trial, biweekly reporting was initially proposed, which was later amended to a lower frequency at the request of the DSMB and the clinical team because of reduced clinical urgency. Two types of reports were prepared. The open report showed the data aggregated and was open to review for the whole clinical trial, while the closed report showed the blinded data per study arm for the closed session of the DSMB. The pharmacometric processing script was developed to automatically generate a report from the integrated database, to prevent repetitive manual report drafting at the suggested frequency. Using this method, a transparent and reproducible workflow was established from the raw eCRF input through to the DSMB report. RMarkdown was used to integrate the R-based processing of the integrated database with the Markdown and LaTeX text compilers to create a report in pdf format in which the numerical, graphical, and tabular elements were automatically updated with each compilation (Fig. 5A).
RMarkdown was used to combine text and R variables in the automatically generated report. (A) In-line calling of R variables to include them in a written sentence, (B) R variable CLOSED was used to switch between open and closed reporting using if-statements for tables and graphs called in R-chunks or (C) called in in-line R calls.
To create the two versions of the report in a consistent manner, an R variable was integrated into the relevant numerical, graphical, and tabular elements where aggregated or per-arm data were reported. This had the advantage of not having to maintain two RMarkdown scripts at the same time, with the risk of inconsistencies and code conflicts that can arise even when working as diligently as possible. As a result, the open and closed reports showed exactly the same data, with the only difference being the presentation of these data. The switch variable (CLOSED) was used in if-statements throughout the report to show figures and tables either aggregated or per arm (Fig. 5B), as well as in R code called in-line in the RMarkdown file (Fig. 5C).
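The same switch idea, stripped down to a Python illustration (the trial's RMarkdown code is not reproduced here): a single flag decides whether a summary is pooled or split by arm, so the open and closed reports cannot drift apart.

```python
import pandas as pd

CLOSED = False  # False -> open (aggregated) report; True -> closed (per-arm) report

def summarize_events(events: pd.DataFrame) -> pd.DataFrame:
    """Count events, either pooled across arms (open) or split by study arm (closed)."""
    group_cols = ["event_term", "arm"] if CLOSED else ["event_term"]
    return events.groupby(group_cols).size().rename("n_events").reset_index()
```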
When the DSMB meeting schedule was set, a corresponding data transfer schedule was set. On average, the DSMB received the compiled and reviewed report within 3 days after the data cut-off date, including the final unblinded report. The DSMB repeatedly expressed their appreciation for these excellent turnaround times.
The pharmacometric processing script was also developed to include the pharmacometric analysis dataset creation. This resulted in a transparent, traceable, and version-controlled workflow from the raw eCRF input data to the analysis dataset in NONMEM format. Moreover, because the same script and integrated database was utilized to that aim, the datasets were consistent with the figures and tables in the DSMB reports.
The reproducible workflow and the resulting confidence in the handling of the data allowed the pharmacometric analysis of the primary and secondary endpoints to be prepared while the trial was still ongoing. Based on interim graphical exploration of the data, modelling strategies were developed per endpoint, including which functions to test. Model scripts were written, tested, and code-reviewed before the data lock. Analysis of the primary endpoint had the highest priority. Because of the reproducible workflow and the preparations made before the data lock, the primary endpoint analysis was completed and reviewed within three days after data lock and unblinding, and shared with the DSMB and the clinical team. Analysis of the secondary endpoints, including a total of 7 time-to-event analyses for COVID-19, RTI, and hospitalization due to all causes in both the intention-to-treat and per-protocol datasets, as well as an exploratory time-to-SARS-CoV-2-specific-vaccination analysis, was completed and reviewed within two weeks after data lock and unblinding, and presented to the DSMB and the clinical team. As we focus here on the reproducible pharmacometrics workflow, the results of these analyses are out of scope and reported separately.
Go here to see the original:
Reproducibility in pharmacometrics applied in a phase III trial of ... - Nature.com
Machine learning-based optimal crop selection system in smart … – Nature.com
The authors divided the proposed model into two stages: phase 1 is deployed for weather prediction, whereas phase 2 is used for identifying the optimal crop. In this work, phase 1 is implemented using a Recurrent Neural Network (RNN), and in phase 2, random forest classification is used for crop selection from the predicted weather and the soil parameters of the region. Because the weather of an area on a given day is also influenced by the weather conditions of the preceding day(s), the RNN was considered the most suitable ML algorithm for weather prediction in this work. In an RNN, the output of one step is fed back as input to the next, which eases the mapping of the weather prediction problem onto this algorithm. The authors used an LSTM RNN because of the dependency of weather on previous conditions. For appropriate crop selection, multiple weather and soil parameters are considered in this work, which requires a combination of multiple decision trees; in such cases a random forest classifier can play a significant role in the problem domain.
The flowchart for the proposed work is presented in Fig. 2.
Flow chart: Proposed methodology.
For this work, the authors considered the Telangana state of India, shown in Fig. 3. For weather prediction, data were collected personally from the data center of the National Remote Sensing Agency (NRSA), Hyderabad, for the years 2015–2020.
Telangana state of India: Area considered for this study.
The dataset comprised approximately 1993 records. The features of the data were temperature, wind speed and direction, humidity, sun hours, etc. The units of temperature, wind speed, and rainfall were Celsius, km/hour, and mm, respectively. In the very first step, the authors pre-processed the data to fill in missing values and to transform and normalize it. The missing values were handled using the linear interpolation technique. Afterward, the data were processed to obtain the minimum and maximum values of the temperature for each day from the respective features. Subsequently, the authors converted the dataset to a common scale using min–max normalization30. Data preprocessing was done using the pandas and scikit-learn (MinMaxScaler) libraries in Python.
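A minimal version of this preprocessing step, assuming a hypothetical raw weather file and column names, might look like the following (pandas interpolation plus scikit-learn's MinMaxScaler):

```python
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

weather = pd.read_csv("nrsa_weather_2015_2020.csv", parse_dates=["date"])  # hypothetical file
weather = weather.sort_values("date")

# Fill missing readings by linear interpolation on the numeric columns
num_cols = weather.select_dtypes("number").columns
weather[num_cols] = weather[num_cols].interpolate(method="linear")

# Derive daily minimum/maximum temperature and total rainfall
daily = weather.groupby(weather["date"].dt.date).agg(
    temp_min=("temperature", "min"),
    temp_max=("temperature", "max"),
    rainfall=("rainfall", "sum"),
)

# Bring all features onto a common 0-1 scale (min-max normalization)
scaler = MinMaxScaler()
daily_scaled = pd.DataFrame(scaler.fit_transform(daily), index=daily.index, columns=daily.columns)
```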
For the processed data, an RNN, an advancement of the neural network (NN), is used to handle sequence dependence. In an RNN, the input of the current step is taken from the output of the previous step, which helps the hidden state of the RNN remember the order. In a conventional NN, by contrast, every data element is treated as an independent entity. In the proposed work, a long short-term memory (LSTM) feedback-integrated RNN is used to deploy the model. A single node of the LSTM mesh is shown in Fig. 4.
A single node of LSTM mesh.
Every LSTM node comprises three different gates, i.e., the Input (I), Output (O), and Forget (F) gates. The revised input at the present time-stamp \(T\) is denoted by \(G_T\). The value of each of these gates depends on the preceding hidden state \(H_{T-1}\) and the present input \(X_T\), as presented below31,32.
$$I_T = F\left( W_I X_T + U_I H_{T-1} + B_I \right)$$
(1)
$$F_T = F\left( W_F X_T + U_F H_{T-1} + B_F \right)$$
(2)
$$G_T = \tanh\left( W_C X_T + U_C H_{T-1} + B_C \right)$$
(3)
The revised value at the node is determined as:
$$C_T = I_T \cdot G_T + F_T \cdot C_{T-1}$$
(4)
The gate value is derived from the node state, preceding output, and present input to the node.
$$O_T = F\left( W_O X_T + U_O H_{T-1} + V_O \cdot C_T + B_O \right)$$
(5)
$$H_T = O_T \cdot \tanh(C_T)$$
(6)
In this work, the authors have trained three RNN models: one for the minimum temperature, one for the maximum temperature, and one to predict the rainfall in the region. To predict the maximum temperature, the authors used the dataset for the years 2015–2018. Data for 2019–2020 were used to test and verify the model's accuracy. The data from the set are organized as a vector {X(1), …, X(K)}3. For both types of forecasting, i.e., seasonal and 90 days prior, the dataset is mapped to an N × M matrix in which each row contains one input feature and 90 target values, expressed as
$$\left\{ X(T), X(T+1), X(T+2), \ldots, X(T+90) \right\}$$
(7)
In the above discussion, K represents the size of the time-series data; for example, K − 90 gives the dimension N, and the dimension M is 91. As the weather prediction is made by the LSTM, the model consists of an input layer, a hidden layer, and an output layer. The hidden layer comprises 4 LSTM nodes. The minimum temperature and rainfall are predicted similarly. The pseudocode for weather prediction is shown in Fig. 5. The various data features are fed to the input layer, which processes the data and forwards it to the middle layer. The middle layer comprises a number of hidden layers, each with its own activation function, bias, and weights. Due to the dependency of weather conditions on past data, LSTM-RNN is used in this work.
Pseudo code for weather prediction.
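A hedged Keras sketch of this kind of model is shown below: one scalar input per sample, a hidden layer of 4 LSTM units, and a dense output of 90 values for the 90-day-ahead forecast. The windowing helper and training settings are illustrative assumptions, and daily_scaled refers to the preprocessing sketch earlier in this section.

```python
import numpy as np
from tensorflow.keras import Input
from tensorflow.keras.layers import LSTM, Dense
from tensorflow.keras.models import Sequential

def make_windows(series: np.ndarray, horizon: int = 90):
    """Pair each value with the next `horizon` values as the prediction target."""
    X, y = [], []
    for t in range(len(series) - horizon):
        X.append([series[t]])
        y.append(series[t + 1 : t + 1 + horizon])
    return np.array(X).reshape(-1, 1, 1), np.array(y)

X_train, y_train = make_windows(daily_scaled["temp_max"].to_numpy())

model = Sequential([
    Input(shape=(1, 1)),  # one feature per time step
    LSTM(4),              # hidden layer with 4 LSTM nodes
    Dense(90),            # 90 target values (90-day forecast)
])
model.compile(optimizer="adam", loss="mse")
model.fit(X_train, y_train, epochs=50, batch_size=32, verbose=0)
```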
The results obtained using RNN are compared with ANN and found to be more accurate.
As mentioned in the Weather prediction section, in this work the Telangana state is considered for weather, soil, and crop data analysis. The geographical area of the state is classified into three agro-climatic belts: north Telangana, southern Telangana, and central Telangana. Different areas have different soil features. In north Telangana, the soil is red, shallow black, and profoundly calcareous. Southern Telangana comprises various textures of red soil, alluvial soil, and calcareous soil. Central Telangana is covered with red and calcareous soil33. The land is rated as low, medium, or highly fertile depending on its nutrient index. The main crops grown in the state are maize, rice, chilli, cotton, and soybean. These five crops are harvested over almost 50–54 thousand hectares of the total agricultural land. The input to the crop selection model comprises soil and weather parameters. The soil parameters for the algorithm are its type, pH value, water-preserving capability, and fertility. These soil parameters, together with the weather parameters predicted in the first phase, are collectively utilized to decide the appropriate crop for the land. Before being passed to the model, categorical parameters, such as type of soil, water capacity, and fertility, were encoded as numerical values. The model can be utilized for both seasonal and annual crop selection. For a season, the proposed model can recommend more than one suitable crop and its requirements, such as water, and suggests an appropriate time to sow the crops. The example dataset utilized in the proposed model is presented in Table 2.
A total of 10 crops of the state of Telangana are considered in the proposed model. These crops are soybean, castor, green gram, sunflower, red gram, maize, chilli, cotton, jowar, and rice. The model, however, can be mapped to any number of crops and types of land. This work applies the random forest classification technique for reasonable crop prediction considering soil and weather parameters. The random forest classifier uses a set of decision trees developed from subsets of the training data and aggregates the output from each decision tree to decide the outcome. The class with the maximum number of votes is considered the outcome of the algorithm. The model is customized to check the suitability of more than one crop for a particular piece of land, which is implemented using a threshold value (see the sketch after the crop selection pseudocode below). A crop is included in the list of appropriate crops if the number of decision trees in the random forest that output that crop exceeds the threshold value.
$$Th = 2 \times \left( \frac{\text{Number of trees generated with the Random Forest Classifier}}{\text{Number of classes}} \right)$$
(8)
The pseudocode for crop selection is shown in Fig. 6.
Pseudo code for crop selection.
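A simplified sketch of the selection step is given below: a random forest is fitted to (hypothetical) soil and weather features, the per-tree votes are counted, and every crop whose vote count exceeds the threshold of Eq. (8) is recommended. The dataset name, feature encoding, and number of trees are assumptions for illustration, not the study's actual settings.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

data = pd.read_csv("telangana_soil_weather_crops.csv")  # hypothetical training data
X = pd.get_dummies(data.drop(columns=["crop"]))         # encode categorical soil parameters
y = data["crop"]

n_trees = 200
rf = RandomForestClassifier(n_estimators=n_trees, random_state=42).fit(X, y)
threshold = 2 * n_trees / y.nunique()                   # Eq. (8)

def recommend_crops(sample: pd.DataFrame) -> list:
    """Return every crop whose vote count across the individual trees exceeds the threshold."""
    votes = np.zeros(len(rf.classes_))
    for tree in rf.estimators_:
        # Each tree predicts the encoded class index, used to tally votes per crop
        votes[int(tree.predict(sample.to_numpy(dtype=float))[0])] += 1
    return [crop for crop, v in zip(rf.classes_, votes) if v > threshold]

# Example: suitable crops for the first field in the (hypothetical) dataset
print(recommend_crops(X.iloc[[0]]))
```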
The output received with sample data using the classification algorithm is presented in Table 3.
View post:
Machine learning-based optimal crop selection system in smart ... - Nature.com
Sekur Private Data Ltd Announces Advertising Partnership and … – Best Stocks
On September 26, 2023, Sekur Private Data Ltd. announced a groundbreaking advertising partnership and distribution agreement with VYRE Network, a renowned global news-based streaming channel and website. With a massive viewership of over 1 million people worldwide, VYRE Network offers an ideal platform for Sekur to reach its target audience.
Under this agreement, Sekur will have the opportunity to showcase its commercials and banner ads on the streaming news website. In return, VYRE Network will receive a share of the revenue generated from all Sekur signups. This strategic collaboration primarily aims to attract enterprise and investor audiences, aligning perfectly with Sekur's business goals.
This partnership is a crucial step in Sekur's comprehensive strategy to introduce its SMB and Enterprise solutions to the vast market of small businesses in the United States, which exceeds 30 million. Sekur is a trusted provider of cybersecurity and internet privacy solutions, offering Swiss-hosted services that ensure secure communications and data protection.
Sekur's suite of secure and private communication services is designed to address the growing concerns regarding data mining and vulnerabilities present in many existing solutions. By partnering with VYRE Network, Sekur aims to further strengthen its position as a leading provider of secure and private communication solutions.
In addition to the collaboration with VYRE Network, Sekur has also joined forces with Digital Smart Solution Sarl (DSS), an esteemed IT services consulting company. This partnership will facilitate the distribution of SekurMessenger, one of Sekur's flagship products, to some of the most prominent companies in the telecommunications and banking industries in Morocco.
With these strategic alliances and partnerships, Sekur is poised to expand its reach and establish itself as a global leader in cybersecurity and internet privacy. The company's commitment to providing secure and reliable solutions sets it apart in an increasingly interconnected world.
On September 26, 2023, SWISF stock had a mixed performance. The previous close was $0.13, and it opened at $0.14. Throughout the day, the stock fluctuated between $0.13 and $0.15. The trading volume for the day was 33,928 shares, lower than the average volume over the past three months. SWISF has a market capitalization of $16.7 million, indicating it is a relatively small company. The company experienced significant revenue growth of 208.62% in the previous year, reaching $356.6 thousand in annual revenue. However, it reported an annual loss of $4.9 million, resulting in a net profit margin of -1,376.32%. There is no available data on SWISF's earnings growth for this year or the next five years. The stock does not have a P/E ratio available, but it has a price/sales ratio of 13.20 and a price/book ratio of 3.02. There is no information available on SWISF's competitors. The next reporting date and EPS forecasts for the current quarter are unknown. Investors should closely monitor future updates and financial reports to gain a better understanding of SWISF's prospects.
On September 26, 2023, Sekur Private Data Ltd (SWISF) stock experienced a significant increase in its price. The one analyst offering 12-month price forecasts for the company has a median target of 0.52, with both the high and low estimates also at 0.52. This median estimate represents a remarkable 268.17% increase from the last recorded price of 0.14.
The positive outlook for SWISF stock is further supported by the consensus among one polled investment analyst, who recommends buying the stock. This rating has remained steady since May, indicating a consistent belief in the company's growth potential.
Unfortunately, specific details regarding SWISF's current-quarter earnings per share and sales are not available at this time. The reporting date is also yet to be determined. However, the overall positive sentiment surrounding the stock suggests that investors are optimistic about its future performance.
Despite the lack of detailed information on SWISF's recent performance, the significant increase in the median price target indicates that analysts are confident in the company's growth prospects. This positive sentiment could be driven by various factors, such as strong financials, innovative products or services, or positive industry trends.
Investors considering SWISF stock should carefully evaluate the company's fundamentals, including its financial health, competitive position, and growth strategy. Additionally, staying updated on any news or developments that may impact the company's performance is crucial.
In conclusion, SWISF stock showed promising performance on September 26, 2023, with a significant increase in its price. The median target price forecasted by analysts suggests a substantial potential for growth. However, investors should conduct thorough research and analysis to make informed investment decisions, considering both the positive outlook and the inherent risks associated with investing in the stock market.
Read more from the original source:
Sekur Private Data Ltd Announces Advertising Partnership and ... - Best Stocks