
Researcher Publishes Never Before Seen Emails Between Satoshi Nakamoto and Hal Finney – Bitcoin News

Three previously unpublished emails from Bitcoin's inventor, Satoshi Nakamoto, have recently been made public. The emails reveal the correspondence between Satoshi and the early Bitcoin developer Hal Finney. The communications between Nakamoto and Finney date from November 2008 and January 2009, the very month Bitcoin was launched.

On November 27, three never-before-seen emails were made public in an editorial written by Michael Kaplikov, a professor at Pace University. According to Kaplikov, the emails came from the New York Times contributor Nathaniel Popper, who wrote the book Digital Gold; Hal Finney's wife, Fran Finney, had given Popper the emails at that time. Kaplikov published the emails alongside his editorial after confirming that they were legitimate and came from the now-deceased Hal Finney's old computer.

The first email is dated November 19, 2008, nineteen days after Bitcoin's mysterious creator published the white paper. Kaplikov, who has been studying the Bitcoin origin story, said that before the email, Nakamoto shared an early version of the Bitcoin codebase with a few people, including Hal Finney. This pre-release story is well known, as Ray Dillinger and James A. Donald also received early copies. In the email, Finney asked Satoshi about the number of nodes and scaling the Bitcoin network.

"Some of the discussion and concern over performance may relate to the eventual size of the P2P network," Finney wrote to Nakamoto. "How large do you envision it becoming? Tens of nodes? Thousands? Millions? And for clients, do you think this could scale to be usable for close to 100% of the world's financial transactions? Or would you see it as mostly being used for some core subset of transactions that have special requirements, with other transactions using a different payment system that perhaps is based on Bitcoin?"

The researcher from Pace University also highlighted that soon after this particular email, Bitcoin's creator gave Finney commit access to the Sourceforge repository. Then, in another email dated January 8, 2009, shortly after the network was launched, Satoshi wrote to Hal: "Thought you'd like to know, the Bitcoin v0.1 release with EXE and full sourcecode is up on Sourceforge," Nakamoto wrote. The creator also noted that release notes and screenshots had been uploaded to the web portal bitcoin.org. The very next day, Finney replied to Nakamoto's release email.

"Hi, Satoshi, thanks very much for that information," Finney said on January 9. "I should have a chance to look at that this weekend. I am looking forward to learning more about the code."

The very next day, Hal Finney took to Twitter and told his followers he was "running bitcoin." It seems Finney did get a chance to look at the code after his correspondence with Nakamoto. In addition to the three unpublished emails, Kaplikov also discussed the email correspondence between Finney and Nakamoto that was given to the Wall Street Journal back in 2014.

He revisited those emails because of discrepancies in their timestamps: Kaplikov stresses that the January 2009 emails appear to be roughly eight hours ahead of Greenwich Mean Time (GMT). Just recently, new research from The Chain Bulletin contributor Doncho Karaivanov tried to pinpoint Satoshi's home location by leveraging all his activity and scatter charts of all the timestamps.
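An email's Date header encodes the sender's UTC offset explicitly, which is exactly the kind of clue such timestamp analyses rely on. A minimal sketch of reading that offset, using a hypothetical header (not one of the actual Nakamoto emails):

```python
from email.utils import parsedate_to_datetime

# Hypothetical RFC 2822 Date header; "+0800" means eight hours ahead of GMT/UTC.
header = "Thu, 08 Jan 2009 22:27:40 +0800"
dt = parsedate_to_datetime(header)

# The parsed datetime is timezone-aware, so the offset can be read directly.
offset_hours = dt.utcoffset().total_seconds() / 3600
print(offset_hours)  # 8.0
```

Aggregating these offsets over many messages, together with activity patterns, is how such studies narrow down plausible home timezones.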

Karaivanov's study assumes that Satoshi Nakamoto lived in London (GMT) when he, she, or they created the Bitcoin project. However, past studies suggest that Nakamoto could also have resided in California on the U.S. west coast, and some have asserted he lived on the eastern side of the United States. Moreover, a few of the studies also assume that Satoshi Nakamoto pulled a lot of all-nighters and crammed in his work before he left the project.

Finney passed away on August 28, 2014, after suffering complications from amyotrophic lateral sclerosis (ALS). Bitcoiners and crypto proponents everywhere hold Finney in the highest regard, as he once said that the computer could help liberate people.

"It seemed so obvious to me," Finney explained before his death. "Here we are faced with the problems of loss of privacy, creeping computerization, massive databases, more centralization and [David] Chaum offers a completely different direction to go in, one which puts power into the hands of individuals rather than governments and corporations. The computer can be used as a tool to liberate and protect people, rather than to control them."

The recently published emails are interesting and give some new insight into the early relationship between Nakamoto and Finney. The emails, and Finney's Twitter post on January 10, clearly show he was very excited about the project and made time to look at Bitcoin right away. The email timestamps simply add to the mystery of Satoshi Nakamoto's identity and the uncertainty about the inventor's whereabouts during the cryptocurrency's creation period.


Image Credits: Shutterstock, Pixabay, Wiki Commons

Disclaimer: This article is for informational purposes only. It is not a direct offer or solicitation of an offer to buy or sell, or a recommendation or endorsement of any products, services, or companies. Bitcoin.com does not provide investment, tax, legal, or accounting advice. Neither the company nor the author is responsible, directly or indirectly, for any damage or loss caused or alleged to be caused by or in connection with the use of or reliance on any content, goods or services mentioned in this article.


OKEx's Withdrawal Suspension Isn't Behind Bitcoin's Rally: Analysts – CoinDesk

Bitcoin's price has risen dramatically since the very day the popular exchange OKEx announced the suspension of all crypto withdrawal services on its platform. However, while some tie the two together, many market observers see no reason to associate the latest price rally with OKEx's issues.

Bitcoin's latest rally came after OKEx's suspension of all crypto withdrawals.

While the price of bitcoin has gained significantly since the market sell-off in March, the most recent bullish run began just as OKEx said it had suspended all crypto withdrawals because one of its key holders was out of touch.

However, the suspension of withdrawals on OKEx has had little impact on bitcoin's price over the past month, said Ki Young Ju, chief executive officer of CryptoQuant.

"BTC's price on OKEx is not that different from other exchanges," he said. "[P]eople can trade their BTC on OKEx despite the withdrawal suspension."

The Malta-based crypto exchange still holds the No. 1 position for bitcoin futures open interest, currently worth $1.22 billion, according to data source Skew.

OKEx said Thursday it will resume withdrawal services as soon as this week, after founder Mingxing "Star" Xu was said to have been released from police custody in China. Jay Hao, chief executive officer of OKEx, told CoinDesk its high open interest is a positive indicator for his company.

"These are encouraging signs that confidence in the exchange remains high and I believe that even if some users decide to withdraw their funds [as soon as withdrawals are open], which is their total and absolute right, they will soon come back to OKEx," Hao said through a spokesperson on Telegram.

Bitcoin transfer volume from miners to OKEx has also dropped to almost zero since the news came out, as data from Glassnode show.

The muted bitcoin transfer volume from miners to OKEx, whose users are largely Chinese, is in line with the argument that the price surge is partly due to a drying up of supply. Miners in China are struggling to turn their bitcoin into cash because of a government crackdown on Chinese exchanges.

Darius Sit, founder of Singapore-based trading firm QCP, connects the situation for miners in China with the market, telling CoinDesk that instead of going to other platforms, miners may have been holding on to their bitcoins as prices continue to climb, tightening the bitcoin supply.

Yet others have largely disagreed with such contentions, saying the supply of bitcoin affected by OKEx's withdrawal suspension is relatively small.

"As a class, miners aren't that large a group of sellers," Ryan Watkins, bitcoin analyst at Messari, told CoinDesk in a Telegram message. "[They are] definitely not enough to drive the price up as high as it is."

Instead, Watkins argued, the recent bitcoin rally is mostly driven by the demand side, as institutional investors in North America have been buying bitcoin in large amounts.

The seemingly perfect timing of OKEx's suspension and the price rally could be purely coincidental, Watkins added.

Data from Chainalysis also indicate that after mining pools stopped sending bitcoin to OKEx, their newly minted cryptocurrency instead flowed to Binance and Huobi, both of which are also widely used in China.

Binance, Huobi and OKEx in total received 46% of bitcoin sent to exchanges from mining pools in the past 12 months, according to a Nov. 12 report from Chainalysis.

Colin Wu, a journalist based in China who first reported the Chinese miners' selling problem on his blog, told CoinDesk in a WeChat message that Western media outlets have largely exaggerated what he wrote, saying the difficulties Chinese miners have had selling bitcoin should have had only a minor impact on the recent price rally.

"The misunderstanding is that Chinese miners stopped selling coins and caused bitcoin to rise, which is illogical," Wu wrote in a tweet thread. "They did not stop selling coins. It was just a little troublesome, and the number of miners in China has been decreasing. Miners are moving to the United States and Kazakhstan."


Boosting Weather Prediction with Machine Learning – Eos

Today, predictions of the next several days' weather can be remarkably accurate, thanks to decades of development of equations that closely capture atmospheric processes. However, they are not perfect. Data-driven approaches that use machine learning and other artificial intelligence tools to learn from past weather patterns might provide even better forecasts, at lower computing cost.

Although there has been progress in developing machine learning approaches for weather forecasting, an easy method for comparing these approaches has been lacking. Now Rasp et al. present WeatherBench, a new data resource meant to serve as the first standard benchmark for making such comparisons. WeatherBench provides larger volume, diversity, and resolution of data than have been used in previous models.

These data are pulled from global weather estimates and observations captured over the past 40 years. The researchers have processed them with an eye toward making them convenient for training, validating, and testing machine learning-based weather models. They have also proposed a standard metric that WeatherBench users can apply to compare the accuracy of different models.
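The standard metric proposed with WeatherBench is a latitude-weighted root-mean-square error, which down-weights grid cells near the poles because they cover less of the Earth's surface. A minimal sketch of that idea on a toy grid (an illustration, not the actual WeatherBench code):

```python
import numpy as np

def lat_weighted_rmse(forecast, truth, lats):
    """Latitude-weighted RMSE over a regular (lat, lon) grid.

    Cells near the poles cover less area, so their errors are
    down-weighted by cos(latitude), normalised to mean 1.
    """
    w = np.cos(np.deg2rad(lats))
    w = w / w.mean()                      # shape (n_lat,)
    sq_err = (forecast - truth) ** 2      # shape (n_lat, n_lon)
    return float(np.sqrt((w[:, None] * sq_err).mean()))

# Toy 3x4 grid where the forecast is off by a constant 2 units everywhere
lats = np.array([-60.0, 0.0, 60.0])
truth = np.zeros((3, 4))
print(lat_weighted_rmse(truth + 2.0, truth, lats))  # 2.0
```

A constant error survives the weighting unchanged; the weights matter when errors are concentrated at particular latitudes.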

To encourage progress, the researchers challenge users of WeatherBench to accurately predict worldwide atmospheric pressure and temperature 3 and 5 days into the future, similar to tasks performed by traditional, equation-based forecasting models. WeatherBench data, code, and guides are publicly available online.

The researchers hope that WeatherBench will foster competition, collaboration, and advances in the field and that it will enable other scientists to create data-driven approaches that can supplement traditional approaches while also using computing power more efficiently. (Journal of Advances in Modeling Earth Systems (JAMES), https://doi.org/10.1029/2020MS002203, 2020)

Sarah Stanley, Science Writer


Machine learning – it’s all about the data – KHL Group

When it comes to the construction industry, machine learning means many things. However, at its core, it all comes back to one thing: data.

The more data that is produced through telematics, the more advanced artificial intelligence (AI) becomes, due to it having more data to learn from. The more complex the data the better for AI, and as AI becomes more advanced its decision-making improves. This means that construction is becoming more efficient thanks to a loop where data and AI are feeding into each other.

Machine learning is an application of AI that gives systems the ability to automatically learn and improve from experience without being explicitly programmed. As Jim Coleman, director of global IP at Trimble, says succinctly: "Data is the fuel for AI."

Artificial intelligence

Coleman expands on that statement and the notion that AI and data are in a loop, helping each other to develop.

"The more data we can get, the more problems we can solve, and the more processing we can throw on top of that, the broader the set of problems we'll be able to solve," he comments.

"There's a lot of work out there to be done in AI, and it all centres around this notion of collecting data, organising the data and then mining and evaluating that data."

Karthik Venkatasubramanian, vice president of data and analytics at Oracle Construction and Engineering, agrees that data is key, saying: "Data is the lifeblood for any AI and machine learning strategy to work. Many construction businesses already have data available to them without realising it."

"This data, arising from previous projects and activities and collected over a number of years, can become the source of data that machine learning models require for training. Models can use this existing data repository to train on, and then compare against a validation test before being used for real-world prediction scenarios."
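The train-then-validate workflow Venkatasubramanian describes can be sketched in a few lines. The project features and overrun labels below are synthetic stand-ins for a firm's historical records, and the model is a deliberately minimal logistic regression; this is an illustration of the workflow, not any vendor's product:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical historical project records (synthetic stand-ins):
# columns = planned duration, budget size, subcontractor count (standardised)
X = rng.normal(size=(500, 3))
# Label: 1 = the project overran its schedule (toy generative rule)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(float)

# Hold out 20% as a validation set before any real-world use
X_train, y_train = X[:400], y[:400]
X_val, y_val = X[400:], y[400:]

# Minimal logistic regression trained by gradient descent
w, b = np.zeros(3), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X_train @ w + b)))      # predicted overrun probability
    w -= 0.5 * X_train.T @ (p - y_train) / len(y_train)
    b -= 0.5 * (p - y_train).mean()

val_acc = ((1 / (1 + np.exp(-(X_val @ w + b))) > 0.5) == (y_val == 1)).mean()
print(f"validation accuracy: {val_acc:.2f}")
```

Only if the validation accuracy is acceptable would the model graduate to scoring live projects; otherwise it goes back for more data or better features.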

There are countless examples of machine learning at work in construction, with a large number of OEMs having their own programmes in place, not to mention what's being worked on by specialist technology companies.

One of these OEMs is USA-based John Deere. Andrew Kahler, a product marketing manager for the company, says that machine learning has expanded rapidly over the past few years and has multiple applications.

"Machine learning will allow key decision makers within the construction industry to manage all aspects of their jobs more easily, whether in a quarry, on a site development job, building a road, or in an underground application. Bigger picture, it will allow construction companies to function more efficiently and optimise resources," says Kahler.

He also makes the point that a key step in this process is the ability for smart construction machines to connect to a centralised, cloud-based system; John Deere has its JDLink Dashboard, and most of the major OEMs have an equivalent system of their own.

"The potential for machine learning to unlock new levels of intelligence and automation in the construction industry is somewhat limitless. However, it all depends on the quality and quantity of data we're able to capture, and how well we're able to put it to use through smart machines."

USA-based Built Robotics was founded in 2016 to address what its founders saw as a gap in the market: the lack of technology being used across construction sites, especially compared to other industries. The company upgrades construction equipment with AI guidance systems, enabling it to operate fully autonomously.

The company typically works with excavators, bulldozers, and skid steer loaders. The equipment can only work autonomously on certain repetitive tasks; for more complex tasks an operator is required.

Erol Ahmed, director of communications at Built Robotics, says that founder and CEO Noah Ready-Campbell wanted to apply robotics where it would be really helpful and have a lot of impact, and thus settled on the construction industry.

Ahmed says that the company is the only commercial autonomous heavy equipment and construction company available. He adds that the business, which operates in the US and has recently launched operations in Australia, is focused on automating specific workflows.

"We want to automate specific tasks on the job site and get them working really well. It's not about developing some sort of all-encompassing robot that thinks and acts like a human and can do anything you tell it to. It is focusing on specific things, doing them well, helping them work in existing workflows. Construction sites are very complicated, so just automating one piece is very helpful and provides a lot of productivity savings."

Hydraulic system

Ahmed confirms that as long as the equipment has an electronically controlled hydraulic system, converting a Caterpillar, Komatsu or Volvo excavator, for example, isn't too different. There is clearly interest in the company: in September 2019 it announced it had received US$33 million in investment, bringing its total funding to US$48 million.

Of course, a large excavator or a mining truck at work without an operator is always going to catch the eye, and our attention and imagination. Such machines are perhaps the most visible aspect of machine learning on a construction site, but there are a host of other examples working away in the background.

As Trimble's Coleman notes: "I think one of the interesting things about good AI is you might not know what's even there, right? You just appreciate the fact that, all of a sudden, there's an increase in productivity."

AI is used in construction for everything from specific tasks, such as informing an operator when a machine might fail or isn't being used productively, to a broader, more macro sense. For instance, for contractors planning how best to construct a project, there is software with AI that can map out the most efficient processes.

The AI can make predictions about schedule delays and cost overruns. As there is often existing data on schedule and budget performance, this can be used to make predictions, and these predictions will get better over time. As we said before: the more data that AI has, the smarter it becomes.

Venkatasubramanian from Oracle adds that "smartification" is happening in construction, saying: "Schedules and budgets are becoming smart by incorporating machine learning-driven recommendations."

"Supply chain selection is becoming smart by using data across disparate systems and comparing performance. Risk planning is also getting smart by using machine learning to identify and quantify risks from the past that might have a bearing on the present."

There is no doubt that construction has been slower than other industries to adopt new technology, but this isn't just because of some deep-seated resistance to new ideas.

For example, agriculture has seen greater application of machine learning, but it is easier for that sector to implement: every year, the task of getting in the crops on a farm will be broadly similar.

New challenges

As John Downey, director of sales EMEA at Topcon Positioning Group, explains: "With construction there's a slower adoption process because no two projects, or indeed construction sites, are the same, so the technology is always confronted with new challenges."

Downey adds that as machine learning develops it will work best with repetitive tasks like excavation, paving or milling, but he thinks the potential goes beyond this.

"As we move forward and AI continues to advance, we'll begin to apply it across all aspects of construction projects."

"The potential applications are countless, and the enhanced efficiency, improved workflows and accelerated rate of industry it will bring are all within reach."

Automated construction equipment still needs operators to oversee it. As this sector develops, that could mean one person for every three or five machines, or more; it is currently unclear. With construction facing a skills shortage, this is an exciting avenue. There is also AI that helps contractors better plan, execute and monitor projects; you don't need machine learning-grade intelligence to see the potential transformational benefits of this when multi-billion-dollar projects are being planned and implemented.


Need a Hypothesis? This A.I. Has One – The New York Times

They found that the top 10 sets of attitudes linked to having strict ethical beliefs included views on religion, views about crime and confidence in political leadership. Two of those 10 stood out, the authors wrote: the belief that humanity has a bright future was associated with a strong ethical code, and the belief that humanity has a bleak future was associated with a looser one.

"We wanted something we could manipulate, in a study, and that applied to the situation we're in right now: what does humanity's future look like?" Dr. Savani said.

In a subsequent study of some 300 U.S. residents, conducted online, half of the participants were asked to read a relatively dire but accurate accounting of how the pandemic was proceeding: China had contained it, but not without severe measures and some luck; the northeastern U.S. had also contained it, but a second wave was underway and might be worse, and so on.

This group, after its reading assignment, was more likely to justify violations of Covid-19 etiquette, like hoarding groceries or going maskless, than the other participants, who had read an upbeat and equally accurate pandemic tale: China and other nations had contained outbreaks entirely, vaccines are on the way, and lockdowns and other measures have worked well.

"In the context of the Covid-19 pandemic," the authors concluded, "our findings suggest that if we want people to act in an ethical manner, we should give people reasons to be optimistic about the future of the epidemic through government and mass-media messaging, emphasizing the positives."

That's far easier said than done. No psychology paper is going to drive national policies, at least not without replication and more evidence, outside experts said. But a natural test of the idea may be unfolding: based on preliminary data, two vaccines now in development are around 95 percent effective, scientists reported this month. Will that optimistic news spur more responsible behavior?

"Our findings would suggest that people are likely to be more ethical in their day-to-day lives, like wearing masks, with the news of all the vaccines," Dr. Savani said in an email.


Artificial Intelligence and Machine Learning Together To Reach the Culmination of Growth By 2023 – thepolicytimes.com

Artificial Intelligence (AI) is the technology that enables a machine to simulate human behavior. It is one of today's trending technologies, and machine learning is its main subset. AI systems deal with both structured and unstructured data. Machine Learning (ML) is a subset of Artificial Intelligence that explores the development of algorithms that learn from given data. Such algorithms are able to learn from the data, teach themselves to adapt to new circumstances, and perform certain tasks.

Big Data: augmenting the intelligence of machines

In many areas of research and industry, ML and AI are becoming dominant problem-solving techniques. The two share a fundamental hypothesis: that computation is an effective way to model intelligent behavior in machines, whether through reinforcement learning methods or probabilistic search techniques. Big data is no fad. As the world grows at an exponential rate, so does the volume of data collected across the globe. Data is becoming contextually relevant, which is breaking new ground for ML and AI.

The need for AI and ML

Data is the lifeblood of all businesses. AI automates repetitive learning and analyzes more, and deeper, data using neural networks with many hidden layers. In summary, the goal of AI is to create technology that allows machines to function intelligently. Data-driven decisions increasingly make the difference between keeping up with the competition and falling further behind. Machine learning can thus play a great role in unlocking the value of customer data and enacting decisions that keep a company ahead of the competition.

Also read: Artificial Intelligence; Why Journalism Needs to be Human- Centric?

The balancing skills between AI and ML

As stated by Terry Simpson, technical evangelist at Nintex, the skill sets behind AI and ML vary enormously. On one hand, there is the technical developer who can execute a given task once given the desired outcome; on the other, there is the business analyst who needs to identify what the business actually needs and have the vision to automate it. More and more, organizations are starting to understand how AI and ML can have a positive strategic impact.

The PolicyTimes suggestions

Also read: Artificial Intelligence to conquer the world in 50 years



Machine Learning-Based Risk Assessment for Cancer Therapy-Related Cardiac Dysfunction in 4300 Longitudinal Oncology Patients – DocWire News


J Am Heart Assoc. 2020 Nov 26:e019628. doi: 10.1161/JAHA.120.019628. Online ahead of print.

ABSTRACT

Background The growing awareness of cardiovascular toxicity from cancer therapies has led to the emerging field of cardio-oncology, which centers on preventing, detecting, and treating patients with cardiac dysfunction before, during, or after cancer treatment. Early detection and prevention of cancer therapy-related cardiac dysfunction (CTRCD) play important roles in precision cardio-oncology. Methods and Results This retrospective study included 4309 cancer patients between 1997 and 2018 whose laboratory tests and cardiovascular echocardiographic variables were collected from the Cleveland Clinic institutional electronic medical record database (Epic Systems). Among these patients, 1560 (36%) were diagnosed with at least 1 type of CTRCD, and 838 (19%) developed CTRCD after cancer therapy (de novo). We posited that machine learning algorithms can be implemented to predict CTRCDs in cancer patients according to clinically relevant variables. Classification models were trained and evaluated for 6 types of cardiovascular outcomes, including coronary artery disease (area under the receiver operating characteristic curve [AUROC], 0.821; 95% CI, 0.815-0.826), atrial fibrillation (AUROC, 0.787; 95% CI, 0.782-0.792), heart failure (AUROC, 0.882; 95% CI, 0.878-0.887), stroke (AUROC, 0.660; 95% CI, 0.650-0.670), myocardial infarction (AUROC, 0.807; 95% CI, 0.799-0.816), and de novo CTRCD (AUROC, 0.802; 95% CI, 0.797-0.807). Model generalizability was further confirmed using time-split data. Model inspection revealed several clinically relevant variables significantly associated with CTRCDs, including age, hypertension, glucose levels, left ventricular ejection fraction, creatinine, and aspartate aminotransferase levels. Conclusions This study suggests that machine learning approaches offer powerful tools for cardiac risk stratification in oncology patients by utilizing large-scale, longitudinal patient data from healthcare systems.

PMID:33241727 | DOI:10.1161/JAHA.120.019628
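The AUROC values reported in the abstract have a simple probabilistic reading: the chance that a randomly chosen patient who develops the outcome is scored as higher-risk than a randomly chosen patient who does not (ties counting half). A minimal sketch with made-up risk scores, not the study's data:

```python
import numpy as np

def auroc(labels, scores):
    """AUROC = probability that a random positive case is scored
    above a random negative case, with ties counting half."""
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # Compare every positive score against every negative score
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Toy example: risk scores that mostly, but not always, rank events higher
labels = [0, 0, 0, 1, 1]
scores = [0.10, 0.40, 0.35, 0.80, 0.30]
print(auroc(labels, scores))  # 4 of 6 positive-negative pairs ranked correctly
```

An AUROC of 0.882 for heart failure, as reported, means about 88% of such patient pairs are ranked correctly by the model.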


Metal Geochemistry Meets Machine Learning in the North Atlantic – Hydro International

Surveying the seabed is still an enormous task. So far, only 20% of the world's underwater regions have been mapped with echosounders. And that refers only to the topography, not to the content; that is, the composition of the seafloor.

"The existing sampling efforts are virtually just tiny pinpricks in the vast amount of uncertainty that has so far covered the seafloor," says Dr Timm Schöning from the Deep-Sea Monitoring group at GEOMAR, who led an iAtlantic expedition aboard the German research vessel Maria S. Merian in autumn 2020. Over a period of four weeks, a team of geochemists and data scientists explored the seafloor of the North Atlantic using an innovative combination of mapping, direct sampling and novel data analysis methods.

The researchers chose two work areas: the Porcupine Abyssal Plain off Ireland, and the Iberian Abyssal Plain between the Portuguese mainland and the Azores. Different measuring methods were used. The seafloor was mapped regionally with the shipboard multibeam echosounder on the Merian. A towed camera system provided additional photos of the seafloor at selected positions, which will then be combined to create local, high-resolution maps. A TV-Multicorer was deployed at selected spots, collecting several samples of the uppermost seafloor sediment layers simultaneously.

The team aboard RV Maria S. Merian prepares to retrieve sediment samples from the multicorer. (Image courtesy T. Schöning)

"In this way, we not only obtained more data on the seafloor structure itself, but also on its composition at particularly interesting points," says Dr Schöning. "Using new data analysis methods, we eventually intend to extrapolate the results of the sample analyses to local photo maps. In turn, the findings from the photo mosaic maps will be extrapolated to the regions covered by the echosounder mapping by means of machine learning."
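The extrapolation step Schöning describes, generalising a handful of physical samples across an entire photo map, can be sketched as a nearest-neighbour regression over image features. Everything below (the features, the metal concentrations, the number of tiles) is a made-up stand-in, not the expedition's data or method:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-tile image features for 200 photo-map tiles (e.g. colour, texture)
features = rng.normal(size=(200, 4))
# Physical multicorer samples exist for only a dozen of those tiles
sampled = rng.choice(200, size=12, replace=False)
metal_ppm = 50 + 10 * features[sampled, 0]  # toy "ground truth" at sampled tiles

def knn_predict(x, train_X, train_y, k=3):
    """Average the k sampled sites nearest in feature space."""
    dists = np.linalg.norm(train_X - x, axis=1)
    return train_y[np.argsort(dists)[:k]].mean()

# Extrapolate the pinprick samples across the entire photo map
predictions = np.array([knn_predict(f, features[sampled], metal_ppm)
                        for f in features])
print(predictions.shape)
```

The same logic, scaled up, carries photo-map estimates out to the far larger echosounder-mapped regions.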

Overall, the trip was very successful for the team. They were also able to assist international colleagues by salvaging an instrument belonging to the UK's National Oceanography Centre: during a storm offshore Ireland, a large measuring buoy from the Porcupine Abyssal Plain Observatory had broken loose from its mooring. It was recovered by the Merian and brought back to Emden, Germany, and will be returned to the UK by land, much to the relief and gratitude of its owners.

Now the team is busy publishing all acquired digital data according to FAIR standards, and all data will be made available to the international research community.

You can read the expedition blog at http://www.oceanblogs.org/msm96/.

Rescue mission: successful recovery of the UK's PAP mooring buoy onto the back deck of the Maria S. Merian.


Postdoctoral Research Associate in Computer Vision and Machine Learning job with DURHAM UNIVERSITY | 235683 – Times Higher Education (THE)

Department of Computer Science

Grade 7: £33,797 - £35,845
Fixed Term - Full Time
Contract Duration: 24 months
Contracted Hours per Week: 35
Closing Date: 28-Dec-2020, 7:59:00 AM

Durham University

Durham University is one of the world's top universities with strengths across the Arts and Humanities, Sciences and Social Sciences. We are home to some of the most talented scholars and researchers from around the world who are tackling global issues and making a difference to people's lives.

The University sits in a beautiful historic city where it shares ownership of a UNESCO World Heritage Site with Durham Cathedral, the greatest Romanesque building in Western Europe. A collegiate University, Durham recruits outstanding students from across the world and offers an unmatched wider student experience.

Less than 3 hours north of London, and an hour and a half south of Edinburgh, County Durham is a region steeped in history and natural beauty. The Durham Dales, including the North Pennines Area of Outstanding Natural Beauty, are home to breathtaking scenery and attractions. Durham offers an excellent choice of city, suburban and rural residential locations. The University provides a range of benefits including pension and childcare benefits, and the University's Relocation Manager can assist with potential schooling requirements.

Durham University seeks to promote and maintain an inclusive and supportive environment for work and study that assists all members of our University community to reach their full potential. Diversity brings strength and we welcome applications from across the international, national and regional communities that we work with and serve.

The Department

The Department of Computer Science is rapidly expanding: it will more than double in size over the next 10 years, from 18 to approximately 40 staff. A new building for the Department (shared with Mathematical Sciences) will be built to house the expanded Department, and is expected to be completed in 2021. The current Department has research strengths in (1) algorithms and complexity, (2) computer vision, imaging, and visualisation, and (3) high-performance computing, cloud computing, and simulation. We work closely with industry and government departments.

Research-led teaching is a key strength of the Department, which came 5th in the Complete University Guide. The department offers BSc and MEng undergraduate degrees and is currently redeveloping its interdisciplinary taught postgraduate degrees. The size of its student cohort has more than trebled in the past five years. The Department has an exceptionally strong External Advisory Board that provides strategic support for developing research and education, consisting of high-profile industrialists and academics.

Computer Science is one of the very best UK Computer Science Departments with an outstanding reputation for excellence in teaching, research and employability of our students.

The Role

We are seeking a full-time Postdoctoral Research Associate (PDRA) to join Prof. Toby Breckon's research team at Durham University. The post is funded, for an initial fixed-term period of 24 months, by an ongoing portfolio of research work primarily spanning automatic object detection and classification for wide-area visual surveillance (in collaboration with a large industrial partner), aviation security (in collaboration with the UK and US governments), and sensing for future autonomous vehicles (in collaboration with a number of industrial collaborators).

The researcher will have the opportunity to work on common themes of machine learning research with applications across several funded work streams within the group. They will consider the use of cutting-edge deep learning algorithms for image classification and generalized data understanding tasks (object detection, human pose and behaviour understanding, and materials discrimination), in addition to integrated aspects of visual tracking and stereo vision across a range of image modalities. Specifically, they will investigate novel aspects of automatic adaptability of contemporary machine learning approaches within these tasks. They will develop software algorithms and manage their own academic research, in addition to delivering project outputs to a range of external industrial and government collaborators.

In addition to published research output, the candidate can expect their research to have significant impact across a range of industrial and governmental collaborators and to make a major innovation contribution to future visual surveillance and vehicle autonomy applications.

The post offers an outstanding opportunity to gain a strong research track record in an exciting and fast-moving area of applied computer vision and machine learning, whilst working in an environment with high levels of external collaboration and industrial research impact.

Further details on the research portfolio can be found on the following website:

Prof. Toby Breckon, publications and demos:https://www.durham.ac.uk/toby.breckon/

Responsibilities:

This Postdoctoral Research Associate (PDRA) post at Durham University requires an enthusiastic researcher with expertise in the development of computer vision, image processing and/or machine learning techniques. The project work with external collaborators requires someone who can develop robust, well-documented code efficiently and has the ability to work with exotic sensing hardware as required. Researchers lacking evidence of code development in a delivery environment, or strong potential to work as part of a multidisciplinary team spanning multiple organisations, are unlikely to be successful. The post is fixed term for 24 months due to external funding.

While the post is based for the full period in Durham, it will be necessary for the researcher to travel for meetings and/or system trials as part of the project. There will also be the opportunity for the researcher to attend national and international conferences to present the work, and there will be opportunities to gain experience of teaching at undergraduate level. The researcher will join the Innovative Computing Research Group within the Department.

The post-holder is employed to work on a research project which will be led by another colleague. While this means that the post-holder will not be carrying out independent research in their own right, the expectation is that they will contribute to the advancement of the project through the development of their own research ideas and the adaptation and development of research protocols.

Successful applicants will, ideally, be in post by January 2021.

How to Apply

For informal enquiries please contact Prof. Toby Breckon, toby.breckon@durham.ac.uk. All enquiries will be treated in the strictest confidence.

We prefer to receive applications online via the Durham University Vacancies Site: https://www.dur.ac.uk/jobs/. As part of the application process, you should provide details of 3 (preferably academic/research) referees and the details of your current line manager so that we may seek an employment reference.

Applications are particularly welcome from women and black and minority ethnic candidates, who are under-represented in academic posts in the University.

What to Submit

All applicants are asked to submit:

Next Steps

The assessment for the post will include a formal interview and a presentation of recent research results. Shortlisted candidates will be invited for interview and assessment (date TBC).

The Requirements

Essential:

Qualifications

Experience

Skills

Desirable:

Experience

Skills

DBS Requirement:Not Applicable.

Read the original post:
Postdoctoral Research Associate in Computer Vision and Machine Learning job with DURHAM UNIVERSITY | 235683 - Times Higher Education (THE)


Tracking H1N1pdm09, the Hantavirus, and G4 EA H1N1 w/ Data Mining – hackernoon.com

After the start of this whole Covid19 pandemic, worries about other viruses have been making the rounds. These worries have ranged from variations of the Hantavirus showing up in China to newly recurring worries about H1N1 strains. While people aren't too worried about widespread animal-to-human and human-to-human transmission, the same was once thought about Covid19. Even if these viruses are less likely to become a serious concern, their data is, at least to some degree, worth exploring.

My certainty revolves around the fact that I believe, within the next ten years, we will likely see a virus of similar magnitude or cause for concern as Covid19. This is just a guess, given how Covid19 followed H1N1. People should be better prepared to react than they were this time. I hope I am wrong on this "next ten years" prediction and that people go out of their way to annihilate these concerns.

This did, however, inspire me to look at data on these viruses from some sources. I didn't go on that much of a data spree and wasn't as detailed as in my previous coding challenges, since I wanted to keep this post simple and compact. I also have a limited schedule in terms of the models I want to build.

The above model is of antibody responses in mice to H7N9 and H1N1pdm09 vaccines. The data source can be seen here, and was part of an open access research article on PLOS ONE. The researchers involved were part of Baxter BioScience in Austria. I'm just visualizing said data in a meaningful manner, and the same applies to all other source-data visuals in this article.
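The raw datasets themselves aren't reproduced here, but the kind of straightforward visualization involved can be sketched in a few lines of pandas and matplotlib. The strain names match those discussed in the post, while the dose and titre numbers below are made up stand-ins, not values from the actual PLOS ONE supplementary data.

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical stand-in for a raw supplementary table of mean
# antibody titres per vaccine strain and dose.
df = pd.DataFrame({
    "strain": ["H7N9", "H7N9", "H1N1pdm09", "H1N1pdm09"],
    "dose_ug": [3.75, 7.5, 3.75, 7.5],
    "mean_titre": [120, 310, 480, 950],
})

# Group by strain and plot the raw numbers -- no modelling, just
# putting the values where the eye can compare them.
fig, ax = plt.subplots()
for strain, grp in df.groupby("strain"):
    ax.plot(grp["dose_ug"], grp["mean_titre"], marker="o", label=strain)
ax.set_xlabel("vaccine dose (ug)")
ax.set_ylabel("mean antibody titre")
ax.legend()
fig.savefig("titres.png")
```

This is the whole workflow for each dataset in the post: load the raw table, group by the variable of interest, and plot it without further transformation.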

The above model relates to differentially expressed proteins in A549 cells for the H7N9 and H1N1pdm09 viral strains. The data set was also made available through PLOS ONE, and the study was funded by a grant from the Shenzhen Science and Technology Innovation Project.

The third data set, used for the above model, is also hosted on PLOS ONE. The data was published by the PLOS ONE staff and relates to "Characteristics, treatment and outcome of Influenza A(H1N1)pdm09-infected CF patients". The above visual relates to infections, and the data shown is raw.

This above model is of Hantavirus host assemblages in the Atlantic Forest, and is based on a data set that is also on PLOS, in PLOS Neglected Tropical Diseases. The author summary of the study can be seen here.

The above model is of a data set on the pathology of Hantavirus in bats and insectivores in China, by species and location. That data was published in PLOS PATHOGENS; the author summary of the study can be seen here. This data model has been visualized from its raw data format.

The above model is of a data set published on Dryad. It concerns swine in Mexico and the origins of the 2009 H1N1 influenza in that swine population. The work is in the public domain, and the authors are listed at the top here. The researchers come from various backgrounds and institutions, including the Icahn School of Medicine at Mount Sinai, the National Institutes of Health, Laboratorio Avi-Mex, KU Leuven, and the University of Edinburgh.

This above data model is based on a UK study of antigenic reactivity in the 2009 H1N1 pandemic. The data set is part of a research article on PLOS ONE. The authors' summary can be seen here; they are part of the Centre for Infections, Health Protection Agency, London, United Kingdom. The data has been visualized in its raw format.

Visualizing a bunch of data seems like such a basic project, and that is because it is. This is in no way as complex as the things I have done before or algorithmic pipelines I have built. The question is then, "why do this?". The answer is simple. Sometimes it is best to do things for the purpose of simplicity, looking at what is out there and drawing conclusions. Not everything needs to be spectacularly complex, and this is even true with data.

The question is, what is next? What do data sets like these, and these visualizations, inspire me to do? I have a variety of options. I have considered utilizing my decentralized-internet SDK for building grid computing virology pipelines and networks. I have also considered trying to gather my own data, or working with companies whose sequenced data has been corrupted. In terms of this complexity problem, the world is my oyster.

The question isn't just what one can do, but what one can prevent. More extensive data sets from a variety of researchers will allow for predictive models, as well as references people can use to lower the economic damage if such outbreaks happen again. Whether it's H1N1, Covid19, or some upcoming pandemic, people are still making the same mistakes as before. The response should be based on formalities, data, and common sense. It doesn't need to be overly politicized or financially milked the way it usually has been.


Read the original post:

Tracking H1N1pdm09, the Hantavirus, and G4 EA H1N1 w/ Data Mining - hackernoon.com
