Category Archives: Data Mining

Proof-of-Work on Blockchain Explained – LCX – LCX

The Significance of Proof-of-Work

Proof-of-work serves multiple essential purposes within the blockchain ecosystem. Firstly, it ensures the distributed consensus required for validating transactions and maintaining a single version of the truth across the network. Secondly, it acts as a deterrent against malicious actors attempting to manipulate the system by requiring significant computational resources and energy expenditure. Lastly, PoW serves as an incentive mechanism, rewarding miners with newly minted cryptocurrency tokens for their computational efforts.

Transaction Validation:

When a user initiates a transaction on the blockchain, it gets broadcast to all nodes within the network. Miners collect these transactions and group them into blocks. Before adding a block to the chain, miners need to validate the transactions within it.

Hashing:

Miners utilize cryptographic hash functions, such as SHA-256 (used in Bitcoin), to create a unique digital fingerprint of the block's data, including the transactions and a reference to the previous block. The output of this hashing process is called a hash.
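As a rough illustration of the idea (not Bitcoin's actual block-header serialization, which double-hashes a fixed binary format), the sketch below fingerprints a block's contents with SHA-256 using Python's standard library; the field names are illustrative.

```python
# Minimal sketch of block hashing, assuming an illustrative JSON serialization
# rather than any real network's binary block-header format.
import hashlib
import json

def hash_block(prev_hash: str, transactions: list, nonce: int) -> str:
    """Return the SHA-256 hex digest of the block's serialized contents."""
    payload = json.dumps(
        {"prev_hash": prev_hash, "transactions": transactions, "nonce": nonce},
        sort_keys=True,
    ).encode()
    return hashlib.sha256(payload).hexdigest()

print(hash_block("0" * 64, [{"from": "alice", "to": "bob", "amount": 5}], nonce=0))
```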

Mining Difficulty:

To control the rate at which new blocks are added to the blockchain and maintain consistency, the network adjusts the mining difficulty periodically. The difficulty is determined by a target value set for the hash: the lower the target, the harder the puzzle. Miners must find a hash value that falls at or below this target, which is typically achieved by varying a value called the nonce.

Finding the Nonce:

Miners iteratively change the nonce value in the block's header until they find a hash that meets the difficulty target. Because the output of the hash function is effectively unpredictable, miners must perform an enormous number of computations (hash attempts), varying the nonce until they discover a valid hash.
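A minimal sketch of that search loop, reusing the illustrative hashing above and expressing the difficulty as a number of leading zero bits (real networks encode the target differently, for example Bitcoin's compact "bits" field):

```python
# Illustrative proof-of-work search: increment the nonce until the SHA-256
# digest of the block contents falls below the difficulty target.
import hashlib
import json

def mine(prev_hash: str, transactions: list, difficulty_bits: int):
    target = 2 ** (256 - difficulty_bits)      # smaller target = harder puzzle
    nonce = 0
    while True:
        payload = json.dumps(
            {"prev_hash": prev_hash, "transactions": transactions, "nonce": nonce},
            sort_keys=True,
        ).encode()
        digest = hashlib.sha256(payload).hexdigest()
        if int(digest, 16) < target:           # valid proof-of-work found
            return nonce, digest
        nonce += 1

nonce, block_hash = mine("0" * 64, [{"from": "alice", "to": "bob", "amount": 5}], 16)
print(nonce, block_hash)   # roughly 65,000 attempts on average at 16 difficulty bits
```

Raising difficulty_bits by one doubles the expected number of attempts, which is how a network can tune the average block interval.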

Proof-of-Work:

The miner who finds a valid hash meeting the required difficulty broadcasts the block to the network. Other participants can easily verify its validity by applying the same hash function to the block and comparing the result to the target.
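Verification is asymmetric: it costs a single hash computation. Continuing the same illustrative serialization used above:

```python
# Verification sketch: recompute the digest from the claimed nonce and block
# contents, then check it against the same difficulty target.
import hashlib
import json

def verify(prev_hash, transactions, nonce, claimed_hash, difficulty_bits) -> bool:
    payload = json.dumps(
        {"prev_hash": prev_hash, "transactions": transactions, "nonce": nonce},
        sort_keys=True,
    ).encode()
    digest = hashlib.sha256(payload).hexdigest()
    return digest == claimed_hash and int(digest, 16) < 2 ** (256 - difficulty_bits)
```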

Block Addition and Rewards:

Once a valid hash is found, the miner adds the block to the blockchain, including the hash of the previous block, and propagates it throughout the network. As a reward for their efforts, the successful miner receives a predetermined amount of cryptocurrency tokens, often along with transaction fees associated with the transactions in the block.

The Proof-of-Work consensus mechanism has the following problems:

The 51% risk: If a single entity controls 51% or more of the network's total hashing power, it can manipulate the blockchain, for example by reversing its own transactions or censoring new ones.

Time-consuming: Miners must test an enormous number of nonce values before discovering the correct solution to the puzzle required to mine a block, which is a slow process.

Resource consumption: In order to solve the difficult mathematical puzzle, miners use a substantial amount of computing capacity, which consumes valuable resources (money, energy, space, equipment). By the end of 2028, it is anticipated that 0.3% of the world's electricity will be used to verify transactions.

Not instantaneous: Confirmation of a transaction typically takes 10 to 60 minutes, because the transaction must first be mined into a block and added to the blockchain before it is committed.

Proof-of-work is a robust consensus algorithm that has revolutionized the world of cryptocurrencies by providing a secure and decentralized system. By employing computational work, PoW ensures the integrity of transactions and prevents malicious activities within the blockchain network. While it has been successful in many cryptocurrencies, the increasing energy consumption associated with PoW has raised concerns about its long-term sustainability. However, ongoing research and the development of alternative consensus algorithms continue to explore more energy-efficient and environmentally friendly options for securing blockchain networks.

More:

Proof-of-Work on Blockchain Explained - LCX - LCX

Reducing disaster risk for the poor in tomorrow’s cities with … – Nature.com

Computational science, including numerical simulation through high-performance computing, data analytics and visualization, can underpin the SDGs. This is especially true when deployed in collaboration with other scientific domains and as part of co-produced knowledge-generation processes with a range of urban stakeholders and end users. By acknowledging the systemic nature of the causes of disasters, such research must facilitate the inclusive engagement of scientists, engineers, policy-makers, economists, private sector groups and, critically, representatives of the citizens who will live in the cities experiencing rapid growth. The SDGs recognize that urban development plans made today will either brighten or blight the lives of citizens for centuries.

A three-part agenda for interdisciplinary science marks out how computational science can be used to underpin and catalyze this ambition. Each step in the agenda can stand alone or together form a structured process from better understanding to better action to reduce disaster risk in future cities.

Digitally capturing inclusive future visions

Many social science methodologies are available with which to solicit preferences for neighborhood or city-wide futures. A challenging task is enabling such methodologies to capture the subjective visions of the future of diverse urban stakeholders. Only by doing this can future cities disrupt established norms and consider what a safer city of the future looks like from different or multiple perspectives. The outputs of such methodologies are difficult to translate into policy options; they are often qualitative and can appear imprecise to policy-makers. However, such qualitative information has a huge potential to act as a basis for future urban scenario development if it can be assimilated into precise digital representations. Computational science can help here. Spatial components of the projections (such as desired land-use zones and their attributes) can be translated into land-use plans using geographic information systems and related computational tools. This information can be complemented with predicted patterns of urban growth, determined using machine learning algorithms that rely on remote sensing data. Spatial priorities emerging from stakeholder groups are thus rendered into high-resolution digital representations of possible urban futures [8].

Such digital future cities also incorporate detailed attributes of people and assets. These include engineering characteristics for each building and infrastructure component and system, information on socio-demographics for each individual and household, and data on socio-physical interdependencies (for instance, where each person goes to work and where each child goes to school). The virtual representation is achieved using several computational models, including synthetic population-generation algorithms, human mobility methods, procedural modelling and optimization processes [8,9].
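The cited pipelines are considerably richer, but as a toy sketch of the synthetic-population idea, one can draw hypothetical individuals whose attributes follow assumed marginal distributions and attach a simple socio-physical link such as a workplace district:

```python
# Toy synthetic-population sketch: draw hypothetical individuals whose
# attributes follow assumed marginal distributions. The cited pipelines use
# far richer census data and constraint-fitting methods.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 10_000
population = pd.DataFrame({
    "age": rng.integers(0, 90, n),
    "income_band": rng.choice(["low", "middle", "high"], n, p=[0.5, 0.35, 0.15]),
    "home_district": rng.choice(["north", "centre", "south"], n, p=[0.3, 0.4, 0.3]),
})
# A simple socio-physical interdependency: where each person works.
population["work_district"] = rng.choice(["north", "centre", "south"], n)
print(population.head())
```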

Exposing digital futures to likely hazard events

The high-resolution virtual representations of possible future urban developments must be exposed to hazard events that are consistent with the hydrological and geophysical environment of the city. A series of hazard events can be selected to cover possible life-cycle experiences of the development. A key effort in the Tomorrow's Cities project, for example, has been to code these events into high-resolution, physics-based simulations [10] (Fig. 1), taking advantage of the latest developments in high-performance computing. A custom-developed web-based application merges site-specific intensity data from the hazard event with exposure and vulnerability information in the future development scenario to compute the likely impact of any particular event. These calculations use several underlying computational tools, including high-resolution, multi-hazard fragility models developed from detailed building-level numerical performance assessments [11] and data-mining models that can distinguish the magnitude of disaster impacts on the basis of social vulnerability indicators [12,13]. Agent-based modelling is another powerful approach in the field of disaster simulation that allows researchers to simulate the dynamic behavior of individual entities (agents), with their socio-economic features, within a complex system.
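Building-level fragility models of the kind referenced here are often expressed as lognormal curves giving the probability of exceeding a damage state at a given shaking intensity. A generic sketch, with invented parameters rather than values from the cited models:

```python
# Generic fragility-curve sketch: probability that a building class exceeds a
# damage state at a given peak ground acceleration (PGA). Median and dispersion
# values are invented, not taken from the cited models.
import numpy as np
from scipy.stats import lognorm

def p_damage(pga_g, median_g=0.4, beta=0.6):
    """Lognormal fragility: P(damage state exceeded | PGA in g)."""
    return lognorm.cdf(pga_g, s=beta, scale=median_g)

site_pga = np.array([0.1, 0.3, 0.6])   # simulated hazard intensity at three sites
print(p_damage(site_pga))              # expected fraction of buildings damaged at each site
```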

Fig. 1: Disaster reduction programs often depend on externally developed solutions imposed on specific local challenges. Computational science provides digital tools that can support innovative capacity strengthening, freeing possible-futures thinking from the responsibility for real lives and encouraging experimentation with innovative planning solutions. a,b, The virtual city of Tomorrowville is shaken by a virtual earthquake (a) and flooded by a virtual extreme rainfall event triggered by climate change (b), exposing spatial variability in exposure and driving reconsideration of spatially uniform building regulations.

Depending on the particular scenario, multiple impact metrics reflecting diverse aspects of the lived experience (for instance, number of deaths, number of displacements, number of injuries, hospital occupancy, lost days of production or school and total replacement costs) can be calculated and mapped, providing a detailed picture of the total impact of any disaster event resulting from the decisions and policies that generated the specific digital future being tested. Each of these metrics can be disaggregated in different ways, including by age, gender, income or any other attribute contained in the demographic dataset of the virtual future representation, providing an understanding of the consequences of the decisions made during planning and scenario building.
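In practice this disaggregation is a straightforward group-by over the simulated per-person results; a minimal sketch, with invented column names:

```python
# Disaggregation sketch: group simulated per-person impact results by
# demographic attributes. Column names are invented.
import pandas as pd

impacts = pd.DataFrame({
    "gender": ["f", "m", "f", "m", "f"],
    "income_band": ["low", "low", "middle", "high", "low"],
    "displaced": [1, 0, 1, 0, 1],
    "injured": [0, 1, 0, 0, 1],
})

by_group = impacts.groupby(["income_band", "gender"])[["displaced", "injured"]].sum()
print(by_group)   # e.g. displacements concentrated in low-income groups
```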

To complete the picture of the root causes of disaster impact for policy-makers, quantifying and mitigating social vulnerability (the susceptibility of an individual from a given group to the impacts of hazards) can help to build resilience to multiple types of hazard shock. So far, there is a dearth of disaggregated data recording disaster impacts and social vulnerability measures simultaneously, and the current priority is to collect longer data series. These might emerge, for example, from satellite remote sensing; computational methods in unsupervised learning and data clustering as well as deep learning (for instance, neural networks) could then be leveraged to refine quantitative modelling of social vulnerability. Exploring nonlinear and multi-scalar relations between exposure, vulnerability and disaster impacts is an important research ambition [14].
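As a minimal illustration of the clustering step mentioned here, one might group households on a few assumed vulnerability indicators with k-means; the indicators and data below are simulated:

```python
# Clustering sketch: group households on a few assumed vulnerability indicators
# with k-means. Indicators and data are simulated for illustration only.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
indicators = np.column_stack([
    rng.normal(50, 15, 500),    # % of income spent on housing
    rng.integers(0, 2, 500),    # informal dwelling (0/1)
    rng.integers(1, 9, 500),    # household size
])
X = StandardScaler().fit_transform(indicators)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(np.bincount(labels))      # size of each provisional vulnerability group
```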

Convening risk agreement and institutional learning

Impact is objective, but risk depends on personal or group priorities; the value of property replacement, for example, has a different priority depending on whether or not you own property. Computational science supports interactive representations of complex urban impact scenarios, facilitating the quantification of subjective risk priorities by generating impact-weighting matrices that include the voice of marginalized groups in the local definition of disaster risk. Equipped with weighted risk definitions, attention turns to exposing the root causes of such risk in the choices and decisions behind any development plan. Dynamic digital visualizations of the impact metrics produced by simulation-based tools could help to elucidate the distribution of risk inherent in development planning and to diagnose risk drivers, inverting complex causal chains and exposing the underlying flaws in decision-making. In the case of the Tomorrow's Cities Hub, this is communicated to stakeholders through the web-based application. More formal inversions uncovering root causes from impact metrics are needed to clarify the diagnosis and reinforce evidence-based decision-making for risk reduction.
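A stylized version of such an impact-weighting matrix: each stakeholder group assigns weights to normalized impact metrics, yielding a group-specific risk score for a candidate scenario (all numbers invented):

```python
# Stylized impact-weighting matrix: rows are stakeholder groups, columns are
# normalized impact metrics; all numbers are invented.
import numpy as np

metrics = np.array([0.2, 0.7, 0.1])   # e.g. deaths, displacements, replacement cost (normalized)
weights = np.array([
    [0.6, 0.3, 0.1],                  # a group prioritizing life safety
    [0.3, 0.2, 0.5],                  # a group for whom replacement cost weighs more
])
risk_by_group = weights @ metrics     # one weighted risk score per group
print(risk_by_group)
```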

Focusing on the origins of risk in the decisions, policies and assumptions underpinning future development scenarios allows stakeholders to examine their choices and reflect on broader governance questions. Modifications to particular stakeholder priorities that are likely to lead to reduced risk are implemented in the digital development scenarios. These are then subjected to the same simulated hazard events to test the resulting risk reduction. The process is iterated, optimizing the future for lower risk, elucidating the effectiveness of governance processes and supporting evidence-based decision-making.

Original post:

Reducing disaster risk for the poor in tomorrow's cities with ... - Nature.com

The Bumpy Road Toward Global AI Governance – NOEMA – Noema Magazine

Credits

Miranda Gabbott is a writer based in Barcelona. She studied art history at Cambridge University.

Just about two and a half years ago, artificial intelligence researchers from Peking University in Beijing, the Beijing Academy of Artificial Intelligence and the University of Cambridge released a fairly remarkable paper about cross-cultural cooperation on AI ethics that received surprisingly little attention beyond the insular world of academics who follow such things. Coming to a global agreement on how to regulate AI, the paper argues, is not just urgently necessary, but notably achievable.

Commentaries on the barriers to global collaboration on AI governance often foreground tensions and follow the assumption that Eastern and Western philosophical traditions are fundamentally in conflict. The paper, also published in Chinese, takes the unconventional stance that many of these barriers may be shallower than they appear. There is reason to be optimistic, according to the authors, since misunderstandings between cultures and regions play a more important role in undermining cross-cultural trust, relative to fundamental disagreements, than is often supposed.

The narrative of a U.S.-China AI arms race sounded jingoistic and paranoid just a few years ago. Today, it is becoming institutionalized and borne out in policy in both countries, even as there has been growing recognition among researchers, entrepreneurs, policymakers and the wider public that this unpredictable, fast-growing and multiuse set of technologies needs to be regulated and that any effective attempt to do so would necessarily be global in scope.

So far, a range of public bodies, civil society organizations and industry groups have come forward with regulatory frameworks that they hope the whole world might agree on. Some gained traction but none have created anything like an enforceable global settlement. It seems possible that rivalry and suspicion between two great powers and their allies could derail any attempt at consensus.

Possible but not inevitable.

Getting policymakers from China and the U.S. around a table together is just the largest of many hurdles to a global agreement. Europe is likely to play a decisive role in shaping discussions. Though the EU is an ideological ally of the U.S., there are significant differences between the U.S. and the EU on strategic aims regarding AI regulation, with the former prioritizing innovation and the latter risk minimization.

More complex still, any global settlement on AI regulation that genuinely aspires to mitigate the negative consequences of this new technology must account for perspectives from regions often underrepresented in global discussions, including Africa, the Caribbean and Latin America. After all, it is overwhelmingly likely that the Global South will shoulder the brunt of the downsides that come with the age of AI, from the exploitative labeling jobs needed to train LLMs to extractive data mining practices.

Despite a thaw in the rivalry between Washington and Beijing remaining a distant prospect, there are still opportunities for dialogue, both at multilateral organizations and within epistemic communities.

A global settlement on AI ethics principles has clear advantages for all, since the effects of a transformational general-use technology will bleed across national and geographical boundaries. It is too far-reaching a tool to be governed on a nation-by-nation basis. Without coordination, we face a splinternet effect, wherein states develop and protect their technological systems to be incompatible with or hostile to others.

There are immediate dangers of technologists seeking an advantage by releasing new applications without pausing over ethical implications or safety concerns, including in high-risk fields such as nuclear, neuro and biotechnologies. We also face an arms race in the literal sense, with the development of military applications justified by great-power competition: the principle of "If they're doing it, we've got to do it first."

With stakes this high, there is, superficially at least, widespread goodwill to find common ground. Most national strategies claim an ambition to work together on a global consensus for AI governance, including policy documents from the U.S. and China. A paper released by the Chinese government last November called for an international agreement on AI ethics and governance frameworks, while fully respecting the principles and practices of different countries' AI governance, and one of the strategic pillars of a Biden administration AI research, development and strategy plan is international collaboration.

There are some prime opportunities to collaborate coming up this year and next, like the numerous AI projects under the U.N.'s leadership and next year's G7, which Giorgia Meloni, the Italian prime minister and host, suggested would focus on international regulations of artificial intelligence. This July, the U.N. Security Council held its first meeting dedicated to the diplomatic implications of AI, where Secretary-General António Guterres reiterated the need for a global watchdog, something akin to what the International Atomic Energy Agency does for nuclear technology.

Yet the disruptive influence of fraught relations over everything from the war in Ukraine to trade in advanced technologies and materials shows no sign of ending. U.S. politicians frequently and explicitly cite Chinese technological advancements as a national threat. In a meeting with Secretary of State Antony Blinken this June, top Chinese diplomat Wang Yi blamed Washington's wrong perception of China as the root of their current tensions and demanded the U.S. stop suppressing China's technological development.

Which is why the first of four arguments from Seán Ó hÉigeartaigh, Jess Whittlestone, Yang Liu, Yi Zeng and Zhe Liu that these problems are surmountable and a near-term settlement on international AI law is achievable is so important. In times of geopolitical tension, academics can often go where politicians can't. There are precedents for epistemic communities from feuding nations agreeing on shared solutions to mitigate global risks. "You can look back at the Pugwash Conference series during the Cold War," Ó hÉigeartaigh told me. "There were U.S. and U.S.S.R. scientists sharing perspectives all the way through, even when trust and cooperation at a government level seemed very far away."

Differences in ideas about governing ethics across cultural and national boundaries are far from insurmountable.

There is evidence that Chinese and U.S. academics working on AI today are keen to cooperate. According to Stanford University's 2022 AI Index report, AI researchers from the two countries teamed up on far more published articles than collaborators from any other pair of nations, though such collaborations have decreased as geopolitical tension between the two countries has increased. Such efforts, meanwhile, took place even amid threats to the lives and livelihoods of Chinese researchers living in or visiting the U.S.: in 2018, the Trump administration seriously debated a full ban on student visas for Chinese nationals, and in 2021, according to a survey of nearly 2,000 scientists, more than 42% of those of Chinese descent who were based in the U.S. reported feeling racially profiled by the U.S. government.

Although technology occupies a different place in Chinese society, where censorship has dominated since the early days, than in the U.S., which is still somewhat aligned with Californian libertarians and techno-utopians, Ó hÉigeartaigh and his colleagues' second argument is that these differences aren't so great that no values are held in common at all.

Western perceptions of the internet in China are frequently inaccurate, which can make invisible certain points of common ground. Take, for instance, the issue of data privacy. Many in the West assume that the Chinese state, hungry to monitor its citizens, allows corporations free rein to harvest users' information as they please. But according to China's Artificial Intelligence Industry Alliance (AIIA), a pseudo-official organization that includes top tech firms and research organizations, AI should adhere to the principles of legality, legitimacy and necessity when collecting and using personal information, as well as strengthen technical methods, ensure data security and be on guard against risks such as data leaks. In 2019, the Chinese government reportedly banned over 100 apps for user data privacy infringements.

In the U.S., meanwhile, policies on data privacy are a mess of disparate rules and regulations. There is no federal law on privacy that governs data of all types, and much of the data companies collect on civilians isn't regulated in any way. Only a small handful of states have comprehensive data protection laws.

This brings us to the third reason why a global settlement on AI regulation remains possible. Given the complexities of governing a multi-use technology, AI governance frameworks lean toward philosophical concepts, with similar themes emerging time and again: human dignity, privacy, explainability. These are themes that both countries share.

As China's AIIA puts it: "The development of artificial intelligence should ensure fairness and justice and avoid placing disadvantaged people in an even more unfavorable position." And the White House's draft AI Bill of Rights reads, in part, that those creating and deploying AI systems should "take proactive and continuous measures to protect individuals and communities from algorithmic discrimination and to use and design systems in an equitable way."

This is not to say that incompatibilities genuinely rooted in divergent philosophical traditions can be wished away, nor that shallow accords are any foundation for lasting agreements. Rather, the point is that there is often scope to agree on specific statements, even while arriving at them from different places and perhaps even while disagreeing on abstract principles.

Here again, academia has a valuable role to play. Scholars are working to understand how different ethical traditions shape AI governance and uncover areas where consensus can exist without curtailing culturally divergent views. Sarah Bosscha, a researcher who studies how European and Chinese AI legislation differs, told me that with respect to the EU, the greatest point of divergence is the absence of a parallel to the Confucian value of harmony, often interpreted as the moral obligation of an individual to the flourishing of their community. In China, following norms derived from Confucius, a person is not primarily an individual, but a family member, part of a social unit. This order of prioritization may clearly come into conflict with the supremacy in Europe (and even more so in America) of the individual.

But as Joseph Chan at the University of Hong Kong has argued, these are not mutually exclusive values. Chinese Confucianism, by his reading, can support multiple context-independent points of human rights. And the Universal Declaration of Human Rights contains collectivist aspects that carry similar meanings to the Confucian value of harmony: human beings should "act towards one another in a spirit of brotherhood" (Article 1) and have "duties to the community" (Article 29).

This overlap is borne out in policy documents, with a 2019 EU document outlining principles that emphasize community relations and containing a section on nondiscrimination against minorities. According to Bosscha, the European Union would do well to name harmony in its regulations and acknowledge its own investment in this value.

The Beijing AI Principles (2019), meanwhile, echo the language of human rights law, stating that human privacy, dignity, freedom and rights should be sufficiently respected. Though of course, China's deployment of AI and surveillance technologies against minorities reveals this commitment is far from full implementation.

A fourth line of reasoning in the paper by Ó hÉigeartaigh and his colleagues is that a noteworthy amount of the mistrust between the West and East is due to a rich history of misunderstandings. This is due, at least in part, to an asymmetrical language barrier. Scholars and journalists in China often have a strong command of English, the lingua franca of Western academia, and can access the work of their counterparts. Meanwhile, those working in the West rarely master Chinese languages. As such, knowledge-sharing often only flows one way, with English-speaking scholars and politicians alike almost entirely reliant on translations to access policy documents from China.

Political language is usually nuanced; its subtleties are rarely translatable in full. This is especially true in China. Translations of relatively ambiguous statements from Beijing on AI law have caused some high-stakes misunderstandings. For example, a 2017 Chinese AI development plan was largely interpreted by Western commentators as a statement of intent toward technological domination. This was partly thanks to a translation that was worded as a declaration of China becoming the world's primary AI innovation center by 2030. However, according to Fu Ying, a former Chinese diplomat, that was a misreading of the intent of the plan. What China wants to achieve, she wrote, is to become a global innovative center, not the only or exclusive center, clearly a gentler goal.

But apprehension based on the translation of the Chinese plan reverberated in the American tech community nonetheless. As Eric Schmidt, a former executive chairman of Google parent Alphabet, put it at a summit in 2017: "By 2030, they will dominate the industries of AI. Just stop for a sec. The [Chinese] government said that."

There is already an overlap in AI ethics frameworks between the two nations. And debunkable myths can inflate U.S. fears of China's technology strategies.

For Ó hÉigeartaigh, the reason global efforts to create shared regulation on AI are so vulnerable to derailment lies in asking who stands to benefit from crystallizing the narrative of a U.S.-China tech race from rhetoric to policy. "If there is a race," he told me, "it's between U.S. tech companies. I am concerned that the perspective of needing to stay ahead of China is used to justify pushing ahead faster than would be ideal."

In his view, many technologists are deliberately amplifying U.S.-China race rhetoric to justify releasing software as fast as possible, cutting corners on safety checks and ethical considerations.

Schmidt is the head of the National Security Commission on Artificial Intelligence and a highly influential proponent of the race against China viewpoint. For years, Schmidt has pushed the Pentagon to procure smarter software and invest in AI research while maintaining a strong preference for technology deregulation. Meanwhile, his venture capital firm has invested in companies that won multimillion-dollar contracts from federal agencies.

According to AI Now's 2023 report, the crux of the problem is that AI products and the businesses behind them are increasingly perceived as national assets. The continued global dominance of America's Big Tech companies (Google, Apple, Facebook, Amazon and Microsoft) is tied to U.S. economic supremacy. Any attempt to set limits on what those companies can develop or the data they can use risks ceding vital ground to Chinese companies, which are often presumed, falsely, to operate in a regulatory vacuum.

This argument has proved remarkably influential, particularly with regard to privacy regulations. In 2018, shortly after the Cambridge Analytica scandal, Mark Zuckerberg applied this line of reasoning to warn against strengthening data rights. In particular, he stated at a Senate hearing that implementing certain privacy requirements for facial recognition technology could increase the risk of American companies "fall[ing] behind Chinese competitors." Just last year, the executive vice president of the U.S. Chamber of Commerce argued that data privacy guidelines outlined within the AI Bill of Rights, intended to bring the U.S. closer to the EU's GDPR, were a bad idea when the U.S. is in a global race in the development and innovation of artificial intelligence. Needless to say, conflating deregulation with a competitive edge against China doesn't bode well for attempts to cooperate with its policymakers to agree on global regulations.

Fortunately, the U.S. government is not entirely batting on behalf of Big Tech. The Biden administration has taken clear steps to enforce competition with antitrust laws against the wishes of tech monopolists. A 2021 executive order declared that "The answer to the rising power of foreign monopolies and cartels is not the tolerance of domestic monopolization, but rather the promotion of competition and innovation by firms small and large, at home and worldwide."

So, despite a thaw in the rivalry between Washington and Beijing remaining a distant prospect, there are still opportunities for dialogue, both at multilateral organizations and within epistemic communities. As academics have shown, differences in ideas about governing ethics across cultural and national boundaries are far from insurmountable. There is already an overlap in AI ethics frameworks between the two nations. But unfortunately, durable myths continue to inflate U.S. fears of China's technology strategies.

Though the path to agreeing on a set of global ethical guidelines between rivals may be a bumpy road, there's nothing inevitable about the future direction this technological rivalry will take.

Read the original:

The Bumpy Road Toward Global AI Governance - NOEMA - Noema Magazine

July mining output falls to lowest level in four months – IOL

Mining output in South Africa will struggle to recover for the remainder of the year after unexpectedly slipping in July, falling to its lowest level in four months on the back of intensified power cuts and slow global demand.

Data from Statistics SA (Stats SA) yesterday showed that mining production plunged by 3.6% from a year ago following an upwardly revised 1.3% rise in June, defying market expectations of a 0.5% increase.

This was the steepest contraction in mining activity since February, with platinum group metals (PGMs), coal and diamonds being the biggest drags on growth.

Stats SA's principal survey statistician, Juan-Pierre Terblanche, said PGMs contracted by 10.4% following robust growth of 11.1% year-on-year in the previous month.

Terblanche said coal also fell by 7%, reflecting a deterioration from the 1.8% decline in June, while diamond production fell for the 10th consecutive month, by 33.4% year-on-year.

Nickel, manganese ore and chromium ore were also weak in the month, Terblanche said.

On the upside, miners in copper, gold and iron ore recorded a positive month. Iron ore reached its highest growth rate with production expanding by 13.8% year-on-year.

On a seasonally adjusted monthly basis, mining production decreased by 1.7% in July, following a downwardly revised 1.2% rise in the previous month.

In the year-to-date January to July period, mining output is down by 1.4% year-on-year, reflecting poor growth within the coal, iron ore and PGMs divisions.

However, output growth performance has been robust at 17.5% in the year-to-date in the gold division and modest in the manganese ore division at 2.9%.

Stats SA said the seasonally adjusted mining output is critical for the official calculation of quarterly GDP growth.

FNB senior economist Thanda Sithole said that this data, along with manufacturing output released on Monday, and electricity production painted a gloomy picture at the start of the third quarter. Sithole said it was also consistent with the general expectation of a GDP growth moderation after a better-than-expected 0.6% quarterly expansion in the second quarter.

"Overall, the mining sector remains challenged by domestic load-shedding intensity and logistics constraints, as well as moderating external demand, with growth challenges in China and Europe boding ill for export of critical commodities," Sithole said.

Commodity prices have decreased relative to last year, weighing on earnings and the mining sector's contribution to government tax revenue collection.

In addition to domestic energy and logistical challenges, South Africa's mining production has also been affected by the weakening global demand on the back of China's economic woes.

Investec economist Lara Hodes said the subdued global environment has weighed heavily on commodity demand, with the World Bank's metals and minerals index down around 13% in the year-to-date to end August.

Hodes said the fragile global economic environment, with a slower-than-projected rebound in demand from China, has weighed on diamond sales, while competition from the lab-grown diamond industry persists.

"Conversely, gold output has benefited from a sluggish greenback combined with geo-political tensions, which has seen investors seeking safe haven options," Hodes said.

Notwithstanding global factors, domestically the mining sector continues to deal with logistical impediments, while unreliable energy supply remains a primary operational hindrance.

Indeed, these key challenges continue to weigh heavily on SA's competitive position, impeding exports and deterring investment potential.

Read this article:

July mining output falls to lowest level in four months - IOL

Data mining of structured radiology reports yields advantageous … – Health Imaging

Datapoints contained in structured radiology reports can be readily mined to guide decisions around long-term clinical, business and population-health aims, according to a study conducted in Germany and published in Abdominal Radiology [1].

The project examined the approach as applied to a cohort of patients with kidney stones, but the authors suggest their technique is likely generalizable to various other diagnoses.

Among the decisions and planning that such mined data might inform, the team names quality assurance, radiation protection, and scientific and economic investigations.

"These possibilities add to the long list of advantages of structured reporting over free-text reporting and underline the necessity of structured reporting usage," the authors write before acknowledging one significant caveat:

"Structured reporting in routine practice may be elaborate since it requires radiologists to adapt to reporting templates that have to be filled using a mouse and keyboard rather than a dictation system."

For the study, interventional radiologist Tobias Jorg and colleagues at Johannes Gutenberg University Mainz investigated numerous aspects of kidney stones by mining data from the structured radiology reports of 2,028 patients. All underwent abdominopelvic CT for suspected urolithiasis, a common condition in which stones move from the kidneys to the ureters, the bladder and, finally, the urethra.

Some 72% of the cohort proved positive for urolithiasis, a figure for which the authors credit the astuteness of the referring clinicians. The sex distribution was 2.3 men for each woman, the median age was 50, and the median stone count was one.
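The study's reporting template is not reproduced in this summary, but the general idea is that once report fields are structured they can be loaded into a table and queried directly; a hedged sketch with invented fields and values:

```python
# Hedged sketch: structured report fields loaded into a table and queried
# directly. The fields and values are invented, not the study's template.
import pandas as pd

reports = pd.DataFrame([
    {"age": 47, "sex": "m", "stones_present": True,  "stone_count": 1, "largest_stone_mm": 4.0},
    {"age": 63, "sex": "f", "stones_present": False, "stone_count": 0, "largest_stone_mm": None},
    {"age": 52, "sex": "m", "stones_present": True,  "stone_count": 2, "largest_stone_mm": 7.0},
])

positivity_rate = reports["stones_present"].mean()
median_age = reports["age"].median()
print(f"positive for urolithiasis: {positivity_rate:.0%}, median age: {median_age:.0f}")
```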

More:

Data mining of structured radiology reports yields advantageous ... - Health Imaging

Postdoctoral Research Fellow in Graph Data Mining job with … – Times Higher Education

We are looking for an outstanding early career academic with an excellent track record of research in Graph Data Mining. The successful applicant will be expected to supervise Higher Degree Research students as well as lead and collaborate on research projects in Computer Science. Candidates with relevant research experience in Graph Neural Networks, Graph-Level Learning, Graph Mining and Brain Networks are strongly encouraged to apply.

The successful candidate will demonstrate an excellent track record in research, as well as personal and professional skills that can enable service and leadership contributions that will help to strengthen the impact and reputation of the University.

About Us

You will be joining a growing School of Computing with a fast-rising reputation (top 130 in the 2024 QS World University Rankings). The School is currently the home of 42 academic staff and more than 100 research students, and an ever-growing cohort of undergraduates and postgraduate coursework students. The School offers a broad range of cutting-edge undergraduate and postgraduate courses and is home to a dedicated computing precinct with state-of-the-art computer and specialised teaching labs.

For more information about the Faculty, please visit https://www.mq.edu.au/faculty-of-science-and-engineering.

How to Apply

To be considered for this position, please apply online by submitting your CV and a separate document responding to the selection criteria below:

Essential Criteria

Salary Package: Level A Step 6 (PhD) from $97,621 - $104,622 p.a., plus 17% employer's superannuation and annual leave loading.

Appointment Type: Full-time, fixed term for 1.5 years.

Enquiries: Dr. Jia Wu, School of Computing at jia.wu@mq.edu.au

See the rest here:

Postdoctoral Research Fellow in Graph Data Mining job with ... - Times Higher Education

Two Key Genes Revealed in Chemotherapy Resistance – Neuroscience News

Summary: Researchers identified two genes, NEK2 and INHBA, responsible for causing chemotherapy resistance in head and neck cancer patients. Remarkably, when these genes are silenced, previously resistant cancer cells begin responding to chemotherapy.

From a chemical library, the team pinpointed two substances (Sirodesmin A and Carfilzomib) that can target these genes, making resistant cells vastly more receptive to the chemotherapy drug cisplatin.

This groundbreaking study paves the way for more personalized cancer treatments, offering hope to patients resistant to conventional therapies.

Key Facts:

Source: Queen Mary University London

Scientists from Queen Mary University of London have discovered two new genes that cause head and neck cancer patients to be resistant to chemotherapy, and that silencing either gene can make cancer cells previously unresponsive to chemotherapy subsequently respond to it.

The two genes discovered actively work in most human cancer types, meaning the findings could potentially extend to other cancers with elevated levels of the genes.

The researchers also looked through a chemical library, commonly used for drug discovery, and found two substances that could target the two genes specifically and make resistant cancer cells almost 30 times more sensitive to a common chemotherapy drug called cisplatin.

They do this by reducing the levels of the two genes and could be given alongside existing chemotherapy treatment such as cisplatin. One of these substances, Sirodesmin A, is a fungal toxin; the other, Carfilzomib, comes from a bacterium.

This shows that there may be existing drugs that can be repurposed to target new causes of disease, which can be cheaper than having to develop and produce new ones.

The research, led by Queen Mary and published in Molecular Cancer, is the first evidence for the genes NEK2 and INHBA causing chemoresistance in head and neck squamous cell carcinoma (HNSCC) and gene silencing of either gene overturning chemoresistance to multiple drugs.

The scientists first used a method known as data mining to identify genes that may affect tumour responsiveness to drug therapy. They tested 28 genes on 12 strains of chemoresistant cancer cell lines, finding four genes of particular significance, which they then investigated further and tested for multidrug resistance.
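The paper's analysis pipeline is not reproduced here, but a generic rescue-screen readout of this kind can be sketched as comparing drug-treated cell viability after silencing each candidate gene against a non-targeting control across the resistant strains; the gene effects and numbers below are simulated purely for illustration:

```python
# Generic sketch of a rescue-screen readout (not the study's actual pipeline):
# compare drug-treated cell viability after silencing each candidate gene with
# a non-targeting control across resistant strains. Data are simulated.
import numpy as np
import pandas as pd
from scipy.stats import ttest_ind

rng = np.random.default_rng(2)
control = rng.normal(0.85, 0.05, 12)   # viability of 12 resistant strains, control siRNA
screen = pd.DataFrame({
    gene: rng.normal(mean_viability, 0.05, 12)
    for gene, mean_viability in {"NEK2": 0.35, "INHBA": 0.45, "GENE_X": 0.83}.items()
})

for gene in screen:
    stat, p = ttest_ind(screen[gene], control)
    print(f"{gene}: mean viability {screen[gene].mean():.2f}, p = {p:.1e}")
```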

Dr Muy-Teck Teh, senior author of the study from Queen Mary University of London, said: "These results are a promising step towards cancer patients in the future receiving personalised treatment, based on their genes and tumour type, that gives them a better survival rate and treatment outcome.

"Unfortunately, there are lots of people out there who do not respond to chemotherapy or radiation. But our study has shown that, in head and neck cancers at least, it is these two particular genes that could be behind this, which can then be targeted to fight against chemoresistance.

"Treatment that doesn't work is damaging both for the NHS and patients themselves. There can be costs associated with prolonged treatment and hospital stays, and it's naturally extremely difficult for people with cancer when their treatment doesn't have the results they are hoping for."

90% of all head and neck cancers are HNSCCs, with tobacco and alcohol use being key associations. There are 12,422 new cases of head and neck cancer each year, and the overall 5-year survival rate of patients with advanced HNSCC is less than 25%. A major cause of poor survival rates in HNSCC is treatment failure that stems from resistance to chemotherapy and/or radiotherapy.

Unlike lung and breast cancer patients, all HNSCC patients are treated with almost the same combinations of treatment irrespective of the genetic makeup of their cancer.

Author: Laurence Leong. Source: Queen Mary University London. Contact: Laurence Leong, Queen Mary University London. Image: The image is credited to Neuroscience News.

Original Research: Open access. "Identification of multidrug chemoresistant genes in head and neck squamous cell carcinoma cells" by Muy-Teck Teh et al., Molecular Cancer.

Abstract

Identification of multidrug chemoresistant genes in head and neck squamous cell carcinoma cells

Multidrug resistance renders treatment failure in a large proportion of head and neck squamous cell carcinoma (HNSCC) patients that require multimodal therapy involving chemotherapy in conjunction with surgery and/or radiotherapy. Molecular events conferring chemoresistance remain unclear.

Through transcriptome datamining, 28 genes were subjected to pharmacological and siRNA rescue functional assays on 12 strains of chemoresistant cell lines each against cisplatin, 5-fluorouracil (5FU), paclitaxel (PTX) and docetaxel (DTX).

Ten multidrug chemoresistance genes (TOP2A, DNMT1, INHBA, CXCL8, NEK2, FOXO6, VIM, FOXM1B, NR3C1 and BIRC5) were identified. Of these, four genes (TOP2A, DNMT1, INHBA and NEK2) were upregulated in an HNSCC patient cohort (n=221). Silencing NEK2 abrogated chemoresistance in all drug-resistant cell strains. INHBA and TOP2A were found to confer chemoresistance in the majority of the drug-resistant cell strains, whereas DNMT1 showed heterogeneous results.

Pan-cancer Kaplan-Meier survival analysis on 21 human cancer types revealed significant prognostic values for INHBA and NEK2 in at least 16 cancer types. Drug library screens identified two compounds (Sirodesmin A and Carfilzomib) targeting both INHBA and NEK2 and re-sensitised cisplatin-resistant cells.

We have provided the first evidence for NEK2 and INHBA in conferring chemoresistance in HNSCC cells and siRNA gene silencing of either gene abrogated multidrug chemoresistance. The two existing compounds could be repurposed to counteract cisplatin chemoresistance in HNSCC.

This finding may lead to novel personalised biomarker-linked therapeutics that can prevent and/or abrogate chemoresistance in HNSCC and other tumour types with elevated NEK2 and INHBA expression. Further investigation is necessary to delineate their signalling mechanisms in tumour chemoresistance.

See more here:

Two Key Genes Revealed in Chemotherapy Resistance - Neuroscience News

The Global Mining Drills Market is forecasted to grow by USD 5267.61 mn during 2022-2027, accelerating at a CAGR of 7.61% during the forecast period -…

ReportLinker

Global Mining Drills Market 2023-2027. The mining drills market is forecasted to grow by USD 5267.61 mn during 2022-2027, accelerating at a CAGR of 7.61% during the forecast period.

New York, Sept. 04, 2023 (GLOBE NEWSWIRE) -- Reportlinker.com announces the release of the report "Global Mining Drills Market 2023-2027" - https://www.reportlinker.com/p05242276/?utm_source=GNW

The report on the mining drills market provides a holistic analysis, market size and forecast, trends, growth drivers, and challenges, as well as vendor analysis covering around 25 vendors. The report offers an up-to-date analysis regarding the current market scenario, the latest trends and drivers, and the overall market environment. The market is driven by an increase in demand for precious metals, a rise in demand for housing projects globally, and growing mineral and metal exploration activities.

The mining drills market is segmented as below:

By Product: hydraulic breakers, rock breakers, crawler drills, rotary drills

By Application: surface mining drills, underground mining drills

By Geography: APAC, North America, Europe, South America, Middle East and Africa

This study identifies the rise in automation in mining as one of the prime reasons driving mining drills market growth during the next few years. Also, an increase in environment-friendly mining equipment and processes and increasing demand for customized mining drills will lead to sizable demand in the market. The report on the mining drills market covers the following areas: mining drills market sizing, mining drills market forecast, and mining drills market industry analysis.

The robust vendor analysis is designed to help clients improve their market position, and in line with this, this report provides a detailed analysis of several leading mining drills market vendors that include Atlas Copco AB, Boart Longyear Ltd., Caterpillar Inc., FLSmidth and Co. AS, FURUKAWA Co. Ltd., Geodrill Ltd., Hitachi Ltd., Komatsu Ltd., Matrix Design Group LLC, Metso Outotec Corp., Murray and Roberts Holdings Ltd., Robit Plc, ROCKMORE International Inc., Sandvik AB, Sulzer Management Ltd., TEI Rock Drills, and Universal Field Robots. Also, the mining drills market analysis report includes information on upcoming trends and challenges that will influence market growth. This is to help companies strategize and leverage all forthcoming growth opportunities.

The study was conducted using an objective combination of primary and secondary information, including inputs from key participants in the industry. The report contains a comprehensive market and vendor landscape in addition to an analysis of the key vendors. The publisher presents a detailed picture of the market by way of study, synthesis, and summation of data from multiple sources and an analysis of key parameters such as profit, pricing, competition, and promotions. It presents various market facets by identifying the key industry influencers. The data presented is comprehensive, reliable, and a result of extensive research, both primary and secondary. The market research reports provide a complete competitive landscape and an in-depth vendor selection methodology and analysis using qualitative and quantitative research to forecast accurate market growth.

Read the full report: https://www.reportlinker.com/p05242276/?utm_source=GNW

About Reportlinker

ReportLinker is an award-winning market research solution. Reportlinker finds and organizes the latest industry data so you get all the market research you need - instantly, in one place.



Original post:

The Global Mining Drills Market is forecasted to grow by USD 5267.61 mn during 2022-2027, accelerating at a CAGR of 7.61% during the forecast period -...

US (NE): Agronomy and Horticulture seminar series begins … – Verticalfarmdaily.com: global indoor farming news

The fall Agronomy and Horticulture seminar series starts with "Experiences and Lessons in Growing an Impactful, Local On-Farm Research Program in South Central Nebraska," presented by Nebraska Cropping Systems Extension Educator Sarah Sivits on September 8.

Sivits will present the steps she has taken and the lessons learned in her role as a Nebraska Extension Educator growing a locally dynamic on-farm research presence over six to seven years in south central Nebraska as part of the Nebraska On-Farm Research Network.

This seminar will be in Keim Hall, Room 150, and streamed live.

All seminars are free and open to the public. Seminars will be in person, streamed live at 3:30 p.m. CST/CDT, and recorded unless otherwise noted. Refreshments will be served at 3 p.m.

"With four external speakers and 10 distinguished representatives from our department, IANR, and the University of Nebraska-Lincoln, we will be covering a variety of topics related to research, extension, and teaching. You'll get to hear from faculty members, grad students, postdocs, and alumni," said Guillermo Balboa, co-chair of the Agronomy and Horticulture Seminar Committee.

Dates and topics for the rest of the series are as follows:

September 15: "From Data Mining to Pleiotropic Effects, Environmental Interactions, and Phenomic Predictions of Natural Genetic Variants in Sorghum and Maize," Ravi Mural, research assistant professor, Department of Agronomy and Horticulture, Center for Plant Science Innovation, University of Nebraska-Lincoln

September 22: "Scaling On-Farm Research in Image-Based Fertigation with Customer-Driven Development," Jackson Stansell, founder and CEO, Sentinel Fertigation, Lincoln, Nebraska

September 29: "Tracking Invisible Threats: A Comprehensive Study of Brucellosis and Leptospirosis Infectious Diseases at the Human-Livestock-Wildlife Interface in Tanzania, East Africa," Shabani Muller, graduate research assistant, School of Natural Resources, University of Nebraska-Lincoln

October 6: "RNA Interference for Insect Pest Management," Ana Maria Velez, professor, Department of Entomology, University of Nebraska-Lincoln

October 12: "Delivering Soil Health Knowledge to the Farmer," Cristine Morgan, chief science officer, Soil Health Institute, Morrisville, North Carolina, and adjunct professor, Texas A&M University

October 20: "Where and How Can Instructors Assess Science Practices in Undergraduate Biology Courses?," Brian Couch, Susan J. Rosowski associate professor, School of Biological Sciences, University of Nebraska-Lincoln

October 27: "The Land-Grant Water & Cropping System Educator: Insights, Opportunities, and Challenges," Nathan Mueller, extension water and cropping systems educator, Nebraska Extension, University of Nebraska-Lincoln

November 3: "Open Data for Improved Cropland Nutrient Budgets and Nutrient Use Efficiency Estimations," Cameron Ludemann, researcher, Wageningen University and Research, Netherlands

November 10: "Linking the Modification of Biochar Surface by Iron Oxides Under Field Conditions With Enhanced Nitrate Retention," Britt Fossum, agronomy doctoral student in environmental studies, University of Nebraska-Lincoln

November 17: "Exploring Maize Resilience Through Genetics, Phenomics, and Canopy Architecture," Addie Thompson, assistant professor, Department of Plant, Soil and Microbial Sciences, Michigan State University. Cohosted with CROPS, a graduate student and postdoc group funded and supported through the Center for Plant Science Innovation. Social following the seminar.

December 1: "Tough Pests Call for Team Solutions: Building a Coalition for Wheat Stem Sawfly," Katherine Frels, assistant professor, Department of Agronomy and Horticulture, University of Nebraska-Lincoln

December 8: "One Health: Linking Human, Animal, Plant, and Ecosystem Health in Nebraska and Beyond," Liz VanWormer, director, Nebraska One Health, and associate professor, School of Veterinary Medicine and Biomedical Sciences and School of Natural Resources, University of Nebraska-Lincoln

December 15: "Experiential Learning and Community Engagement in SCIL 101," Jenny Dauer, associate director for undergraduate education and associate professor in science literacy, School of Natural Resources, University of Nebraska-Lincoln.

Source: agronomy.unl.edu

Follow this link:

US (NE): Agronomy and Horticulture seminar series begins ... - Verticalfarmdaily.com: global indoor farming news

Business intelligence technology and e-commerce | Mint – Mint

Business Intelligence (BI) plays a critical role in assisting organizations to make informed decisions and gain a competitive edge in today's dynamic business environment. With the increasing significance of e-commerce as a business medium, BI has undergone continuous transformation to meet the evolving needs of the industry. The evolution of BI started from its roots in Management Information Systems (MIS) and progressed to the integration of external data and advanced analytical capabilities. The goal is to provide current and relevant information for businesses to thrive in challenging economic times.

A Business Intelligence process involves transforming data into information and subsequently into knowledge that can be used for decision-making. Technology and human capital both play pivotal roles in leveraging data to gain a competitive advantage. BI is a diverse field encompassing data analysis, data mining, querying, and management reporting. Its primary purpose is to support a wide range of business applications and aid strategic decision-making in organizations.

Personalized Marketing: Customer segmentation is essential for delivering tailored marketing strategies that result in higher conversion rates and customer satisfaction. BI tools like Tableau, Power BI, Qlik Sense, Google Analytics, Adobe Analytics, RapidMiner, and Domo allow businesses to segment their audiences based on buying behaviour, demographics and preferences.
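Those platforms expose segmentation through their own interfaces; the underlying logic can be sketched in a few lines of recency-frequency-monetary (RFM) analysis, with invented data and an invented "high value" rule:

```python
# Illustrative RFM-style segmentation with pandas; the order table and the
# "high value" rule are assumptions, not a specific tool's method.
import pandas as pd

orders = pd.DataFrame({
    "customer_id": [1, 1, 2, 3, 3, 3],
    "order_date": pd.to_datetime(["2024-01-05", "2024-03-01", "2023-11-20",
                                  "2024-02-14", "2024-02-28", "2024-03-10"]),
    "amount": [40, 55, 120, 20, 35, 25],
})

snapshot = orders["order_date"].max() + pd.Timedelta(days=1)
rfm = orders.groupby("customer_id").agg(
    recency_days=("order_date", lambda d: (snapshot - d.max()).days),
    frequency=("order_date", "count"),
    monetary=("amount", "sum"),
)
# Simple segment rule: recent and above-median spenders are "high value".
is_high_value = (rfm["recency_days"] <= 60) & (rfm["monetary"] >= rfm["monetary"].median())
rfm["segment"] = is_high_value.map({True: "high value", False: "nurture"})
print(rfm)
```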

Inventory Management: Effective inventory management is essential for ensuring that a business has the right amount of inventory on hand to meet customer demand while minimizing carrying costs and preventing overstock or stockouts. By analysing previous sales data and market patterns, BI solutions such as Microsoft Power BI, Oracle BI, Looker, Sisense, and Zoho Analytics help forecast future trends and demand.
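A minimal forecasting sketch using exponential smoothing of monthly unit sales (the platforms above wrap much richer models; the series here is invented):

```python
# Minimal demand-forecast sketch: exponential smoothing of monthly unit sales
# to project the next period. Numbers are invented for illustration.
import pandas as pd

sales = pd.Series(
    [120, 135, 128, 150, 162, 158, 171],
    index=pd.period_range("2023-01", periods=7, freq="M"),
    name="units_sold",
)
smoothed = sales.ewm(alpha=0.5).mean()     # exponentially weighted average
next_month_forecast = smoothed.iloc[-1]    # naive one-step-ahead projection
print(f"forecast for next month: {next_month_forecast:.0f} units")
```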

Conversion Rate Optimization: Google Analytics, Adobe Analytics, Mixpanel, Heap Analytics, Crazy Egg, and Hotjar are some of the most popular tools for identifying bottlenecks in the purchasing process by analysing user behaviour on e-commerce platforms. These insights aid in the optimisation of websites for improved conversion rates, converting users into customers.
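The bottleneck analysis these tools surface boils down to step-to-step conversion rates along the purchase funnel; a small sketch with invented event counts:

```python
# Funnel sketch: step-to-step conversion rates from event counts, the kind of
# breakdown the analytics tools above surface. Counts are illustrative.
import pandas as pd

funnel = pd.Series(
    {"product_view": 10_000, "add_to_cart": 2_400, "checkout": 900, "purchase": 610}
)
step_conversion = funnel / funnel.shift(1)     # conversion from the previous step
overall_conversion = funnel.iloc[-1] / funnel.iloc[0]
print(step_conversion.round(3))
print(f"overall view-to-purchase conversion: {overall_conversion:.1%}")
```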

Competitive Insights: The macro environment, including competitors, is equally crucial in driving the business. SimilarWeb, SEMrush, SpyFu, Alexa, and Crayon are BI tools that provide e-commerce organisations with insight into their competitors' strategies. Businesses may stay competitive by keeping track of competitor pricing, product offerings, and customer reviews.

The success of BI initiatives depends on several factors which include user access, cross-functional integration, data quality, leadership commitment, analytical decision-making ability, flexibility, and user satisfaction. A holistic approach to BI implementation is crucial for transforming business processes and achieving organizational goals.

With the constant evolution of BI technologies and the availability of large data resources, organizations can gain a competitive edge by harnessing data-driven insights and enabling them to respond quickly to market shifts. As the business landscape continues to change rapidly, BI will continue to be the driving engine for e-commerce businesses, enabling them to adapt, innovate, and excel in their endeavours.

(Ashish Kumar Biswas is assistant professor at NMIMS, Hyderabad)

Read more from the original source:

Business intelligence technology and e-commerce | Mint - Mint