Research highlights
Our academics work with industry, government and community organisations on specific projects, transferring our research into practice.
We’re making an impact that matters with research across many areas.
Summary of the impact
Ground-breaking secure low-cost Internet of Things (IoT) solutions developed at UNSW since 2015 have been deployed by WBS Technology, an Australian company specialising in smart building technology, in over fifty smart buildings in Brisbane, Melbourne and Sydney. This has improved operational efficiency in the buildings and increased industry and service-provider awareness of new IoT technologies made possible by LoRa, a popular communication technology of emerging low-power wide-area networks.
Underpinning research and development
Associate Professor Hu, Professor Kanhere and Professor Jha at UNSW pioneered IoT communication research that led to secure, inexpensive full-coverage wireless meshed network technology based on LoRa, a proprietary low-power wide-area network modulation technique. The production system of the UNSW technology, developed by WBS Technology, won the Australian Computer Society Gold Disruptor award in 2018. Hu and Jha were among the first to detail a systematic design methodology, along with seamlessly integrated networking and security, for IoT. The research is considered by others in the field to be pioneering in introducing the LoRa mesh network.
When emergency lights in a building are connected to the Internet, regulated emergency-light testing and maintenance can be conducted remotely and with greater efficiency. And by exploiting these evenly distributed lights, an inexpensive, secure and robust wireless communication infrastructure can be established. The research was extended by Hu and his team and applied to an indoor-farming IoT project funded by the Future Food Systems Cooperative Research Centre. This project developed a framework that allows third-party sensors and control systems in indoor farms and greenhouses to be seamlessly integrated into a secure LoRa meshed network backbone over Bluetooth Low Energy (a form of Bluetooth with low power consumption and, therefore, low cost). The solution allows climate conditions in indoor farms and greenhouses to be remotely monitored and controlled, increasing crop yield and reducing resource and energy consumption.
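To make the integration concrete, here is a minimal sketch of what one forwarding step on such a bridge node might look like. All names in it (ble_read_sensor, aes128_encrypt, lora_mesh_send, the 16-byte payload format) are hypothetical illustrations, not the UNSW/WBS API:

```c
/* Hypothetical sketch of a BLE-to-LoRa bridge node. All function
 * names (ble_read_sensor, aes128_encrypt, lora_mesh_send) are
 * illustrative placeholders, not the UNSW/WBS system's API. */
#include <stdint.h>
#include <string.h>

#define PAYLOAD_LEN 16

/* Placeholder declarations for the radio and crypto layers. */
int  ble_read_sensor(uint8_t sensor_id, float *value);              /* hypothetical */
void aes128_encrypt(const uint8_t key[16], uint8_t buf[16]);        /* hypothetical */
int  lora_mesh_send(uint16_t dest, const uint8_t *buf, size_t len); /* hypothetical */

static const uint8_t network_key[16] = { /* provisioned per device */ 0 };

/* Poll one BLE sensor and forward its reading over the LoRa mesh. */
int bridge_once(uint8_t sensor_id, uint16_t gateway_addr)
{
    float reading;
    if (ble_read_sensor(sensor_id, &reading) != 0)
        return -1;                              /* sensor unreachable */

    uint8_t payload[PAYLOAD_LEN] = {0};
    payload[0] = sensor_id;
    memcpy(&payload[1], &reading, sizeof reading);

    aes128_encrypt(network_key, payload);       /* keep the hop confidential */
    return lora_mesh_send(gateway_addr, payload, PAYLOAD_LEN);
}
```

The design point the sketch illustrates is that the BLE hop stays local to the greenhouse or room, while the LoRa mesh provides the secure building- or site-wide backhaul.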
References to the research
- J. Y. Kim, W. Hu, H. Shafagh and S. Jha: "SEDA: Secure Over-the-Air Code Dissemination Protocol for the Internet of Things", IEEE Transactions on Dependable and Secure Computing (TDSC), vol. 15, no. 6, pp 1041-1054, 2018.
- Weitao Xu, Jin Zhang, Jun Young Kim, Walter Huang, Salil Kanhere, Sanjay Jha and Wen Hu: "The Design, Implementation, and Deployment of a Smart Lighting System for Smart Buildings" IEEE Internet of Things Journal, vol. 6, no. 4, pp 7266-7281, Aug. 2019.
- J. Y. Kim, W. Hu, D. Sarkar and S. Jha: "ESIoT: Enabling Secure Management of the Internet of Things", Proceedings of the 10th ACM Conference on Security and Privacy in Wireless and Mobile Networks (WiSec), pp 219-229, doi:10.1145/3098243.3098252, 2017.
- J. Liu, W. Xu, S. Jha and W. Hu: "Nephalai: Towards LPWAN C-RAN with Physical Layer Compression", Proceedings of the 26th Annual International Conference on Mobile Computing and Networking (MobiCom), 2020.
- J. Gao, W. Xu, S. Kanhere, S. Jha, W. Gong, J. Kim, W. Huang and W. Hu: "A Novel Model-Based Security Scheme for LoRa Key Generation", 20th ACM/IEEE International Conference on Information Processing in Sensor Networks (IPSN), 2021.
Summary of the impact
Security is a crucial technology design consideration, and L4 microkernels have long set the bar for operating system (OS) enforcement of security. The latest member of the L4 family, the seL4 microkernel, designed and developed at UNSW and what was then NICTA (National ICT Australia), was the first ever OS kernel with a machine-checked formal (mathematical) proof of implementation correctness, i.e. freedom from bugs. It is still the OS kernel with the strongest formal assurance story, connecting the executable binary code to high-level statements of security enforcement. seL4 is being deployed in security- and safety-critical systems around the world, from communication devices for national security to space satellites, critical infrastructure and commercial cars.
Beyond that, the seL4 project has had massive indirect impact. By demonstrating that formal verification can be applied to real-world systems, the work has triggered a wealth of other verification work in academia as well as industry. seL4 has won many awards, including the ACM SIGOPS Hall of Fame Award and the ACM Software System Award. seL4-based products have won iAwards and Eureka Awards. Several large research programs of the US Defense Advanced Research Projects Agency (DARPA), as well as the German Cyberagentur, directly build on seL4 and contribute to the further development of its ecosystem.
Underpinning research and development
Driven by the desire to make computer systems more dependable, UNSW Scientia Professor and John Lions Chair of Operating Systems Gernot Heiser and his Trustworthy Systems Research Group have developed several microkernels. They include the L4-embedded microkernel, marketed as OKL4 by Heiser's start-up company, Open Kernel Labs (acquired by General Dynamics in 2012); a variant of OKL4 is deployed in billions of Apple iOS devices.
The seL4 microkernel was the logical next step in the evolution of L4 microkernels: a complete re-design, based on the experience of the earlier work, with the explicit aim of mathematically proving its correct operation and security enforcement.
A microkernel is the minimum amount of privileged-mode software required to implement the OS of a computing device. Unlike traditional kernels, such as Linux and Windows, a microkernel design moves functionality such as device drivers, file systems and networking into user space (i.e., executing in the unprivileged mode of the hardware) rather than containing it within a monolithic kernel. This approach has several advantages. Code that runs in privileged mode can bypass any security mechanism, so restricting privileged execution to a microkernel of around 10,000 lines of code dramatically reduces the 'attack surface' compared to a monolithic kernel of tens of millions of lines of code. Another advantage is that the modular system design enabled and encouraged by microkernels results in increased stability: if one of the system functions deployed in user space fails, it will not affect the entire OS. For the end user this increases the level of security of any device, reducing the likelihood of hacking or unauthorised access. The key challenge is to make the microkernel both robust and fast.
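As a concrete illustration of this structure, the sketch below shows a client invoking a user-level driver through seL4 IPC. The libsel4 primitives used (seL4_MessageInfo_new, seL4_SetMR, seL4_GetMR, seL4_Call) are the kernel's real API; the endpoint capability and the READ_SECTOR protocol label are invented for the example:

```c
/* Minimal sketch: a client asks a user-level block-device driver to
 * read a sector, via seL4 IPC. The libsel4 calls are real; the
 * endpoint cap and the READ_SECTOR label are illustrative only. */
#include <sel4/sel4.h>

#define READ_SECTOR 1   /* hypothetical protocol label */

seL4_Word read_sector(seL4_CPtr driver_ep, seL4_Word sector)
{
    /* One message register carries the sector number. */
    seL4_MessageInfo_t info = seL4_MessageInfo_new(READ_SECTOR, 0, 0, 1);
    seL4_SetMR(0, sector);

    /* seL4_Call sends to the endpoint and blocks for the reply.
     * The kernel only moves the message; the driver does the work
     * in unprivileged mode, so a driver crash cannot take down
     * the kernel or unrelated components. */
    info = seL4_Call(driver_ep, info);
    return seL4_GetMR(0);   /* driver's status/result */
}
```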
Research on the L4 microkernel as a dependable OS platform has been conducted at UNSW under the leadership of Professor Heiser since the mid-90s. With the creation of NICTA in 2003, the focus shifted to securing embedded systems: developing an OS microkernel that could address challenges anticipated in the development of embedded systems software. Heiser predicted that the growing functionality and complexity of embedded systems would mean that unprotected real-time operating systems, traditionally used in embedded systems, would soon reach their use-by date. This was identified as an opportunity to push modern OS technology into the rapidly changing space of embedded systems, such as mobile phones.
Heiser's team released the L4-embedded microkernel as open source in order to ease uptake. Early adoption by Qualcomm, the leading manufacturer of wireless communication chips for mobile phones and other wireless communication devices, led to Heiser creating a start-up company, Open Kernel Labs (OK Labs). The company was headquartered in the US but retained its engineering in Sydney. It marketed the microkernel technology under the product name OKL4. By 2008, OKL4 was shipping on all Qualcomm communication chips, representing deployment of more than half a billion units per year. Since 2013, an adaptation of OKL4 has also been shipping on the Secure Enclave of all Apple iOS devices (iPhones, iPads, etc.), representing more than 350 million further deployments per year. OKL4 was also designed into a number of military-grade secure communication devices which were adopted by the national security services of several countries.
OK Labs was acquired by US-based General Dynamics in 2012. At the time of acquisition, it employed about 30 staff, the majority based in Sydney. The original OK Labs engineering team soon created several new companies.
In parallel with commercial deployment, research continued with the aim of establishing provable trustworthiness. This led to the development of a revolutionary new microkernel, seL4, the first OS kernel with a formal (mathematical) proof of implementation correctness - the proven absence of bugs. This had been attempted since the 1970s, and was finally made possible by combining a careful design of the seL4 system with advances in formal verification techniques. The result made headlines around the technology-development world. In 2011 the work was featured in MIT Tech Review's TR10 list of the year's ten innovations expected to have the greatest future impact. The publication [1] in the top OS conference, SOSP, has attracted 2,700 citations (by 2023), won the ACM SIGOPS Hall of Fame Award in 2019, and the ACM Software System Award in 2023.
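What "implementation correctness" means can be stated compactly. The proof is, in essence, a refinement theorem; the following is a generic sketch of that statement, not the papers' exact formalisation:

```latex
% Sketch only: a generic refinement statement, not the seL4 papers'
% exact formulation. A is the abstract specification, C the kernel's
% C implementation, and \mathcal{B}(\cdot) the set of observable
% behaviours. Functional correctness is the refinement
\mathcal{B}(C) \subseteq \mathcal{B}(A)
% Every behaviour of the implementation is permitted by the
% specification, so safety properties proved of A carry over to C.
```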
This was followed by more world-firsts: proofs that seL4 enforces the core security properties of confidentiality, integrity and availability, and the first complete and sound worst-case execution-time analysis of a protected-mode operating system.[2]
Remarkably, all this was achieved without compromising performance: seL4 is not only the world's most secure microkernel, it is also the fastest.
In 2014 the seL4 work directly motivated a large research program by the US Defense Advanced Research Projects Agency (DARPA), where the creators of seL4 collaborated with US companies Boeing, Rockwell Collins, Galois and the University of Minnesota to transition the seL4 technology to real-world military systems. These included a Boeing-built autonomous helicopter, an autonomous US Army truck, a smart soldier helmet, and a space satellite platform for the US Navy. In 2021, DARPA offered a research model of the helicopter for a "steal this drone" challenge at the DEFCON hacker conference - no one succeeded in hacking it.
The creation of the seL4 Foundation in 2020 established an open and neutral organisation to guide, coordinate and fund ongoing development of the seL4 ecosystem and real-world deployments. The Foundation membership includes companies and government organisations from around the world.
The economic impact of the work of Prof Heiser's team goes beyond direct deployment of the technology. Students and staff involved in the project at various stages have created several related companies, including Apkudo (award-winning phone supply-chain management), Breakaway Consulting, Exablaze (sold to Cisco in 2020) and Metamako (sold to Arista Networks in 2018).
References to the research
- G. Klein, K. Elphinstone, G. Heiser, J. Andronick, D. Cock, P. Derrin, D. Elkaduwe, K. Engelhardt, R. Kolanski, M. Norrish, et al: "seL4: Formal verification of an OS kernel." Proc. 22nd ACM Symposium on Operating Systems Principles, Big Sky, MT, USA, October, 2009. Best Paper. Winner of the 2019 ACM SIGOPS Hall of Fame Award.
- G. Klein, J. Andronick, K. Elphinstone, T. Murray, T. Sewell, R. Kolanski, G. Heiser: "Comprehensive Formal Verification of an OS Microkernel." ACM Trans. Comp. Syst., 32(1), 2:1-2:70, (Feb 2014).
Summary of the impact
Pointer analysis (or alias analysis) is a form of program analysis, the foundation for almost all methods for finding software defects and security vulnerabilities. However, precise pointer analysis algorithms cannot scale beyond programs with merely thousands of lines of code. Over the last ten years, Professor Xue of UNSW developed SVF, short for Static Value-Flow, an analysis tool capable of computing the points-to or alias information and value-flow in millions of lines of C/C++ code. SVF is now used widely in academia and industry for compiler optimisation, bug detection, and security analysis. It has generated SVF-based research projects in the USA, UK, EU, China and other countries.
Underpinning research and development
Professor Xue's group started developing the foundation of SVF and its associated algorithms and implementation techniques about 10 years ago. In 2012, Professor Xue and two PhD students laid the foundation of sparse data-flow analysis through memory leak detection. This turned out to be a very sensitive method of detection, with a low initial false-positive rate of 18.5%. By iteratively propagating points-to information along previously computed sparse def-use chains, SVF could obtain increasingly precise results. In 2016, Xue and a PhD student introduced the first flow- and context-sensitive demand-driven value-flow refinement algorithm, improving both the scalability and precision of their earlier analysis. SVF was officially released on GitHub that year.
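To see the kind of defect this targets, consider a small C fragment (my example, not one from the SVF papers). The pointer returned by malloc is a "source" and the matching free is the "sink"; a value-flow analysis follows the pointer's def-use chain and finds a path on which the sink is never reached:

```c
#include <stdio.h>
#include <stdlib.h>

/* The malloc'd pointer flows through 'buf' to free() only when
 * reading succeeds; on the early-return error path the allocation
 * is never freed. A sparse value-flow analysis follows the def-use
 * chain from the malloc (source) and reports the path on which the
 * free (sink) is unreachable. */
int parse_record(FILE *f)
{
    char *buf = malloc(256);
    if (!buf)
        return -1;

    if (fgets(buf, 256, f) == NULL)
        return -1;          /* BUG: leaks buf on this path */

    /* ... use buf ... */
    free(buf);
    return 0;
}
```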
Subsequently, Xue's group has demonstrated how to leverage SVF to provide control-flow integrity, protecting against control-flow hijacking attacks; how to detect vulnerability to a class of exploit known as use-after-free; and how to mitigate code-reuse attacks based on return-oriented programming - all for large-scale C/C++ code. Xue's group has also developed advanced program analysis techniques for million-line Java and Android applications with improved scalability and precision under a value-flow reachability framework founded on Context-Free-Language (CFL) reachability. SVF is an open-source framework hosted on GitHub.
References to the research
- Yulei Sui, Ding Ye and Jingling Xue: "Static Memory Leak Detection Using Full-Sparse Value-Flow Analysis." International Symposium on Software Testing and Analysis (ISSTA'12), pp 254-264, 2012.
- Yulei Sui and Jingling Xue: "On-Demand Strong Update Analysis via Value-Flow Refinement." ACM SIGSOFT International Symposium on the Foundations of Software Engineering (FSE'16), 2016.
- Yulei Sui and Jingling Xue: "SVF: Interprocedural Static Value-Flow Analysis in LLVM." 2016 International Conference on Compiler Construction (CC'16), pp 265-266, 2016.
- Xiaokang Fan, Yulei Sui, Xiangke Liao, and Jingling Xue: "Boosting the Precision of Virtual Call Integrity Protection with Partial Pointer Analysis for C++." 26th ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA'17), pp 329-340, Santa Barbara, 2017.
- Hua Yan, Yulei Sui, Shiping Chen and Jingling Xue: "Spatio-Temporal Context Reduction: A Pointer-Analysis-Based Static Approach for Detecting Use-After-Free Vulnerabilities." 40th International Conference on Software Engineering (ICSE'18), pp 327-337, Gothenburg, 2018.
- Changwei Zou and Jingling Xue: "Burn After Reading: A Shadow Stack with Microsecond-level Runtime Rerandomization for Protecting Return Addresses." 42nd International Conference on Software Engineering (ICSE'20), pp 258-270, 2020.
Summary of the impact
Ripple-Down Rules (RDR) allow end-users to build their own knowledge bases themselves, without needing a knowledge engineer or needing to become one. Rules are added to a knowledge base while it is already in use, whenever a case is noted for which the system has not made an appropriate decision. Because of its ease of use, RDR has had significant industry uptake, with at least nine companies, including IBM, providing RDR technology or RDR-based services across a range of industry applications [1].
One of these, Pacific Knowledge Systems (PKS), provides RDR for chemical pathology laboratories to add diagnostic and management advice to laboratory reports. There are over 800 PKS RDR knowledge bases in use worldwide, some with over 10,000 rules. These have been developed largely by chemical pathologists themselves, after a day or two of training, and the data shows that it takes only a minute or two on average to add a rule [2,3]. These 800-plus knowledge bases are all separately developed, rather than the same knowledge base deployed in many places, and so represent a very significant uptake of AI technology in medicine. PKS was floated on the Australian Stock Exchange in early 2019.
Underpinning research and development
It became obvious while maintaining an early medical expert system, GARVAN-ES1, that the 'knowledge engineering bottleneck' does not arise because experts have difficulty in reporting on their mental processes; rather, what they do is identify features in the data which justify their conclusion as opposed to other possible conclusions in the context [4]. There are various versions of RDR, but the essential features of RDR as described in [1] are:
- Rules are added to deal with specific cases, either to replace an incorrect conclusion given for a case or to add a conclusion. Rules are only added, never removed or edited.
- 'Cornerstone cases' are cases for which a rule has been added. If a new rule gives a different conclusion for a stored cornerstone case, the cornerstone is shown to the user. The principle of rationality requires that the user must be able to identify some feature that distinguishes the case from the cornerstone case or accept that they should have the same conclusion.
- RDR are linked production rules. Every rule not only specifies a rule action, such as asserting a fact, but also specifies which rule will be evaluated next, depending on whether it evaluates true or false. This completely determines the rule evaluation sequence so there is no need for an inference engine conflict resolution strategy as required with conventional production rules.
- A rule reached by an if-true link can either replace the conclusion given or add an extra conclusion, as specified by the user (or by the type of RDR used).
There are also extensions to RDR that allow for repeat inference for tasks such as configuration. A key idea is that RDR rules should only assert facts, not retract them. If an incorrect conclusion is asserted, it should not be retracted in some later inference cycle; rather, a rule should be added to prevent the incorrect conclusion being asserted in the first place [1]. The first RDR system in routine use was the PEIRS system in Chemical Pathology at St Vincent's Hospital. It was a large system, with 2,000 rules, but was based on single-classification RDR [5]. The most widely used form of RDR is now multiple-classification RDR (MCRDR), able to provide multiple independent conclusions for a set of data, which was developed by Byeong Kang as part of his PhD. Data on PKS's experience with its version of MCRDR shows that end-users, gradually over time, can build systems with many thousands of rules, taking only a minute or two per rule [2, 3].
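The linked-rule structure described above is compact enough to sketch directly. The following C sketch is illustrative only (not PKS's or any published implementation): each rule carries a condition, a conclusion and explicit if-true/if-false links, so evaluation needs no inference engine, just a walk of the links, and the last rule that fired supplies the conclusion:

```c
#include <stdio.h>

/* Illustrative single-classification RDR node, not from any
 * published implementation. Each rule names the rule evaluated
 * next on true (its exception branch) and on false (the next
 * alternative); the traversal itself replaces the inference engine. */
typedef struct Rule {
    int (*cond)(const double *features);  /* does the rule fire? */
    const char *conclusion;               /* asserted if it fires */
    const struct Rule *if_true;           /* exception (refinement) branch */
    const struct Rule *if_false;          /* next alternative rule */
} Rule;

/* Walk the rule links; the last rule that fired supplies the answer. */
const char *evaluate(const Rule *r, const double *features,
                     const char *best_so_far)
{
    while (r) {
        if (r->cond(features)) {
            best_so_far = r->conclusion;  /* may be refined by if_true */
            r = r->if_true;
        } else {
            r = r->if_false;
        }
    }
    return best_so_far;
}

/* Toy pathology-style conditions for the example knowledge base. */
static int high_tsh(const double *f) { return f[0] > 4.0; }
static int low_t4(const double *f)   { return f[1] < 9.0; }

/* r2 is an exception rule added later to refine r1's conclusion. */
static const Rule r2 = { low_t4,   "primary hypothyroidism",  0,   0 };
static const Rule r1 = { high_tsh, "possible hypothyroidism", &r2, 0 };

int main(void)
{
    double features[2] = { 6.2, 7.5 };   /* TSH, free T4 */
    printf("%s\n", evaluate(&r1, features, "no comment"));
    return 0;
}
```

Adding a rule never edits an existing one: a correction becomes a new node hung off an if-true or if-false link, which is what keeps each addition to a minute or two.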
A range of RDR algorithms have been developed for different types of applications, but the notion of linked production rules outlined above provides a generalisation of these various algorithms. Other RDR research includes:
- An RDR representation has been used in a number of machine learning algorithms. RIDOR, from the WEKA toolkit, is very widely used and is based on Brian Gaines' Induct-RDR.
- There has been a range of research on prudence and credentials to produce RDR systems which are able to recognise when a case is outside their range of experience.
- There has also been research on RDR systems that can generalise from what they have learned.
- Of particular significance, there has been considerable research on using an RDR system as a 'wrapper' around another system, e.g., a system built by machine learning. That is, rules are added to correct errors from an underlying system, or to customise the underlying system to a particular domain. This can result in a huge improvement [6].
The RDR book provides outlines of the known industry and research-demonstrated applications [1]. It argues that RDR can be more suitable than machine learning for many problems, as machine learning requires sufficient cases to be reliably labelled with all the classifications that it is important to learn. Such data is rarer than might be expected and can be costly to prepare. Given the low cost of adding RDR rules, an RDR system can be built as part of the labelling process. This not only produces a working system, so that learning is no longer required, but also avoids issues with rare classes and unbalanced data.
References to the research
- Compton, P. and Kang B.H (2021) Ripple-Down Rules: The Alternative to Machine Learning, CRC Press (Taylor and Francis).
- Compton, P. (2013). 'Situated cognition and knowledge acquisition research.' International Journal of Human-Computer Studies 71: 184-190. (This was one of 11 invited papers for a special issue of the journal to commemorate 25 years of knowledge acquisition research).
- Compton, P., L. Peters, T. Lavers and Y. Kim (2011). 'Experience with long-term knowledge acquisition.' Proceedings of the Sixth International Conference on Knowledge Capture, KCAP 2011, Banff, ACM. 49-56.
- Compton, P. and R. Jansen (1990). 'A Philosophical Basis for Knowledge Acquisition.' Knowledge Acquisition 2: 241-257.
- Edwards, G., P. Compton, R. Malor, A. Srinivasan and L. Lazarus (1993). 'PEIRS: a pathologist maintained expert system for the interpretation of chemical pathology reports.' Pathology 25: 27-34.
- Ho, V. H., P. Compton, B. Benatallah, J. Vayssière, L. Menzel and H. Vogler (2009). 'An Incremental Knowledge Acquisition Method for Improving Duplicate Invoice Detection.' Proceedings of the 25th IEEE International Conference on Data Engineering, ICDE 2009, Shanghai, IEEE. 1415-1418.
Summary of the impact
A series of funded projects over 2012-2020 combining social science and machine learning methods - a subset of Artificial Intelligence - underpins our work in genocide forecasting. These projects were co-led initially by Professor Benjamin Goldsmith (currently at ANU) and Prof Arcot Sowmya, UNSW, and later with the addition of A/Prof Charles Butcher, NTNU Norway. The overall purpose of this work was to improve capacity for forecasting mass atrocities and genocide globally, and in the Asia-Pacific region specifically.
Central to the success of the projects and their specific outcomes are the considerable intellectual advances made in understanding the quantitative modelling of rare events such as political instability, genocide and politicide. These advances form the foundation of the work the projects have produced, and can also serve as a basis for future research in this area. The process of consultation with various stakeholders throughout the projects has been especially useful and informative, and improved the relevance and substance of the final result.
Underpinning research and development
On average, over a million casualties per year were reported between 1946 and 2016 due to Targeted Mass Killing (TMK). Advances in social science and Artificial Intelligence allow for reasonable forecasting of genocide or politicide events, yet this area has attracted fewer quantitative studies than interstate war and terrorism. This gap has been addressed by an international team of researchers led by Professor Benjamin Goldsmith, currently at ANU, and Prof Arcot Sowmya, UNSW. Forecasting these rare but catastrophic events of mass violence is important for several reasons: forecasts can assist in prevention and intervention efforts, and enable more efficient management of scarce resources, which may reduce the severity of genocide.
The project contributed improved forecasting methods that allowed researchers to verify past forecasts for 2011-2015 and to make predictions of mass atrocity events for 2016-2020. These forecasts have been made public on the project website hosted at the Australian National University.
Outcomes include seven journal articles; one fully refereed international conference paper; two policy reports completed and distributed to stakeholders; one website hosted at the Australian National University and now live; fifteen presentations to policy-relevant audiences in Australia, the United States, Europe, and Asia. Each of these sets of outcomes meets or exceeds the targets set in the grant proposal in quality and quantity.
An important contribution is the recently released dataset of targeted mass killings, together with a detailed codebook and a publication in the influential Journal of Conflict Resolution [1].
Presentations aimed at demonstrating the potential utility for policy applications of the projects were made at numerous organisations and venues, including the following:
- Dartmouth College, Hanover, NH, USA, 21 March 2014.
- Department of Foreign Affairs and Trade (4 June 2012, Canberra; attended by staff of DFAT, AusAID, Defence, DECO (the export control office), DIO, AGs and the High Commission of Canada).
- Office of National Assessments (3 May 2012, Canberra).
- Invited Participant, Identifying the Contributing Factors and Drivers of Future Mass Atrocities, International Institutions and Global Governance Program Meeting Series on Making US Multilateralism Work, Council on Foreign Relations, Washington DC, USA, April 16, 2012.
- Invited Participant, Forecasting Political Instability, Genocide, and Mass Atrocities, Center for Preventive Action Roundtable Lunch, Council on Foreign Relations, Washington DC, USA, April 16, 2012.
- Harvard University, John F. Kennedy School of Government, Carr Center for Human Rights (5 April 2012, Cambridge, MA, USA).
Most of the presentations were made by Prof Goldsmith, as they were to policy-making or advisory bodies, while Prof Sowmya attended some of the meetings and participated in discussions, specifically at DFAT, Canberra, in June 2012 and a series of meetings in Washington DC in April 2012.
In addition, there were numerous consultations about our work and findings where interest was expressed, as well as media coverage and blogs, including from Harvard University, ABC Radio, the United States Holocaust Memorial Museum and the National Security College. The TMK dataset, currently hosted at the Australian National University site, will be updated to cover 2018-2020 genocide and politicide events. This will be followed by an evaluation of forecasts made previously for 2017-2020. Research continues at UNSW Sydney to further improve the forecasts and the forecasting methodology.
References to the research
- Butcher, C., Goldsmith, B. E., Nanlohy, S., Sowmya, A., & Muchlinski, D. (2020). Introducing the Targeted Mass Killing Data Set for the Study and Forecasting of Mass Atrocities. Journal of Conflict Resolution, doi:10.1177/0022002719896405.
- Goldsmith, B., Semenovich, D., Sowmya, A., & Grgic, G. (2017). Political Competition and the Initiation of International Conflict: A New Perspective on the Institutional Foundations of Democratic Peace. World Politics, 1-39.
- Benjamin E. Goldsmith, C. R. Butcher, D. Semenovich and A. Sowmya, Forecasting the Onset of Genocide and Politicide: Annual Out-of-Sample Forecasts on a Global Dataset, 1988-2003, Journal of Peace Research, 50(4), 437-452, July 2013.
- Butcher, C., & Goldsmith, B. E. (2017). Elections, ethnicity, and political instability. Comparative Political Studies, 50(10), 1390-1419.
- Goldsmith, B. E., & Butcher, C. (2018). Genocide forecasting: Past accuracy and new forecasts to 2020. Journal of Genocide Research, 20(1), 90-107.
- Muchlinski, D., Siroky, D., He, J., & Kocher, M. (2016). Comparing random forest with logistic regression for predicting class-imbalanced civil war onset data. Political Analysis, 24(1), 87-103.
- Sascha Nanlohy, Charles Butcher & Benjamin E Goldsmith (2017) The Policy Value of Quantitative Atrocity Forecasting Models, The RUSI Journal, 162:2, 24-32.