About
Dr. Philip Weber, FHEA, MSc (Distinction), BSc (Hons). ORCID: 0000-0002-3121-9625
I am a lecturer in computer science in Aston University’s School of Computer Science and Digital Technologies, and deputy director of the Forensic Data Science Laboratory (FDSL). FDSL is part of the Aston Institute for Forensic Linguistics (AIFL). I am also a member of the new Aston Centre for AI Research and Applications (ACAIRA).
With a background in systems analysis, design and administration in industry, I now focus on machine learning and AI, specialising in forensic voice comparison and forensic data science. I am also interested in automatic speech recognition (ASR), automatic speaker recognition, and business process mining.
Until 2019 I worked for the excellent Think Beyond Data programme, also at Aston, which offered free consultancy in data analytics and machine learning to SMEs in the Greater Birmingham, Black Country and Marches areas of the UK West Midlands. Please get in touch if you could benefit from this kind of support.
I previously worked at the University of Birmingham on the EPSRC Automated Conflict Resolution in Clinical Pathways (MitCon) and the Speech Recognition by Synthesis (SRbS) projects. The outcomes of these can be found on my publications page.
My Ph.D. research studied Business Process Mining from a machine learning perspective and culminated in my thesis “A framework for the analysis and comparison of Process Mining algorithms” (2014). Please see my publications page for more information. I studied at the School of Computer Science, University of Birmingham under the supervision of Dr. Behzad Bordbar and Dr. Peter Tiňo.
Modelling Medical Care Flows
In the MitCon project we applied formal modelling (extensions to BPMN and Coloured Petri Nets) and verification techniques (using tools such as Z3 and Alloy) drawn from automated software engineering and business process modelling, to improve how medical care flows and guidelines are applied in the treatment of patients with multi-morbidity.
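As a rough illustration of the kind of check such verification enables (a sketch only, not the MitCon models themselves), the following uses the Z3 Python bindings to test whether two hypothetical guidelines can be followed together for the same patient; the drug names and the interaction constraint are invented for the example.

# Illustrative sketch: checking two clinical guidelines for conflict with Z3.
# The guidelines, drugs and interaction below are hypothetical.
from z3 import Bools, Solver, Implies, And, Not, sat

guideline_a, guideline_b, drug_x, drug_y = Bools("guideline_a guideline_b drug_x drug_y")

s = Solver()
s.add(Implies(guideline_a, drug_x))   # guideline A prescribes drug X
s.add(Implies(guideline_b, drug_y))   # guideline B prescribes drug Y
s.add(Not(And(drug_x, drug_y)))       # known interaction: X and Y must not be co-prescribed
s.add(guideline_a, guideline_b)       # the patient is on both care pathways

# An unsatisfiable result means the two guidelines cannot both be followed
# as written, i.e. a conflict that would need resolution.
print("compatible" if s.check() == sat else "conflict detected")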
Speech Recognition
Our aim in the SRbS project was to develop new ‘parsimonious’ models of speech, useful for speech recognition and inspired by human speech production and perception. At the root of this lies a seeming disconnect: recent progress in speech recognition has been achieved largely through statistical models with hundreds of thousands of parameters and vast speech corpora, whereas speech is produced by the human vocal tract, which is (relatively) low-dimensional. We developed a Continuous-State Hidden Markov Model recogniser and methods to automatically derive low-dimensional representations of speech.
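To give a flavour of what a low-dimensional representation of speech means here (a toy sketch on synthetic data, not the SRbS method itself), the example below projects high-dimensional acoustic frames onto a few principal directions, yielding the kind of compact trajectory a continuous-state model could track over time.

# Illustrative sketch only: PCA of synthetic "acoustic" frames to recover
# a low-dimensional trajectory. Dimensions and noise level are arbitrary.
import numpy as np

rng = np.random.default_rng(0)

# Pretend acoustic features: 500 frames of 39-dimensional vectors that in
# fact vary along only 3 underlying directions, mimicking the (relatively)
# low-dimensional nature of vocal-tract movement.
latent = rng.normal(size=(500, 3))
mixing = rng.normal(size=(3, 39))
frames = latent @ mixing + 0.05 * rng.normal(size=(500, 39))

# PCA via SVD of the mean-centred frames.
centred = frames - frames.mean(axis=0)
_, singular_values, vt = np.linalg.svd(centred, full_matrices=False)

explained = (singular_values ** 2) / np.sum(singular_values ** 2)
print("variance explained by first 3 components:", explained[:3].sum())

# Project each frame onto the first 3 principal directions.
low_dim_trajectory = centred @ vt[:3].T
print("low-dimensional trajectory shape:", low_dim_trajectory.shape)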