
Professor Nick Campbell

Fellow Emeritus (Computer Science)
8 WESTLAND SQUARE


Nick Campbell (nick@tcd.ie) is a Professor and Fellow Emeritus, and Director of the Speech Communication Lab at Trinity College Dublin (The University of Dublin), Ireland. He received his Ph.D. in Experimental Psychology from the University of Sussex in the U.K. He was invited to TCD in 2008 as Professor of Phonetics and Speech Communication, and was an SFI Stokes Professor until 2015, when he suffered mandatory age-related retirement and was appointed to the School of Computer Science and Statistics as Adjunct Professor.

Until 2008 he was a Senior Scientist at the Japanese National Institute of Information and Communications Technology (as nick@nict.go.jp) and Chief Researcher in the Department of Acoustics and Speech Research, Advanced Telecommunications Research Institute International (as nick@atr.jp), Kyoto, Japan, where he also served as Research Director for the JST/CREST Expressive Speech Processing and SCOPE "Robot's Ears" projects. He was first invited as a Research Fellow at the IBM U.K. Scientific Centre, where he developed algorithms for speech synthesis, and later at AT&T Bell Laboratories, where he worked on the synthesis of Japanese. He served as Senior Linguist at the Edinburgh University Centre for Speech Technology Research before joining ATR in 1990. His research interests are based on large speech databases, and include nonverbal speech processing, concatenative (and interactive) speech synthesis, and prosodic information modeling. He was Visiting Professor at the School of Information Science, Nara Institute of Science and Technology (NAIST), Nara, Japan, and Visiting Professor at Kobe University, Kobe, Japan, for 10 years.

Ranked #3 in TCD and #12 in Ireland by http://guide2research.com/scientists/uni-187 (Google Scholar h-index 45 (21), i10 168 (67), October '22).

Out of College since Covid!
Currently working with Kyoto University on deep learning for expressive speech synthesis. Results are indistinguishable from natural speech in formal trials, but not yet written up. Do I have the funding to continue publications? Where is the 'emeritus' rule book?
Research interests: Cognition; Communication engineering, technology; Communication Sciences; Computational linguistics; Computer Science/Engineering; Databases, database management, data mining; Discourse & Dialogue; Human computer interactions; Information Technology; Intelligent agents; Language and technology; Multimedia; Numerical analysis; Remote sensing; Signal processing; Speech Processing; Speech processing/technology; Speech Prosody; Speech synthesis; Virtual Reality
Project Title: FastNet
From: 1 April 2010
To: 31 March 2015
Summary: Focus on Actions in Social Talk; Network-Enabling Technology (Lead PI, 5 years)
Funding Agency: SFI
Programme: PI
Project Type: Speech technology & corpus development
Person Months: 60
Project Title: CNGL-II
From: January 2012
To: September 2015
Summary: Theme Leader, Delivery & Interaction; approx. €10 million over 30 months
Funding Agency: SFI
Programme: CSET
Project Type: Centre for Next Generation Localisation (part 2)
Person Months: 30
Project Title: METALOGUE
From: November 2013
To: October 2016
Summary: Multiperspective Multimodal Dialogue: a dialogue system with metacognitive abilities. The goal of METALOGUE is to produce a multimodal dialogue system able to implement interactive behaviour that seems natural to users and is flexible enough to exploit the full potential of multimodal interaction. This will be achieved by understanding, controlling and manipulating the system's own and the users' cognitive processes. The new dialogue manager will incorporate a cognitive model based on metacognitive skills that will enable the planning and deployment of appropriate dialogue strategies. The system will be able to monitor both its own and the users' interactive performance, reason about the dialogue's progress, infer the users' knowledge and intentions, and thereby adapt and regulate its dialogue behaviour over time. The metacognitive capabilities of the METALOGUE system will be based on a state-of-the-art approach incorporating multitasking and the transfer of knowledge among skills. The models will be implemented in ACT-R, providing a general framework of metacognitive skills. http://cordis.europa.eu/projects/rcn/110655_en.html
Funding Agency: EU
Programme: FP7-ICT
Project Type: STREP
Person Months: 2
Project Title: JOKER
From: January 2014
To: December 2016
Summary: JOKe and Empathy of a Robot/ECA: towards social and affective relations with a robot. This project will build and develop JOKER, a generic intelligent user interface providing a multimodal dialogue system with social communication skills including humor, empathy, compassion, charm, and other informal socially oriented behavior. Talk during social interactions naturally involves the exchange of propositional content, but also, and perhaps more importantly, the expression of interpersonal relationships, as well as displays of emotion, affect, interest, etc. This project will facilitate advanced dialogues employing complex social behaviors in order to provide a companion machine (robot or ECA) with the skills to create and maintain a long-term social relationship through verbal and non-verbal language interaction. Such social interaction requires that the robot be able to represent and understand some complex human social behavior, and it is not straightforward to design a robot with such abilities. Social interactions require social intelligence and 'understanding' (for planning ahead and dealing with new circumstances), and employ theory of mind for inferring the cognitive states of another person. JOKER will emphasize the fusion of verbal and non-verbal channels for emotional and social behavior perception, interaction and generation capabilities. Our paradigm invokes two types of decision: intuitive (mainly based upon non-verbal multimodal cues) and cognitive (based upon the fusion of semantic and contextual information with non-verbal multimodal cues). The intuitive type will be used dynamically in the interaction at the non-verbal level (empathic behavior: synchrony of mimics such as smiles and nods), but also at the verbal level for reflex small-talk (politeness behavior: verbal synchrony with hello, how are you, thanks, etc.). Cognitive decisions will be used for reasoning about the strategy of the dialogue and deciding on more complex social behaviors (humor, compassion, white lies, etc.), taking into account the user profile and contextual information. http://www.chistera.eu/projects/joker
Funding Agency: EU-FP7
Programme: CHISTERA
Project Type: Collaborative Research
Person Months: 3
Project Title: ADAPT
From: January 2015
To: December 2020
Summary: Centre for Global Intelligent Digital Content
Funding Agency: SFI
Programme: CENTRES
Project Type: Research & Development

Details Date
Board member: Japan British Association of the Kansai 2005-present
Board member: International Speech Communication Association 2009-resigned in protest 2016
Board Member - European Language Resources Association (ELRA) Nov 2010 - Dec 2016
Vice President - European Language Resources Association Feb 2011 - Dec 2016
Steering Committee member - Digital Humanities Research (TCD) August 2012
Member - META-NET (EU) Vision Group 2010-2012
Member, Spoken Language Technical Committee, IEEE Signal Processing Society Oct 2011 - 2015
Member of Management Committee, European Science Foundation (ESF) funded Research Network COST 2102: Cross-Modal Analysis of Verbal and Non-verbal Communication 2008-2011
Humaine (EU) Executive Committee member 2007-2011
Member of the Editorial Board: Cognitive Processing 2009-2012
Member of the Editorial Board, Language Resources and Evaluation 2004-2009
Member of the Editorial Board, Phonetica 2003-2004
Associate Editor, Language & Speech 1999-2009
Associate Editor, Journal of the Phonetic Society of Japan 2000-2009
Associate Editor - IEEE Transactions on Speech & Audio 1999-2004
Guest Editor - IEEE Trans SP Special Issue on Speech Synthesis 2001-2002
ISCA Speech Synthesis SIG - Vice Chairman, ILO 1998-2009
External Expert, European Commission, 'Humaine' Network of Excellence. 2007-2008
Speech Synthesis Working Group Convenor COCOSDA 1995-2000
DISC Advisory Panel (member) 1997-1999
Secretary/Communications Officer COCOSDA 1991-present
Guest Editor, IEEE Trans Speech & Audio - Special Issue on Expressive Speech Synthesis 2004-2006
51st Annual Meeting of the Association for Computational Linguistics in 2013. (Programme Committee member) Dec 2012
PC member - 14th International Conference on Computational Linguistics and Intelligent Text Processing Dec 2012
SciCom member - 12th International Conference on Auditory-Visual Speech Processing (AVSP2013) Nov 2012
PC member - Beyond AI: Artificial Dreams BAI 2011, 2012 Jun 2012
Organising Committee member - Workshop on Multimodal Analyses enabling Artificial Agents in Human-Machine Interaction. Jan 2012
International Advisory Committee member - 15th Oriental COCOSDA Conference, Macao, 2012 Jan 2011
OrgCom CogInfoCom 2012 Dec 2012
PC member - BCS/CPHC Distinguished Dissertations competition 2012 Dec 2012
PC member (HLT-10) The Fourth Baltic Conference on Human Language Technologies April 2010
PC member - IWSDS 2012 - International Workshop on Spoken Dialog Systems 2012 Aug 2012
PC member - SocialCom 2012 - ASE/IEEE International Conference on Social Computing Sep 2012
PC member - PLM2012 - 43rd Poznań Linguistic Meeting Mar 2012
PC member - SP2012 - 6th International Conference on Speech Prosody May 2012
PC member - SSPW2009 - Social Signal Processing Workshop Jan 2009
PC member - TAL 2012 - The Third International Symposium on Tonal Aspects of Languages May 2012
PC member - Simpe 2010 - Fifth Workshop on Speech in Mobile and Pervasive Environments May 2010
Organiser - Laughter and other Non-Verbal Vocalisations in Speech - Long Room Hub - International Workshop Oct 2012
PC member - LREC2012 - The 8th International Conference on Language Resources and Evaluation May 2012
General Chair 7th Speech Prosody International Conference Dublin, Ireland May 2014
Co-organiser MA3HMI, Singapore Sep 2014
Co-Organiser - 3rd European Symposium on Multimodal Communication, MMSYM-2015, Dublin 17,18 Sept 2015
Co-Organiser - Engagement in Social Intelligent Virtual Agents - satellite w/s of Fifteenth International Conference on Intelligent Virtual Agents (IVA 2015), August 26-28, Delft Aug 2015
PC member BAI 2012 May 2015
PC member - CICLing 2013, 2014, 2015, 2016, 2017 Apr 2015
PC member - IWSDS 2012, 2014, 2015 Apr 2015
PC member - LTC 2009, 2011, 2013, 2015 Jun 2015
Co-Organiser/Local Host - MMSYM 2015 Jun 2015
PC member - SPECOM 2015, 2016, 2017 Jun 2015
PC member - CogInfoCom 2012, 2014 Apr 2014
Co-Organiser - ESIVA - satellite of Interspeech 2015 May 2015
PC member HSCR 2017
PC member SP8 2016 - Speech Prosody, Boston
PC member SLSP 2014, 2016
PC member AICS 2017
PC member ICMR 2017
PC member ICCT 2017
PC member MMC 2013, 2017
PC member SSW8
PC member Oriental COCOSDA 2014, 2015, 2016, 2017
PC member MMSYM 2016
PC member ACII 2013, 2015
PC member IVA 2008, 2017
PC member WASA 2015
PC member Poital 2014
Language    Reading    Writing    Speaking
Arabic      Basic      Basic      Basic
English     Fluent     Fluent     Fluent
French      Medium     Medium     Basic
Irish       Basic      Basic      Basic
Japanese    Fluent     Medium     Fluent
International Phonetic Association; Coordinating Committee on Speech I/O Database Assessment; International Committee of the Acoustic Society of Japan; International Speech Communication Association; Institute of Acoustics (adherent), U.K.; Acoustic Society of America; Acoustic Society of Japan; IEEE Signal Processing Society
Haider, Fasih and Koutsombogera, Maria and Conlan, Owen and Vogel, Carl and Campbell, Nick and Luz, Saturnino, An Active Data Representation of Videos for Automatic Scoring of Oral Presentation Delivery Skills and Feedback Generation, Frontiers in Computer Science, 2, 2020, p1 , Journal Article, PUBLISHED  TARA - Full Text  DOI
Haider, F., Akira, H., Vogel, C., Campbell, N., Luz, S., Analysing patterns of right brain-hemisphere activity prior to speech articulation for identification of system-directed speech, Speech Communication, 18, (25), 2019, p1-, Journal Article, PUBLISHED  TARA - Full Text  DOI
Watching People Talk; How Machines Can Know We Understand Them - a Study of Engagement in a Conversational Corpus in, editor(s)Prof. Dr. Laszlo Hunyadi , The temporal patterns of multimodal communication, Springer, 2018, [Nick Campbell], Book Chapter, SUBMITTED
Redefining Concatenative Speech Synthesis for use in Spontaneous Conversational Dialogues; a Study with the GBO Corpus in, editor(s)Maria Gosy , CAPSS, Budapest, Hungary, 2018, [Nick Campbell], Book Chapter, APPROVED
Utilisation of Linguistic and Paralinguistic Features for Academic Presentation Summarisation in, editor(s)HUCAPP Secretariat , HUCAPP 2018, Springer, 2018, [Keith Curtis, Gareth J. F. Jones and Nick Campbell ], Book Chapter, IN_PRESS
Fasih Haider, Hayakawa Akira, Saturnino Luz, Carl Vogel, Nick Campbell, On-Talk and Off-Talk Detection: A Discrete Wavelet Transform Analysis of Electroencephalogram, ICASSP, Montreal, Canada, Thursday, April 19, edited by IEEE , 2018, Notes: [BISP-P3.3], Conference Paper, PUBLISHED
Volha Petukhova, Andrei Malchanau, Youssef Oualil, Dietrich Klakow, Saturnino Luz, Fasih Haider, Nick Campbell, Dimitris Koryzis, Dimitris Spiliotopoulos, Pierre Albert, Nicklas Linz and Jan Alexandersson, The Metalogue Debate Trainee Corpus: Data Collection and Annotations, LREC, Miyazaki, Japan, 7-12 May 2018, European Language Resources Association (ELRA), 2018, Conference Paper, PUBLISHED
Emer Gilmartin, Christian Saam, Brendan Spillane, Maria O'Reilly, Ketong Su, Arturo Calvo, Loredana Cerrato, Killian Levacher, Nick Campbell and Vincent Wade, The ADELE Corpus of Dyadic Social Text Conversations: Dialog Act Annotation with ISO 24617-2, LREC, Miyazaki, Japan, 7-12 May 2018, European Language Resources Association (ELRA), 2018, Conference Paper, PUBLISHED  TARA - Full Text  URL
Emer Gilmartin, Carl Vogel and Nick Campbell, Chats and Chunks: Annotation and Analysis of Multiparty Long Casual Conversations, LREC, Miyazaki, Japan, 7-12 May 2018, European Language Resources Association (ELRA), 2018, Conference Paper, PUBLISHED
Keith Curtis, Nick Campbell and Gareth Jones, Development of an Annotated Multimodal Dataset for the Investigation of Classification and Summarisation of Presentations using High-Level Paralinguistic Features, LREC, Miyazaki, Japan, 7-12 May 2018, ELRA, 2018, Conference Paper, PUBLISHED
Nick Campbell, Watching people talk - spoken interaction between human and machines, Advanced Lecture Series in Pattern Recognition, Beijing, China, May 22, 2018, Chinese Academy of Science, Invited Talk, PUBLISHED
Nick Campbell, Corpora of Spoken Interaction - past, present, and future?, Institute of Phonetics Lecture Series, Beijing, China, May 24th, 2018, Chinese Academy of Social Sciences, Invited Talk, PUBLISHED
Nick Campbell, Paralinguistic & non-verbal information in spoken dialogue system processing, Keihanna Interlab Forum, Kyoto, Japan, June 27, 2018, International Institute for Advanced Studies, Invited Talk, IN_PRESS
Nick Campbell, Multimodal Agents for Aging and Multicultural Societies, Shonan Meeting on Multimodal Agents for Aging and Multicultural Societies, Tokyo, Japan, Oct 26 - Nov 1, 2018, NII, Notes: [invited keynote talk], Invited Talk, PUBLISHED
I Calixto, Q Liu, N Campbell , Multilingual Multi-modal Embeddings for Natural Language Processing , arXiv preprint arXiv:1702.0110, 2017, Conference Paper, PUBLISHED
Nick Campbell, Towards Interactive Speech Synthesis; an example of robot-human dialogues in a spontaneous environment, CAPSS, Budapest, 15 May 2017, Hungarian Academy of Science, Invited Talk, PUBLISHED
Nick Campbell, Corpora of Spoken Interaction; Watching People Talk, Oriental COCOSDA 2017, Seoul, Korea, Nov 2nd, 2017, Notes: [http://www.ococosda2017.org], Invited Talk, PUBLISHED
Nick Campbell, Synthesising colloquial speech with the GBO Corpus, Kaken Onseigengo, Kyoto, Japan, July 15, 2017, Invited Talk, PUBLISHED
Nick Campbell, Specom, Budapest, Hungary, 2016, Invited Talk, PUBLISHED
Nick Campbell, An introduction to the TCD D-ANS Corpus - a multimodal multimedia monolingual biometric corpus of spoken social interaction!, Multimodal Analyses enabling Artificial Agents in Human-Machine Interaction, Singapore, 14 Sept 2014, ISCA Satellite Workshop, Notes: [invited keynote], Invited Talk, PUBLISHED

Award Date
Fellow of Trinity College Dublin 2010
Track Leader (SFI CSET-CNGL) ILT August 2011
Kaken (Japanese Govt funding) 3yrs August 2012
Theme Leader CNGL-ii 2013
Founding Co-PI ADAPT Centre 2014
Theme Leader ADAPT Centre 2015
ISCA (Japan Collaboration) 2014
Google Scholar h-index: 41 (23), i10: 144 (68) May 2018
My background is in experimental psychology and linguistics, but most of my experience is in speech technology. I prefer corpus-based approaches and have pioneered advanced (and paradigm-shifting) methods of speech synthesis and natural conversational speech collection in a multimodal environment. My principal interest is in speech prosody, extending this research to social interaction to show how the voice is used in discourse to express personal relations as well as propositional content. Most of my previous work used speech materials collected in Japan, and I am happy now to be in Ireland, where I can confirm the universality of my previous findings, both for Irish and for Hiberno-English. Ultimately, I am working to produce a friendlier speech-based human-machine interface for web-based information, customer services, games, and robotics, while trying to understand how humans achieve such often near-perfect communication.