
Trinity College Dublin

Personal Information
College Photo Name Campbell, Nick
Main Department C.L.C.S.
College Title Professor
College Tel +353 1 896 4370
Fax +353 1 896 2941
Notes lab phone: +353 1254 2749
Nick Campbell (nick@tcd.ie) is SFI Stokes Professor of Speech & Communication Technology at Trinity College Dublin (The University of Dublin) in Ireland. He received his Ph.D. degree in Experimental Psychology from the University of Sussex in the U.K. He was previously engaged at the Japanese National Institute of Information and Communications Technology (NICT), and as Chief Researcher in the Department of Acoustics and Speech Research, Advanced Telecommunications Research Institute International (ATR), Kyoto, Japan, where he also served as Research Director for the JST/CREST Expressive Speech Processing and the SCOPE "Robot's Ears" projects. He was first invited as a Research Fellow at the IBM U.K. Scientific Centre, where he developed algorithms for speech synthesis, and later at AT&T Bell Laboratories, where he worked on the synthesis of Japanese. He served as Senior Linguist at the Edinburgh University Centre for Speech Technology Research before joining ATR in 1990. His research interests are based on large speech databases and include nonverbal speech processing, concatenative speech synthesis, and prosodic information modelling. He spends his spare time working with postgraduate students as Visiting Professor at the School of Information Science, Nara Institute of Science and Technology (NAIST), Nara, Japan, and was also Visiting Professor at Kobe University, Kobe, Japan, for 10 years.
Details Date
Board member: International Speech Communication Association 2009-present
Steering Committee member - Digital Humanities Research (TCD) August 2012
Member, Spoken Language Technical Committee, IEEE Signal Processing Society Oct 2011 - present
Board Member - European Language Resources Association (ELRA) Nov 2010
Vice President - European Language Resources Association Feb 2011 -
Membership of Professional Institutions, Associations, Societies
Details Date From Date To
International Phonetic Association; Coordinating Committee on Speech I/O Database Assessment; International Committee of the Acoustical Society of Japan; International Speech Communication Association; Institute of Acoustics (adherent), U.K.; Acoustical Society of America; Acoustical Society of Japan; IEEE Signal Processing Society
Awards and Honours
Award Date
Fellow of Trinity College Dublin 2010
Track Leader (SFI CSET-CNGL) ILT August 2011
Kaken (Japanese Govt funding) 3yrs August 2012
Theme Leader CNGL-ii 2013
Language Reading Skill Writing Skill Speaking Skill
Arabic Basic Basic Basic
English Fluent Fluent Fluent
French Medium Medium Basic
Irish Basic Basic Basic
Japanese Fluent Medium Fluent
Description of Research Interests
My background is in experimental psychology and linguistics, but most of my experience is in speech technology. I prefer corpus-based approaches and have pioneered advanced (and paradigm-shifting) methods of speech synthesis and natural conversational speech collection in a multimodal environment. My principal interest is in speech prosody, and I extend this research to social interaction to show how the voice is used in discourse to express personal relations as well as propositional content. Most of my previous work has used speech materials collected in Japan, and I am happy now to be in Ireland, where I can confirm the universality of my previous findings, both for Irish and for Hiberno-English. Ultimately, I am working to produce a friendlier speech-based human-machine interface for web-based information, customer services, games, and robotics, while trying to understand how humans achieve such often near-perfect communication.
Research Interests
Cognition Communication Sciences Communication engineering, technology Computational linguistics
Computer Science/Engineering Databases, database management, data mining Discourse & Dialogue Human computer interactions
Information Technology Intelligent agents Language and technology Multimedia
Numerical analysis Remote sensing Speech Prosody Signal processing
Speech Processing Speech processing/technology Speech synthesis Virtual Reality
Research Projects
Project title FastNet
Summary Focus on Actions in Social Talk; Network-Enabling Technology (€1,232,002.90 over 5 years)
Funding Agency SFI
Programme PI
Type of Project speech technology & corpus development
Date from April 1st 2010
Date to March 31st 2015
Person Months 60

Project title CNGL-II
Summary Theme Leader - Delivery & Interaction; approx. €10 million over 30 months
Funding Agency SFI
Programme CSET
Type of Project Centre for Next Generation Localisation (part 2)
Date from Jan 2012
Date to Sept 2015
Person Months 30

Project title METALOGUE
Summary Multiperspective Multimodal Dialogue: a dialogue system with metacognitive abilities. The goal of METALOGUE is to produce a multimodal dialogue system that implements interactive behaviour that seems natural to users and is flexible enough to exploit the full potential of multimodal interaction. This will be achieved by understanding, controlling and manipulating the system's own and the users' cognitive processes. The new dialogue manager will incorporate a cognitive model based on metacognitive skills that will enable the planning and deployment of appropriate dialogue strategies. The system will be able to monitor both its own and the users' interactive performance, reason about the dialogue progress, infer the users' knowledge and intentions, and thereby adapt and regulate its dialogue behaviour over time. The metacognitive capabilities of the METALOGUE system will be based on a state-of-the-art approach incorporating multitasking and transfer of knowledge among skills. The models will be implemented in ACT-R, providing a general framework of metacognitive skills.
Funding Agency EU
Programme FP7-ICT
Type of Project STREP
Date from Nov 2013
Date to Oct 2016
Person Months 2

Project title JOKER
Summary JOKe and Empathy of a Robot/ECA: Towards social and affective relations with a robot. This project will build and develop JOKER, a generic intelligent user interface providing a multimodal dialogue system with social communication skills including humour, empathy, compassion, charm, and other informal socially-oriented behaviour. Talk during social interactions naturally involves the exchange of propositional content but also, and perhaps more importantly, the expression of interpersonal relationships, as well as displays of emotion, affect, interest, etc. This project will facilitate advanced dialogues employing complex social behaviours in order to provide a companion machine (robot or ECA) with the skills to create and maintain a long-term social relationship through verbal and non-verbal language interaction. Such social interaction requires that the robot has the ability to represent and understand complex human social behaviour, and it is not straightforward to design a robot with such abilities. Social interactions require social intelligence and 'understanding' (for planning ahead and dealing with new circumstances) and employ theory of mind for inferring the cognitive states of another person. JOKER will emphasise the fusion of verbal and non-verbal channels for emotional and social behaviour perception, interaction and generation capabilities. Our paradigm invokes two types of decision: intuitive (mainly based upon non-verbal multimodal cues) and cognitive (based upon fusion of semantic and contextual information with non-verbal multimodal cues). The intuitive type will be used dynamically in the interaction at the non-verbal level (empathic behaviour: synchrony of mimics such as smiles and nods) but also at verbal levels for reflex small-talk (politeness behaviour: verbal synchrony with hello, how are you, thanks, etc.).
Cognitive decisions will be used for reasoning about the strategy of the dialogue and deciding on more complex social behaviours (humour, compassion, white lies, etc.), taking into account the user profile and contextual information.
Funding Agency EU-FP7
Programme CHISTERA
Type of Project Collaborative Research
Date from Jan 2014
Date to Dec 2016
Person Months 3

Publications and Other Research Outputs
Peer Reviewed
Matej Rojc & Nick Campbell (eds), Speech, Gaze, and Affect: concepts of reactive and natural human-machine interaction techniques employing ECAs with personality, 1, Jersey, Science Publishers, 2013
Expressive Speech Processing and Prosody Engineering: An Illustrated Essay on the Fragmented Nature of Real Interactive Speech in, editor(s) Fang Chen, Kristiina Jokinen, Speech Technology: Theory and Applications, New York Dordrecht Heidelberg London, Springer, 2010, pp 105 - 120, [Nick Campbell]
Nick Campbell, An Audio-Visual Approach to Measuring Discourse Synchrony in Multimodal Conversation Data, Interspeech 2009, Brighton, England, September 2009, edited by ISCA , 2009
Notes: [Special Session: Active Listening & Synchrony (Wed-Ses2-S1)]
Nick Campbell, The expanding role of prosody in speech communication technology, DIAHOLMIA, KTH, Sweden, June 2009, 2009
Notes: [ Keynote Talk]
Nick Campbell, Tracking the second channel of information in speech, IEEE International Workshop on Social Signal Processing, Amsterdam, September 2009
Notes: [Keynote Talk]

Last Updated: 23-SEP-2014