Technoscientific Research: Methodological and Ethical Aspects


Table of contents:
Preface
Contents
1. Introduction
2. Selected concepts of logic and philosophy
3. Science in historical perspective
4. Philosophy of science in historical perspective
5. Mathematical modelling
6. Measurement
7. Scientific explanation
8. Context of discovery
9. Context of justification
10. Uncertainty of scientific knowledge
11. Basic concepts of Western ethics
12. Western ethics in historical perspective
13. System of values associated with science
14. Principles of moral decision-making
15. General issues of research ethics
16. Ethical aspects of experimentation
17. Ethical aspects of information processing
18. Legal protection of intellectual property
19. Ethical issues implied by information technologies
20. Concluding remarks
Appendix: Milestones in the history of science
References
Index of Names
Index of Terms


Roman Z. Morawski Technoscientific Research

Also of interest
Human Forces in Engineering – A. Atrens
Philosophy of Mathematics – T. Bedürftig, R. Murawski
The Science of Innovation – K. Löhr
Entrepreneurship for Engineers – H. Kohlert, D. Fadai, H. Sachs

Roman Z. Morawski

Technoscientific Research Methodological and Ethical Aspects

Author Roman Z. Morawski, Ph.D., D.Sc., Professor of Measurement Science Warsaw University of Technology, Faculty of Electronics & Information Technology Institute of Radioelectronics & Multimedia Technology ul. Nowowiejska 15/19, 00-665 Warszawa, Poland [email protected]

ISBN 978-3-11-058390-8 e-ISBN (PDF) 978-3-11-058406-6 e-ISBN (EPUB) 978-3-11-058412-7 Library of Congress Control Number: 2019930905 Bibliographic information published by the Deutsche Nationalbibliothek The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at http://dnb.dnb.de. © 2019 Walter de Gruyter GmbH, Berlin/Boston Typesetting: Integra Software Services Pvt. Ltd. Printing and binding: CPI books GmbH, Leck Cover image: Gremlin / iStock / Getty Images www.degruyter.com

Preface

This book contains a synthesis of my methodological and ethical experience accumulated over half a century of academic work in the field of measurement science and its applications in electrical and computer engineering, in physics and chemistry, in biology and medicine, as well as in natural environment protection and food technology. This synthesis has been accomplished during the last decade due to the needs resulting from my responsibility for the courses on methodology and ethics of research, addressed to graduate students of Warsaw University of Technology, and due to a more systematic confrontation with the literature on the subject, mainly in English and Polish, but also in German, French, Spanish and Italian. I have cited that literature in this book not only to support my own observations and conclusions, but also to give credit to the authors who have actually influenced my understanding of the subject-matter. The content of Chapters 11–19 of this book is largely based on the material provided in its Polish “prototype” constrained to ethical aspects of research (Etyczne aspekty działalności badawczej w naukach empirycznych), which was published by the University of Warsaw in 2011.

The vast majority of authors of texts on the methodology and ethics of technoscientific research are philosophers of science and ethicists. So, proposing one more volume on this subject, without having any formal background in philosophy, has required of me some audacity and strong motivation. I have undertaken this task being convinced that both research methodology and research ethics are actually interdisciplines that combine philosophy with science and technology, and that therefore both philosophy and research practice are their natural sources of inspiration. As a rule, philosophers writing for philosophers are inclined to ignore practical issues of research, as being trivial, and to focus on controversies, as being intellectually more attractive. My teaching experience, related to the classes for graduate students of engineering and technology, has made me realise that there is a need for a less sophisticated and more practical approach to the subject-matter. This is the rationale behind the concept of a handbook “written by a research practitioner for research practitioners” which does not focus on the overgrown terminology of philosophy of science and research ethics – on the classification of the panoply of stances promoted by various schools of thinking – but offers guidance in the everyday activities aimed at the generation and evaluation of knowledge. This book is also distinctive in its integrative approach to methodological and ethical issues related to research practice, in its strong emphasis on the issues related to mathematical modelling and measurement, and in its attempted application of engineering design methodology to moral decision-making.

This book is neither a philosophical treatise nor a quick reference guide for Ph.D. students in desperate need of making their theses satisfy the minimum formal requirements in force in their domain of study. It is intended to be a bridge between research practice and philosophy of science – not only encouraging deeper reflection, but also providing some practical advice. My teaching and research experience has shown that young researchers are reluctant to enter into philosophical discourse because they consider it counterproductive. On the one hand, they are convinced that moral reasons are irreconcilable with business reasons, even with the pursuit of material success in general; on the other hand, they are discouraged by the uncertainty and ambiguity of philosophical conclusions in any non-trivial matter. For those who have experienced the intellectual comfort of solving mathematical problems or problems of engineering design, this is actually a very frustrating situation. In mathematics, problems which do not have solutions or have ambiguous solutions are called ill-posed problems. This concept, when generalised to non-mathematical problems, applies to the majority of real-world methodological and ethical dilemmas. When trying to resolve them, one can never be sure that all the arguments in favour of alternative solutions and against them have been identified, or which criteria for weighing the available arguments are most appropriate. There are, however, serious practical reasons motivating us to deal with this kind of intellectual inconvenience; in particular:
– though wrong decisions cannot be completely avoided, we may improve their overall quality, at least from the point of view of the consequences that they entail;
– by becoming aware of various aspects of our research activities, we can positively influence the formation of the methodological and ethical intuitions that guide our automatic behaviours.

The readers of this book are not expected to have any philosophical preparation. They are guided through the sources of relevant background information, very diversified in terms of language and content and in terms of the level of advancement; a mild preference is given to the textbooks in which experts have already synthesised information from primary sources. This book can be a teaching aid for students attending classes (both lectures and tutorials) which are intended to broaden general philosophical knowledge, to identify methodological and ethical issues related to conducting scientific research, and to outline the methodology for analysing dilemmas arising in this context. This book does not provide ready-for-use solutions to difficult problems or easy strategies for solving them. Therefore, the researchers reading it will probably not experience any short-term benefits, but they stand to gain in the longer term when they discover how the abstract methodological and ethical recommendations, formulated in the consecutive chapters of this book, apply to specific situations of research reality.


In conclusion, I would like to thank my colleagues from Warsaw University of Technology, who contributed to the final shape of this book: Dr. Andrzej Miękina, for formatting the manuscript, and Dr. Paweł Mazurek, for proofreading it. I feel also indebted to the students who attended my lectures on methodological and ethical aspects of research, and shared with me their opinions and suggestions. Warsaw – Jurata, 2016–2018

Disclaimer: The masculine form of the gender-specific third-person pronouns (he, him, his, his, himself) is consistently used throughout this book whenever they refer to an indefinite person. This convention has been assumed to avoid awkward slash constructions (he/she, him/her, his/her, his/hers, himself/herself) or alternated use of masculine and feminine pronouns. It should always be interpreted as inclusive with respect to persons of both sexes.

Contents

Preface
1 Introduction
   1.1 Science, technology and technoscience
   1.2 Contents and objectives of the book
   1.3 Profile, style and structure of the book
2 Selected concepts of logic and philosophy
   2.1 Typology of definitions
   2.2 Elements of formal logic
      2.2.1 Statements and arguments
      2.2.2 Elementary rules of deductive inference
      2.2.3 Advanced rules of deductive inference
      2.2.4 Rules of inductive and abductive inference
   2.3 Definition and structure of philosophy
      2.3.1 Definition of philosophy
      2.3.2 Structure of philosophy
   2.4 Key concepts of epistemology
      2.4.1 Truth
      2.4.2 Knowledge
      2.4.3 Scientific laws and theories
3 Science in historical perspective
   3.1 Protoscience
      3.1.1 Ancient Egypt and Mesopotamia
      3.1.2 Ancient Greece
      3.1.3 Roman Empire
      3.1.4 Arab empire
      3.1.5 Medieval Europe
   3.2 Classical science
      3.2.1 From Copernicus to Newton
      3.2.2 From natural philosophy to sciences
      3.2.3 From theory to practice
      3.2.4 From steady growth to revolution
   3.3 Technoscience
4 Philosophy of science in historical perspective
   4.1 Philosophy of protoscience
   4.2 Philosophy of classical science
   4.3 Philosophy of science in the first half of the twentieth century
   4.4 Philosophy of science in the second half of the twentieth century
   4.5 Recent trends in philosophy of science
      4.5.1 Neo-pragmatism
      4.5.2 Naturalism
      4.5.3 Scientific realism
      4.5.4 Bayesianism
      4.5.5 Specialisation
5 Mathematical modelling
   5.1 Models in science
   5.2 Methodology of mathematical modelling
   5.3 Typology and examples of mathematical models
   5.4 Computational modelling
   5.5 Cognitive status of mathematical modelling
      5.5.1 Adequacy and accuracy of mathematical models
      5.5.2 Mathematical models and instrumentalism
6 Measurement
   6.1 Basic concepts of measurement science
   6.2 System of quantities and measurement units
   6.3 Mathematical meta-model of measurement
      6.3.1 Development step #1
      6.3.2 Development step #2
      6.3.3 Development step #3
   6.4 Interpretation of key concepts of measurement science in terms of mathematical modelling
      6.4.1 Measurand
      6.4.2 Calibration
      6.4.3 Measurement uncertainty
7 Scientific explanation
   7.1 Institutional aims of science
   7.2 Scientific explanation versus scientific prediction
   7.3 Nomological explanation
   7.4 Causal explanation
   7.5 Computer-aided explanation
   7.6 Explanatory pluralism
8 Context of discovery
   8.1 Discovery versus invention
   8.2 Theoretical aspects of technoscientific creativity
   8.3 Practical aspects of technoscientific creativity
   8.4 Generation of scientific hypotheses
   8.5 Preselection of scientific hypotheses
9 Context of justification
   9.1 Preliminary considerations
      9.1.1 Basic concepts
      9.1.2 Introductory example
   9.2 Underdetermination and Duhem–Quine thesis
   9.3 Transformation of hypotheses into scientific knowledge
      9.3.1 Inference to best explanation
      9.3.2 Scientific evidence and confirmation
      9.3.3 Bayesian confirmation
   9.4 Research methods and research methodology
      9.4.1 Basic concepts
      9.4.2 Criteria of good research
      9.4.3 Contents of research methodology
10 Uncertainty of scientific knowledge
   10.1 Preliminary considerations
   10.2 Critical versus naïve understanding of scientific method
   10.3 Demarcation problem
   10.4 Uncertainty of scientific reasoning
   10.5 Uncertainty of observation
   10.6 Measurement uncertainty
      10.6.1 Basic methods for evaluation of measurement uncertainty
      10.6.2 Advanced methods for evaluation of measurement uncertainty
   10.7 Mitigation of uncertainty of scientific knowledge
11 Basic concepts of Western ethics
   11.1 Motivations and objectives
   11.2 Elements of metaethics and general ethics
      11.2.1 Good, value, morality
      11.2.2 Ethics and metaethics
      11.2.3 Monistic approaches to ethics
      11.2.4 Moral responsibility
   11.3 Relation of ethics to humanities, sciences, law and religion
      11.3.1 Relation of ethics to other philosophical disciplines
      11.3.2 Relation of ethics to sciences
      11.3.3 Relation of ethics to law and religion
12 Western ethics in historical perspective
   12.1 Ancient and medieval concepts of ethics
      12.1.1 Ancient code-based ethics
      12.1.2 Ethical intellectualism of Socrates
      12.1.3 Eudaimonism of Plato
      12.1.4 Virtue ethics of Aristotle
      12.1.5 Moralism of stoics
      12.1.6 Christian ethics
      12.1.7 Ethics of natural law
   12.2 Modern concepts of ethics
      12.2.1 Ethics of human rights
      12.2.2 Moral sentimentalism of David Hume
      12.2.3 Formal ethics of Immanuel Kant
      12.2.4 Utilitarian ethics
      12.2.5 Material ethics of value
      12.2.6 Ethics of discourse
      12.2.7 Ethics of justice
      12.2.8 Weltethos or universal ethics
13 System of values associated with science
   13.1 Preliminary considerations
   13.2 Typology of values
      13.2.1 General characteristics of values
      13.2.2 Epistemic and utilitarian values
      13.2.3 Ethical and social values
      13.2.4 Trust in science
   13.3 Conflicts of values
14 Principles of moral decision-making
   14.1 Moral dilemma
   14.2 Selected patterns of ethical thinking
      14.2.1 General scheme of moral decision-making
      14.2.2 Selected tools of ethical thinking
   14.3 Decision theory versus ethical thinking
15 General issues of research ethics
   15.1 Metaethical assumptions
   15.2 Typology and aetiology of research misconduct
      15.2.1 Typology of research misconduct
      15.2.2 Aetiology of research misconduct
   15.3 Evolution of research ethics
   15.4 Choice of a research problem
   15.5 Choice of research methodology
      15.5.1 Ethical background of research methodology
      15.5.2 Laboratory notebook
16 Ethical aspects of experimentation
   16.1 Typology of experiment-related misconduct
   16.2 Experimentation with involvement of humans and animals
      16.2.1 Experimentation with involvement of humans
      16.2.2 Experimentation with involvement of animals
   16.3 Acquisition and processing of experimental data
      16.3.1 Fabrication, falsification and theft of data
      16.3.2 Intentional misinterpretation of data
   16.4 Technical infrastructure of experimentation
      16.4.1 Engineering aspects of experimentation
      16.4.2 Safety aspects of experimentation
17 Ethical aspects of information processing
   17.1 Preliminary considerations
   17.2 Scientific discussion
      17.2.1 Principles of rational discussion
      17.2.2 Fallacious argumentation
      17.2.3 Eristic or art of being right
   17.3 Publication of research results
      17.3.1 Content and form of scientific publications
      17.3.2 Authorship of scientific publications
      17.3.3 Publication policy and its ethical implications
   17.4 Reviewing process
      17.4.1 General principles
      17.4.2 Editorial practices
      17.4.3 Decline of scientific criticism
   17.5 Research grant applications
18 Legal protection of intellectual property
   18.1 Basic concepts related to intellectual property
   18.2 Legal protection of author's rights
      18.2.1 Subject and owner of author's rights
      18.2.2 Moral versus economic author's rights
      18.2.3 Scope of personal permissible use
      18.2.4 Citation rules
   18.3 Legal protection of inventor's rights
      18.3.1 Subject of inventor's rights
      18.3.2 Moral and economic inventor's rights
      18.3.3 Patenting procedure
      18.3.4 Limits of inventor's rights
      18.3.5 Justification of patent system
   18.4 Critical analysis of legal protection of economic author's and inventor's rights
      18.4.1 General philosophical argumentation
      18.4.2 Argumentation referring to differences between material and intellectual property
      18.4.3 Critical analysis of copyright
      18.4.4 Critical analysis of patent system
   18.5 Future of legal protection of intellectual property
19 Ethical issues implied by information technologies
   19.1 Information technology in the age of globalisation
   19.2 Overview of ethical issues related to use of information technology
   19.3 Information technology in research practice
   19.4 Netiquette or internet ethics
      19.4.1 Netiquette rules
      19.4.2 Netiquette versus ethics of journalism
20 Concluding remarks
   20.1 Evolution of research methodology and research ethics
   20.2 Education in research methodology and research ethics
Appendix: Milestones in the history of science
References
Index of Names
Index of Terms

1 Introduction

1.1 Science, technology and technoscience

Science1 is usually understood as a systematic enterprise aimed at generating and organising knowledge in the form of testable explanations and predictions about the natural world, which comprises all the components of the physical universe (atoms, ecosystems, people, societies, galaxies, etc.), as well as the natural forces working on those components. There is no generally agreed list of the characteristics of science; however, some of its basic features can be enumerated as follows:
– science possesses a specific language (with terms whose meaning and reference are precisely defined);
– science has cognitive aims whose realisation is articulated in scientific theories with a well-patterned internal structure which is always open to change;
– in principle, scientific knowledge is subject to a more rigorous scrutiny than knowledge of any other kind;
– scientific activity follows a method (called scientific method) when generating knowledge and seeking to ensure its truthlikeness;
– scientific activity is a social activity whose nature is different from other activities in assumptions, contents and limits;
– scientific activity is a free human activity, and therefore can be subject to evaluation referring to epistemic, pragmatic and ethical criteria.2

Science may be viewed as just one out of many human systems of beliefs (including the system of Voodoo magic) with no justified claim to any special epistemic status; however, the predictive success of mature scientific theories, observed both in laboratories and in everyday practice, is so incongruous with this view that it cannot be taken seriously3.

Science itself may be a subject of research: science studies and philosophy of science are the two most important meta-scientific research areas dealing with science from various perspectives.

1 from Latin scientia = “knowledge”.
2 W. J. Gonzalez, “Novelty and Continuity in Philosophy and Methodology of Science”, [in] Contemporary Perspectives in Philosophy and Methodology of Science (Eds. W. J. Gonzalez, J. Alcolea), Netbiblo, S.L., 2006, pp. 1–28.
3 J. Worrall, “Philosophy of Science: Classic Debates, Standard Problems, Future Prospects”, [in] The Blackwell Guide to the Philosophy of Science (Eds. P. Machamer, M. Silberstein), Blackwell, Malden (USA) – Oxford (UK) 2002, pp. 18–36.


The first of them is an interdisciplinary research area whose aim is to analyse scientific expertise in broad social, historical and philosophical contexts, i.e. to analyse the production, representation and reception of scientific knowledge, as well as its epistemic and semiotic role4. The second of them, philosophy of science, is a philosophical discipline concerned with the foundations, methods and implications of science, i.e. the discipline trying to answer questions about the definition of science, the reliability of scientific knowledge and the ultimate purpose of science.

The branches of science distinguished according to the subject of study are called sciences, and are usually classified into three groups:
– natural sciences, aimed at the study of natural phenomena (e.g. astronomy, physics, chemistry, biology and geology);
– social sciences, aimed at the study of human behaviour and social patterns (e.g. anthropology, psychology, sociology and economics);
– formal sciences, aimed at the study of abstract entities (e.g. mathematics and logic, theoretical computer science, information theory and systems theory).

According to the Anglo-Saxon tradition, humanities do not count as sciences. So, the study of various artefacts of human culture and of various cultures themselves – philosophy and the study of languages, literature, religion, arts, history, law, etc. – remains beyond science. The corresponding terms in some other European languages – e.g. Wissenschaft in German or nauka in Polish – have a broader meaning since they encompass all the academic disciplines, including humanities.

The sciences whose explicit goal is to generate knowledge and support understanding, regardless of potential applications, are called pure sciences or basic sciences. The sciences whose explicit goal is to solve practical problems, e.g. to develop new technologies, are called applied sciences. The boundary between pure and applied sciences is rather blurred: research findings in pure sciences often have useful applications, and intentionally applied research may contribute to a better understanding of the natural world.

Applied sciences today play an important role as a bridge between pure sciences and technology5, usually understood as the making, modification and usage of tools, machines, techniques or systems, aimed at solving practical problems or improving existing solutions to those problems. Science and technology frequently contribute to one another: scientific advances lead to the development of new technologies, and new technologies broaden the experimental potential of science, enabling the advancement of research. This is the rationale behind the introduction of the concept of technoscience, which addresses the integration of science and technology that has been progressing since the beginning of the twentieth century (cf. Section 3.3 for more details).

4 for more details, cf. the Wikipedia article “Science Studies” available at https://en.wikipedia.org/wiki/Science_studies [2018-05-20].
5 from Greek techne = “art” or “skill” + logos = “word” or “thought”.


The classification of technoscientific knowledge, and the corresponding classification of technoscientific disciplines, has been evolving since the times of Aristotle (384–322 BC) – the ancient Greek philosopher who made the first systematic attempt to distinguish, within philosophy, a metaphysical representation of reality from the clusters of more specific knowledge that would today be recognised as astronomy, physics, geology, biology and psychology. Various recent classifications of sciences may be found in universal encyclopaedias; the classification given in Wikipedia, for example, is consistent with the taxonomy of sciences provided in this section6. In managerial and administrative practice, however, a classification of technoscientific disciplines – put forward by the Organisation for Economic Cooperation and Development (OECD) in 2002 and revised in 2007 – seems to be more favoured. This classification (cf. Table 1.1) is logically deficient in various ways: the inclusion of mathematics and information sciences in natural sciences, the exposition of dairy science in the absence of food science and the separation of ethics from philosophy are just a few examples. It maps the structure of the economy better, however.

1.2 Contents and objectives of the book

In this book, an attempt is made to present, under a common umbrella, selected methodological and ethical aspects of technoscientific research. It is motivated by the conviction that these two aspects are so closely interwoven that their treatment in isolation must expose both the writer and the reader to insurmountable difficulties. After all, compliance with methodological standards and requirements is an ethical imperative, and research honesty and reliability – a methodological challenge.

Philosophy and science are often perceived as quite separate enterprises, seemingly having little to do with one another: philosophy is viewed as a search for answers to abstract questions, concerning such issues as the meaning of life, while science is understood as a study of more down-to-earth matters. It turns out, however, that philosophy is omnipresent in science. There are not only philosophical problems concerning foundational concepts and tools used by scientists but also philosophical issues which are directly relevant to, and directly affect, the day-to-day work of scientists.7

6 cf. the Wikipedia article “Branches of Science” available at https://en.wikipedia.org/wiki/Branches_of_science [2018-05-24].
7 R. DeWitt, “Philosophies of the Sciences: A Guide”, [in] Philosophies of the Sciences, Wiley-Blackwell, Chichester (UK) 2010, pp. 9–37.


Table 1.1: OECD Frascati classification of science and technology (the revised version of 2007).

1. Natural Sciences: mathematics; computer and information sciences; physical sciences; chemical sciences; earth and related environmental sciences; biological sciences; other natural sciences.

2. Engineering and Technology: civil engineering; electrical, electronic and information engineering; mechanical engineering; chemical engineering; materials engineering; medical engineering; environmental engineering; environmental biotechnology; industrial biotechnology; nanotechnology; other engineering and technologies.

3. Medical and health sciences: basic medicine; clinical medicine; health sciences; health biotechnology; other medical sciences.

4. Agricultural Sciences: agriculture, forestry and fisheries; animal and dairy science; veterinary science; agricultural biotechnology; other agricultural sciences.

5. Social Sciences: psychology; economics and business; educational sciences; sociology; law; political science; social and economic geography; media and communications; other social sciences.

6. Humanities: history and archaeology; languages and literature; philosophy, ethics and religion; art (arts, history of arts, performing arts, music); other humanities.

Source: Revised field of science and technology (FOS) classification in the Frascati manual, OECD Working Party of National Experts on Science and Technology Indicators, February 26, 2007, http://www.oecd.org/sti/inno/38235147.pdf [2018-05-24].

Almost until the end of the twentieth century, the development of philosophy of science had been predominantly shaped by the advancement of physics (including astrophysics), especially by the research practices specific to that domain. Since the evolution of other scientific disciplines – in spite of the twentieth-century prophecies and twentieth-century endeavours – had not always followed the paradigms of physics, it turned out at the end of the twentieth century that certain methodological guidelines of the philosophers of science did not match the reality of the most dynamic fields, such as information sciences, life sciences and cognitive sciences. The philosophical discourse was, in particular, still focused on laws and theories, while for those sciences semantic, mathematical and computational models were more important. In this book, a cautious attempt is made to accommodate the methodological needs resulting from this situation.

Philosophy of science is the study of the assumptions, foundations and implications of science.


Most scientists have not heard of any of the philosophers of science, and blindly use the scientific method, without realising how it has grown and developed. Why? The American physicist and 1965 Nobel Prize winner Richard P. Feynman (1918–1988) provided a simple, although not necessarily accurate, explanation: “Philosophy of science is about as useful to scientists as ornithology is to birds”8. At the same time, however, he complained that very many researchers imitate research rituals, i.e. carry out research activities in an apparently correct way, but without understanding their essence, and – consequently – without any chance of obtaining meaningful results and using them in a productive manner9.

Does philosophy of science really matter for research practitioners? The following opinion on the significance and educational value of the history and philosophy of science – expressed in 1944 by another great physicist, Albert Einstein (1879–1955) – cannot be passed over:

. . . many people today – and even professional scientists – seem to me like somebody who has seen thousands of trees but has never seen a forest. Knowledge of the historic and philosophical background gives that kind of independence from prejudices [. . .] from which most scientists are suffering. This independence created by philosophical insight is – in my opinion – the mark of distinction between a mere artisan or specialist and a real seeker after truth.10

Since for centuries philosophy of science focused on physics and ignored other fields of experimental inquiry, the representatives of the latter might perceive philosophical recommendations as ill-suited to their needs and practices. It seems, however, that today – in the aftermath of the convergence of traditionally separated disciplines – they can benefit from the transfer of certain methodologies developed in physics. If so, all scientists – beginners and mature researchers alike – may profit from this book by learning a conceptual basis and vocabulary of scientific discourse: the language useful for comprehensive formulation of research problems and description of research methodologies, indispensable in reviewing practice and scientific debates.

1.3 Profile, style and structure of the book

This book deals with the methodological and ethical aspects of creative activities in empirical sciences, such as research planning and design of experiments, exchange of scientific and technical information, as well as documentation, publication and patenting of research results.

8 quoted after the Wikiquote article “Talk: Richard Feynman” available at https://en.wikiquote.org/wiki/Talk:Richard_Feynman [2018-05-20].
9 R. P. Feynman, “Cargo Cult Science”, Engineering and Science, June 1974, pp. 10–13.
10 quoted after the article: M. Pigliucci, “Must Science Be Testable?” Website “Aeon”, August 10, 2016, https://aeon.co/essays/the-string-theory-wars-show-us-how-science-needs-philosophy [2018-05-20].


It is basically devoted to those features of scientific research that are common to all empirical sciences; the illustrative examples, however, often refer to issues specific to particular disciplines. In principle, methodological and ethical issues of management and business activities related to conducting research, as well as issues concerning the risk of introducing new technologies based on research results, remain beyond its substantive scope. Even if the latter appear in some examples, they are not subject to systematic and exhaustive analysis.

Since a considerable part of the contents of this book is of a philosophical or scientific nature, some readers – especially those trained in formal sciences (mathematics, information theory or systems theory) – may expect that at least the subject-matter concerning the scientific method will be introduced using the so-called geometrical style of presentation. However, some other readers – especially those with an educational background in social sciences – may be afraid of being confronted with such a pattern of presentation. Both groups of readers should be partially satisfied because the style of presentation applied in this book is a kind of hybrid, frequently met in natural sciences. The following definition of the geometrical method should help explain the motivation behind this choice (a brief illustration of the style follows the footnote below):
– The geometrical method is the style of writing that was used in Euclid's Elements; the characteristic items of a text written in this style are definitions, axioms, postulates, propositions and deductive proofs (also called demonstrations).
– The axioms are statements that everybody will admit as self-evidently true, while postulates are statements which are hypothetically claimed as long as nobody objects.
– The definition of any new concept (or term) may refer only to the definitions of already defined concepts (or terms).
– The proof of any new proposition may refer only to the definitions, axioms, postulates and already proven propositions.11

11 U. Goldenbaum, “The Geometrical Method”, [in] Internet Encyclopedia of Philosophy (Eds. J. Fieser, B. Dowden), http://www.iep.utm.edu/geo-meth/ [2016-09-27].
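To make the convention above more tangible, here is a minimal, purely illustrative sketch of a text fragment organised in the geometrical style; the particular definition, axiom and proposition are invented for this example and do not come from the book.

```latex
% A minimal illustration of the geometrical style of presentation:
% definitions, axioms and propositions are numbered, and every proof
% may refer only to items stated earlier.
\textbf{Definition 1.} A natural number $n$ is \emph{even} if there exists
a natural number $k$ such that $n = 2k$.

\textbf{Axiom 1.} For all natural numbers $a$ and $b$, the sum $a + b$
is a natural number.

\textbf{Proposition 1.} The sum of two even numbers is even.

\textbf{Proof.} Let $m = 2k$ and $n = 2l$ (Definition 1). Then
$m + n = 2(k + l)$, and $k + l$ is a natural number (Axiom 1);
hence $m + n$ is even by Definition 1. Q.E.D.
```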


Some additional difficulties emerge when the geometrical method is used in philosophical books aiming at the presentation of a diversified landscape of a given discipline (rather than the presentation of a selected coherent system of thoughts), i.e. in books whose essence lies in the confrontation of alternative assumptions, approaches and methodologies evolving in time. For these reasons, a more flexible, iterative mode of presentation has been applied in this book. It is assumed that the readers – first of all, graduate students of science and engineering – have an intuitive understanding of the key concepts of philosophy and logic, formed in the process of their lower-level education. So, those concepts may be used without explicit definitions in some preliminary considerations (like those in this chapter) until a convenient context appears for providing their formal definitions followed by examples of application. The first formal definitions, viz. the definitions of the key concepts of philosophy and logic, are presented in Chapter 2. The first opportunity for their advanced application to the subject-matter of this book appears in Chapter 4, which is devoted to a historical overview of great ideas of philosophy of science, just after an outline of the history of science, provided in Chapter 3 with the aim of bringing an ostensive definition of the concept of science and preparing background material for examples distributed throughout the book. Chapters 5–10 are essentially devoted to methodological aspects of scientific research, and Chapters 11 and 12 – to general ethics. Chapters 13–19 are focused on ethical aspects of research, which are, however, presented in close liaison with a methodological context.

Because of the significant philosophical contents of this book, on the one hand, and its educational orientation, on the other, the definitions of numerous highly abstract concepts (and related terms) play an important role, especially in the chapters introducing methodological aspects of technoscientific research (Chapters 2 and 4) and its ethical aspects (Chapters 11 and 12). For the readers trained on technoscientific literature, this may seem a bit burdensome, but they will quickly discover the beneficial impact of this difficult experience when approaching more practical problems, especially those which require profound logical argumentation. As the British neuroscientist Rodrigo Q. Quiroga explained in his 2017 book12, we not only think with concepts, but we also use them for remembering our external and internal experience. This enables us both to compress memorised data and to make thinking more efficient. People who remember detailed data, instead of their conceptual syntheses, may face serious difficulties with thinking . . .

12 R. Q. Quiroga, The Forgetting Machine: Memory, Perception, and the Jennifer Aniston Neuron, BenBella Books, Dallas (USA) 2017.
13 This is the author's translation of the sentence “Mal nommer un objet c'est ajouter au malheur de ce monde” which appears in: A. Camus, “Sur une philosophie de l'expression”, Poésie, 1944, Vol. 44, No. 17, p. 22.


The famous statement “By giving the wrong name to an object, we are contributing to the misfortune of this world”13 points to the essence of this problem. The definitions of many terms provided in this book refer to their etymological origin with the purpose of counteracting the historical evolution of their meaning, accelerated by the decline of linguistic education in the Western countries, including the disappearance of Greek and Latin from the curricula of primary and secondary schools. It happens, more and more frequently, that common words radically change their meanings over time. A common awareness of their etymological origin may slow down this process. Moreover, it can remind us about the semantic background of the corresponding terms in various languages. The English term expert, for example, indicates a person who is “experienced” or “skilful”, while the corresponding German term Gutachter indicates a person “paying well-focused attention”. Both aspects of the expert role are important in both languages, but one may guess that the relevant semantic intuitions of English and German native speakers may be slightly different.

The originality of this book is mainly of a structural nature, i.e. it differs from other books on philosophy of science and research ethics in the way the material – available in already published documents – has been structured. First of all, this book offers an integrated treatment of both methodological and ethical aspects of research. Secondly, it emphasises – more than usual – the role of mathematical modelling and measurement in scientific reasoning. Thirdly, it devotes much more attention than other sources to the uncertainty of scientific knowledge and moral decisions.

Since the originality of this book is mainly of a structural nature, the liaison of its contents with the sources of information is of particular importance. Sources of various categories are referred to, in particular: original research papers, M.Sc. and Ph.D. theses, review papers and monographs, handbooks and encyclopaedias, including Wikipedia. The latter is referred to as the source most easily accessible among those offering roughly the same factual information. This approach is in line with a general recommendation which may be expressed in the following way: “Wikipedia articles should be used for background information, as a reference for correct terminology and search terms, and as a starting point for further research”14.

The system of citation applied in this book comprises the footnotes and the alphabetic list of cited sources at the end of the book. The following rules of citation are consistently applied throughout this book:
– the number of the footnote addressing the contents of a single sentence appears inside this sentence;
– the number of the footnote addressing the contents of a whole paragraph appears after the full stop of the last sentence of this paragraph.

14 as stated on the website of the US National Cancer Informatics Program, located at https://nciphub.org/wiki/Special:Cite [2018-05-21].

2 Selected concepts of logic and philosophy

This chapter contains an overview of the concepts of logic and philosophy which have been selected in such a way as to facilitate the introduction of methodological and ethical issues in the following chapters of the book. By no means can this overview replace the handbooks1 and encyclopaedic sources2 providing more complete coverage of the conceptual basis of logic and philosophy.

2.1 Typology of definitions

Definitions play an important role in philosophy, technoscience and everyday communication. They are, in particular, essential for establishing relationships between things or ideas and their names, for which purpose they are collected in handbooks, glossaries, dictionaries and other reference tools3. Given these various contexts and uses of definitions, there is, understandably, no general agreement about what a definition is, what kind of knowledge it represents and conveys, and what quality criteria it must satisfy4.

This section is devoted to a typology of the definitions of concepts and terms, where a concept is understood as a general notion or idea, i.e. as a constituent of the realm of thoughts, and a term – as a word or group of words (phrase) designating (labelling) the corresponding concept, i.e. as a constituent of a natural or artificial language. The typology of the definitions presented here has been simplified5 in such a way as to make it fit the practical needs of the researchers who are active in technoscience.

The definition of a concept is a specification of its essential content; the definition of a term is a specification of its meaning only. Both kinds of definitions have

1 e.g. Oxford Handbooks online – Philosophy, http://www.oxfordhandbooks.com/page/philosophy [2018-07-17]; M. David, “Theories of Truth”, [in] Handbook of Epistemology (Eds. I. Niiniluoto, M. Sintonen, J. Woleński), Springer, Dordrecht 2004, pp. 331–414.
2 e.g. The Stanford Encyclopedia of Philosophy available at https://plato.stanford.edu/ [2018-07-17]; Internet Encyclopedia of Philosophy available at https://www.iep.utm.edu/home/about/ [2018-07-17].
3 J. C. Sager (Ed.), Essays on Definition, John Benjamins Pub., Amsterdam – Philadelphia 2000.
4 A. Rey, “Defining Definition”, [in] Essays on Definition (Ed. J.C. Sager), John Benjamins Pub., Amsterdam – Philadelphia 2000, pp. 1–14.
5 in comparison to more exhaustive typologies provided in encyclopaedic sources such as R. Audi (Ed.), The Cambridge Dictionary of Philosophy, Cambridge University Press, Cambridge (UK) 1999, pp. 213–215; or anthologies such as J. C. Sager (Ed.), Essays on Definition, 2000; or handbooks of logic, such as W. Marciszewski, Dictionary of Logic as Applied in the Study of Language: Concepts/Methods/Theories, Springer, Dordrecht 1981; A. Arnauld, P. Nicole, J. V. Buroker, Logic or the Art of Thinking, Cambridge University Press, Cambridge (UK) 1996.



the same structure – they are composed of two parts: the word or group of words to be defined, called definiendum6, and the word or group of words or action that defines it, called definiens7. For the sake of simplicity, the typology of definitions, provided in the next paragraphs will, therefore, refer to terms only. In everyday discourse we use, quite frequently, lexical and ostensive definitions. The lexical definition of a term is that which specifies its conventional meaning in a way it is done in the dictionaries of a natural language. The ostensive definition of a term specifies it by indicating an exemplary referent, e.g. the ostensive definition of the term yellow may have the form of the statement “The term yellow applies to all colours which are similar to the colour of this spot on my tie” accompanied by the gesture of pointing to that spot. The definitions used in technoscientific discourse belong to two categories: nominal and real definitions. The nominal definition of a term is providing its linguistic meaning only, while its real definition is specifying necessary and sufficient conditions for being the kind of thing this term designates. Nominal definitions are very helpful in technoscience as they enable us to replace a long expression with a shorter one; they may be compared to black boxes in mathematical modelling of physical objects and phenomena. The most important types of nominal definitions are specified by the following sentences: – the nominal definition by etymology is tracing the linguistic origin of the term; – the nominal definition by description is describing the term; – the nominal definition by synonym is giving a word equivalent to the term; – the nominal definition by example is citing anything that will represent the term. Example 2.1: These are four nominal definitions of physics: – the term physics is derived from Greek ta physika which means “the natural things” (by etymology); – physics is a science (by description); – physics is what was called natural science three centuries ago (by synonym); – physics is involved in research of mechanical phenomena (by example).

In contrast to nominal definitions, real definitions provide deeper insight into the essence of defined terms; they may be compared to white boxes in mathematical modelling of physical objects and phenomena. The most important types of real definitions are specified by the following sentences: – the real definition by genus and specific difference, i.e. by explaining the essence of a term by considering the intelligible elements that make up the defined term;

6 from Latin neuter gerundive of definire = “to mark the limits of”. 7 from Latin present participle of definire = “to mark the limits of”.



– the real definition by description, i.e. by stating the genus of the term but altering the specific difference by giving the logical property which belongs to the term to be defined; – the real definition by cause, i.e. by stating the genus of the term but altering the specific difference by tracing its cause, purpose, function or origin. Example 2.2: Here are examples of the above-listed types of real definitions: – “A triangle is a polygon with three sides” exemplifies the real definition by genus and specific difference, where the noun “polygon” is genus and the expression “three sides” is a specific difference. – “A soldier is a man bestowed with the duty to defend the country” where the noun “man” is the genus and the expression “bestowed with the duty to defend the country” is its logical property. – “A research report is a written material made up of hundreds of pages, being a source of information on a research project” in which “written material” is the genus, and “a source of information on a research project” is the cause or reason of its appearance.

A real definition is expected to meet the following requirements: – It should not be too narrow, e.g. it should not define bachelor as “unmarried adult male worker” because some bachelors are not workers. – It should not be too broad, e.g. it should not define bachelor as “unmarried adult” because not all unmarried adults are bachelors. – It should not be circular, i.e. the definiens should not contain terms which are synonymous with the definiendum, e.g. it should not describe musicality as “the quality or state of being musical”. – It should not be obscure because its purpose is to explain the meaning of a term which may be obscure or difficult. – It should not be negative where it can be positive, e.g. it should not describe wisdom as “the absence of silliness”. Another important dichotomic classification of definitions distinguishes intensional definitions and extensional definitions. The intensional definition of a term is the definition which gives the meaning of this term by specifying necessary and sufficient conditions for when it should be used. In the case of nouns, this is equivalent to specifying the properties that an object needs to have in order to be counted as a referent of the term. An intensional definition specifies the necessary and sufficient conditions for a thing being a member of a specific set. Any definition that attempts to set out the essence of something, such as that by genus and differentia, is an intensional definition. The extensional definition of a term is the definition which contains the list of all the things to which the term applies. An extensional definition of a term formulates its meaning by specifying its extension, i.e., every object that falls under the definition of this term.
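The contrast between intensional and extensional definitions can be made concrete for finite universes of discourse. The following minimal Python sketch (the universe, the predicate and the listed set are invented for illustration and are not taken from the book) represents an intensional definition as a membership condition and an extensional definition as an explicit enumeration, and checks that the two determine the same set.

```python
# Universe of discourse: a small, invented set of natural numbers.
universe = range(1, 21)

# Intensional definition of "even number below 21":
# a necessary and sufficient condition for membership.
def is_even(n):
    return n % 2 == 0

# Extensional definition of the same term: the full list of referents.
evens_extensional = {2, 4, 6, 8, 10, 12, 14, 16, 18, 20}

# Over this universe, both definitions pick out exactly the same objects.
evens_intensional = {n for n in universe if is_even(n)}
assert evens_intensional == evens_extensional
print(sorted(evens_intensional))
```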



For the sake of efficient communication of abstract ideas, the so-called stipulative definitions are used quite frequently in technoscientific texts. A stipulative definition is a definition in which a new or currently existing term is given a new specific meaning for the purposes of argument or discussion in a given context. It imparts a meaning to the defined term and involves no commitment that the assigned meaning agrees with prior uses (if any) of that term.

2.2 Elements of formal logic

2.2.1 Statements and arguments

Logic8 is a formal science involved in the study of correct reasoning or the study of valid arguments. It used to be a branch of philosophy, and even today it is sometimes specified as such. In logic, statements are usually understood as declarative sentences, i.e. sentences which assert or deny something, and which are true or false. The term proposition is generally treated as a synonym of the term statement. Logical positivists, however, made a distinction by restricting propositions to statements which are meaningful in the sense of the so-called verificationist theory of meaning. For this reason, the term statement is used in this résumé of propositional logic. The statements will be denoted here with capital letters, e.g. P, Q, R, ... The symbol P(x) means that the statement P depends on a variable x; the symbol P(x, y) – that it depends on two variables x and y. The variable-dependent statements are called predicates; thus, P(x) is the predicate on x, and P(x, y) is the predicate on x and y.

Example 2.3: If the symbol P(x, y) stands for “x likes y”, where x ∈ {John, Mary} and y ∈ {coffee, tea}, then the predicate P(x, y) may have four atomic realisations, viz.: “John likes coffee”, “Mary likes coffee”, “John likes tea” and “Mary likes tea”.

The elementary logical operations on statements P and Q (as well as on the corresponding predicates) – i.e. conjunction, disjunction, negation and implication – are denoted here with the following symbols: P ∧ Q, P ∨ Q, ¬P and P → Q.
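For readers who prefer executable notation, the four connectives can be mimicked with ordinary Boolean operations. The short Python sketch below is an illustration added here (not the author's code); it prints the truth table of the implication P → Q, which is false only when P is true and Q is false.

```python
# Elementary logical operations on truth values:
# conjunction, disjunction, negation and (material) implication.
def conj(p, q): return p and q
def disj(p, q): return p or q
def neg(p):     return not p
def impl(p, q): return (not p) or q   # P -> Q is false only for P=True, Q=False

# Truth table of the implication P -> Q.
for p in (True, False):
    for q in (True, False):
        print(f"P={p!s:5} Q={q!s:5}  P->Q={impl(p, q)}")
```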

8 from Greek logos = “word” or “thought”.

2.2 Elements of formal logic

13

P1 , P2 , ..., PN ) Q where the commas denote conjunctions, and the symbol “)” denotes implication guaranteeing that Q is true if each of the premises P1 , P2 , ..., PN is true. The use of alternative symbols for conjunction and implication, at this level of notation, improves the legibility of arguments by reducing the number of necessary brackets and parentheses.

2.2.2 Elementary rules of deductive inference There is a number of elementary arguments, called rules of deductive inference, which constitute elements of more complex arguments. The most important of them are (in alphabetic order): addition, association (two variants), bidirectional dilemma, commutation (three variants), composition, conjunction, constructive dilemma, de Morgan’s theorem (two variants), destructive dilemma, disjunctive syllogism, distribution (two variants), double negation, exportation, hypothetical syllogism, importation, law of non-contradiction, material equivalence (three variants), material implication, rule of detachment (in Latin: modus ponens), rule of contrapositive (in Latin: modus tollens), simplification, tautology (two variants), law of excluded middle (in Latin: tertium non datur) and transposition. Here only four of them – disjunctive syllogism, hypothetical syllogism, rule of detachment and rule of contrapositive – are briefly recalled since they will be referred to in the chapters to follow. The disjunctive syllogism is defined by the following formula:  )Q P _ Q, P

Example 2.4: Premise #1: John can log on to his Google account using either a password or the e-mail address (P _ Q).  Premise #2: John has no password, so he cannot log on using a password (P). Conclusion: John can log on to his Google account using the e-mail address (Q).

The hypothetical syllogism (called also chain deduction) is defined by the following formula: P ! Q, Q ! R ) P ! R

Example 2.5: Premise #1: If it rains (P), then I shall not go to school (Q). Premise #2: If I don’t go to school (Q), then I won’t need to do homework (R). Conclusion: If it rains (P), I won’t need to do homework (R).

14

2 Selected concepts of logic and philosophy

The rule of detachment (called also modus ponens) is defined by the following formula: P ! Q, P ) Q

Example 2.6: Premise #1: If John has the password (P), then he can log on to his Google account (Q). Premise #2: John has the password (P). Conclusion: John can log on to his Google account (Q).

The most notorious fallacy, related to the rule of detachment, is that of denying the antecedent:  ðsic!Þ  )Q P ! Q, P

Example 2.7: Premise #1: If you are shoemaker (P), then you have a job (Q).  Premise #2: You are not a shoemaker (P).  Conclusion: You have no job (Q).

The rule of the contrapositive (called also modus tollens) is defined by the following formula:  )P  P ! Q, Q

Example 2.8: Premise #1: If John has the password (P), then he can log on to his Google account (Q).  Premise #2: John cannot log on to his Google account (Q).  Conclusion: John has no password (P).

The most notorious fallacy, related to the rule of the contrapositive, is that of affirming the consequent: P ! Q, Q ) P ðsic!Þ

Example 2.9: Premise #1: If I read all night (P), then I get tired (Q). Premise #2: I got tired (Q). Conclusion: I read all night (P).

2.2 Elements of formal logic

15

The assertion that a statement P is a necessary and sufficient condition of a statement Q means that the statement P is true if and only if the statement Q is true, i.e. both statements must be either simultaneously true or simultaneously false.

2.2.3 Advanced rules of deductive inference A predicate is an assertion containing one or more variables such that, if the variables are replaced with specific elements of a given universal set X, then it takes on the form of a statement. Some important logical operations on predicates are performed by means of quantifiers, i.e. prefixed operators that bind variables in a logical formula by specifying their quantity: – The universal quantifier, denoted with the symbol ∀, appears in the structure of the form: ∀x 2 X : PðxÞ, which means that the predicate with one variable PðxÞ is true for all x 2 X. – The existential quantifier, denoted with the symbol 9, appears in the structure of the form: 9x 2 X : PðxÞ, which means that there exists x 2 X such that PðxÞ is true. There are some alternative ways to express both quantifiers in the natural language: – “for all x 2 X” may be replaced with “for each x 2 X“ or with “for every x 2 X”; – “there exists x 2 X“ may be replaced with “there exists at least one x 2 X” or “there exist some x 2 X”. Both above-defined quantifiers can be applied to predicates with two or more variables. Example 2.10: The formula ∀x 2 X 9y 2 Y: Pðx, y Þ, where Pðx, y Þ is a predicate with two variables, means: “for each x 2 X, there exists at least one y 2 Y for which Pðx, y Þ is true”. The meaning of the formula 9y 2 Y ∀x 2 X : Pðx, y Þ is significantly different: “there exists at least one y 2 Y that Pðx, y Þ is true for every x 2 X”.

There are two basic rules of inference for quantified statements which follow from the definition of the universal quantifier (universal instantiation and universal generalisation), and two which follow from the definition of the existential quantifier (existential instantiation and existential generalisation); they are defined in Table 2.1 and then illustrated with Example 2.11.


Table 2.1: Basic rules of inference for quantified statements.
– Universal instantiation: premise ∀x ∈ X : P(x); conclusion P(c) for a particular c ∈ X.
– Universal generalisation: premise P(c) for an arbitrary c ∈ X; conclusion ∀x ∈ X : P(x).
– Existential instantiation: premise ∃x ∈ X : P(x); conclusion P(c) for some particular c ∈ X.
– Existential generalisation: premise P(c) for some particular c ∈ X; conclusion ∃x ∈ X : P(x).

Example 2.11: Let X be a group of Ph.D. students attending a course on research methodology. The following arguments illustrate the rules of inference summarised in Table 2.1:
1) All students in the group X are intelligent; John is a student belonging to the group X. So, John is intelligent.
2) Each student selected from the group X turns out to be intelligent. So, all students from the group X are intelligent.
3) There is at least one intelligent student in the group X. So, one may find an intelligent student in the group X.
4) One may find an intelligent student in the group X. So, there is at least one intelligent student in the group X.

More complex arguments may be constructed by also using other rules of deductive inference described in Section 2.2, e.g. the rule modus ponens, which takes on the form: ∀x ∈ X : P(x) → Q(x), P(c) ⇒ Q(c) for a particular c ∈ X

Example 2.12: Premise #1: "a < b implies a + c < b + c". Premise #2: "0 < x² − 2xy + y²". Conclusion (obtained by adding 2xy to both sides of the above inequality): "2xy < x² + y²".
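The derivation in Example 2.12 can be sanity-checked numerically. The following Python snippet is a small illustrative check added here (it is not part of the original argument); it verifies both the premise and the conclusion for randomly drawn unequal integers x and y:

```python
import random

for _ in range(10_000):
    x, y = random.randint(-100, 100), random.randint(-100, 100)
    if x != y:
        # Premise #2 of Example 2.12: 0 < x² − 2xy + y², i.e. 0 < (x − y)².
        assert 0 < x**2 - 2*x*y + y**2
        # Conclusion obtained by adding 2xy to both sides: 2xy < x² + y².
        assert 2*x*y < x**2 + y**2
```

Such a check cannot, of course, replace the deductive argument; it merely fails to find a counterexample.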

2.2.4 Rules of inductive and abductive inference

Deductive reasoning starts with the assertion of a general rule, and proceeds to a guaranteed specific conclusion: if the premises are true, then the conclusion must also be true. There are two ways of inverting deductive arguments: inductive and abductive reasoning, which do not provide guaranteed conclusions. Inductive reasoning begins with some specific observations and proceeds to a generalised conclusion which is likely, but not certain, in light of the premises; conclusions reached by inductive reasoning are not logical necessities. Abductive reasoning begins with the assertion of a general rule and an incomplete set of observations and proceeds to their probable explanation. The conclusions of deductive reasoning cannot contain more information than its premises, while both inductive and abductive reasoning are ampliative, i.e. they add something not contained in the premises.

Example 2.13: Let's consider the following example of universal instantiation:
Premise #1 (RULE): "All the balls from this box are white."
Premise #2 (CASE): "This ball is from this box."
Conclusion (RESULT): "This ball is white."
Inductive reasoning is the inference of the RULE from the CASE and the RESULT:
Premise #1 (CASE): "This ball is from this box."
Premise #2 (RESULT): "This ball is white."
Conclusion (RULE): "All the balls from this box are white."
Abductive reasoning is the inference of the CASE from the RULE and the RESULT:
Premise #1 (RULE): "All the balls from this box are white."
Premise #2 (RESULT): "This ball is white."
Conclusion (CASE): "This ball is from this box."9

2.3 Definition and structure of philosophy 2.3.1 Definition of philosophy In his book, addressed to general public, the British philosopher Simon Blackburn (*1944) has tagged philosophy with a very concise label “conceptual engineering”, and explained that “as the engineer studies the structure of material things, so the philosopher studies the structure of thought”10. More precise definition of philosophy is the subject of meta-philosophy which is trying not only to say what philosophy is, but also to answer such questions as “What is philosophy for?” and “How should philosophy be done?”11. Both historical and contemporary schools of

9 inspired by C. S. Peirce, “Deduction, Induction, and Hypothesis”, The Popular Science Monthly, August 1878, pp. 470–482. 10 S. Blackburn, Think: A Compelling Introduction to Philosophy, Oxford University Press, Oxford (UK) 1999, “Introduction”. 11 N. Joli, “Contemporary Metaphilosophy”, [in] Internet Encyclopedia of Philosophy (Eds. J. Fieser, B. Dowden), http://www.iep.utm.edu/con-meta/ [2017-03-25].


philosophy differ significantly in answering those questions. The definitions of philosophy found in the dictionaries are, therefore, quite diversified; each is emphasising its slightly different aspect, e.g.: – love of knowledge, pursuit of wisdom12 (Online Etymology Dictionary); – (1) a love or pursuit of wisdom, a search for the underlying causes and principles of reality, (2) a quest for truth through logical reasoning rather than factual observation, (3) a critical examination of the grounds for fundamental beliefs and an analysis of the basic concepts employed in the expression of such beliefs, (4) a synthesis of learning (Webster’s Third New International Dictionary); – the search for truth and knowledge concerning the universe, human existence, perception and behaviour, pursued by means of reflection, reasoning and argument (Chambers 21st Century Dictionary); – the investigation of the nature, causes, or principles of reality, knowledge or values, based on logical reasoning rather than empirical methods (American Heritage Dictionary); – the study of the ultimate nature of existence, reality, knowledge and goodness, as discoverable by human reasoning (Penguin English Dictionary); – the search for knowledge and truth, especially about the nature of man and his behaviour and beliefs (Kernerman English Multilingual Dictionary); – the study of the most general and abstract features of the world and categories with which we think: mind, matter, reason, proof, truth, etc. (Oxford Dictionary of Philosophy). In contrast to science, philosophy is dealing with questions which are usually foundational and abstract in nature. Consequently, philosophy is done primarily through reflection and does not tend to rely on experiments, although the situation seems to have changed recently since a new field of philosophical inquiry has appeared which is called experimental philosophy13. It makes use of empirical data (often gathered through opinion polls) in order to inform research on philosophical questions.

2.3.2 Structure of philosophy The historical core of philosophy is called metaphysics. Its name refers to the Greek phrase ta meta ta physika (which means “the works after the works on physics”) indicating 14 treatises, written by Aristotle, which appeared in his collected works

12 from Greek philo = “loving” + sophia = “knowledge” or “wisdom”. 13 A. Plakias, “Experimental Philosophy”, [in] Oxford Handbooks Online, 2015, http://www.ox fordhandbooks.com/view/10.1093/oxfordhb/9780199935314.001.0001/oxfordhb-9780199935314e-17 [2017-03-27].


after the treatises concerning physics and other contents covered today by natural sciences. Metaphysics has two principal divisions: general metaphysics and special metaphysics. General metaphysics is concerned with the general nature of reality: with problems related to abstract and concrete being, the nature of particulars and the distinction between appearance and reality. Special metaphysics is concerned with certain particular aspects of being, such as the distinction between the mental and the physical, the possibility of human freedom, the nature of personal identity, the possibility of survival after death and the existence of God14. Metaphysics covers the key issues which today belong to two philosophical disciplines: ontology and epistemology. Ontology15 is the philosophical study of the nature of being, becoming, existence or reality, as well as of the basic categories of being and relations among them. Epistemology16 investigates the origin and nature of human knowledge, as well as general methods and limits of its acquisition. The following are the most important, well-established branches of philosophy, which can be reduced neither to ontology nor to epistemology: aesthetics (or philosophy of art), axiology (or philosophy of value), ethics (or moral philosophy), philosophical anthropology, philosophy of education, philosophy of history (or historiosophy), philosophy of information, philosophy of language, philosophy of law (or jurisprudence), philosophy of medicine, philosophy of mind, philosophy of politics (or political philosophy), philosophy of religion, philosophy of science, philosophy of technology and social philosophy. Two of them, philosophy of science and ethics, are of particular interest in this book; therefore, they will be defined and broadly characterised in Chapters 4 and 11, respectively.

2.4 Key concepts of epistemology 2.4.1 Truth According to the traditional understanding of science, its purpose is to discover the truth about reality. As the hierarchically highest value in the group of cognitive values, truth is therefore considered the central value of science. In everyday discourse, truth is understood as the property of thoughts, which consists in being in accord with reality. This intuition was formalised in the correspondence definition of truth, attributed to Aristotle and formulated by the Italian theologian

14 B. Aune, Metaphysics: The Elements, [in] University of Minnesota Press, Minneapolis 1985, p. 11. 15 from Greek on = “existence” + logos = “word” or “thought”. 16 from Greek episteme = “knowledge” + logos = “word” or “thought”.


and philosopher Saint Thomas Aquinas (1225–1274) in the following way: “Veritas est adaequatio rei et intellectus” which means “Truth is the conformity of thing and intellect”17. There are two principal traditions of thinking about truth. According to the first of them, truth is an objective property of our beliefs in virtue of which they correspond to the world; it connects our thoughts and beliefs to some external reality, thereby giving them representational content; thus, it is an external constraint on what we believe. According to the second tradition, truth is a normative concept since it summarises the norms of correct justification of beliefs; thus, it is an internal constraint on what we believe. The correspondence definition of truth, belonging to the first tradition of thinking about truth, is sufficient for everyday discourse, despite the fact that it has been criticised by epistemologists for various reasons, mainly because: – it does not provide any criterion of truth, – it refers to the concept of correspondence (or conformity) which is unclear, – it raises logical problems when applied for explaining the truthfulness of negative sentences or tautologies. A more complete overview of the criticism raised against the correspondence definition of truth – as well as of various attempts to overcome its weak points – may be found in the relevant literature18. For scientists, however, the lack of the criterion of truth is of primary importance because they need a tool for distinguishing the truth from falsehood or error. Unfortunately, there is no logically acceptable means for comparison of abstract statements with real objects or facts. Hence the interest for alternative definitions (theories) of truth, belonging to the second tradition of thinking about truth, viz.: – the coherence definition of truth (a statement is true if it is coherent with some specified set of statements); – the consensus definition of truth (a statement is true if it is generally accepted); – the evidence definition of truth (a statement is true if it is evident); – the pragmatic definition of truth (a statement is true if it is useful). In fact, the above definitions do not offer anything more than partial criteria of truthfulness; so, we should probably agree with the German logician

17 “Quaestiones Disputatae: De veritate”, [in] Saint Thomas Aquinas’ Works in Latin and English, https://dhspriory.org/thomas/QDdeVer.htm [2018-07-15]. 18 D. Marian, “The Correspondence Theory of Truth”, [in] The Stanford Encyclopedia of Philosophy (Ed. E. N. Zalta), Fall 2016 Edition, https://plato.stanford.edu/archives/fall2016/entries/truth-corre spondence [2018-07-14].


F. L. Gottlob Frege (1848–1925) that truth is indefinable, and that the meaning of the adjective true should be taken as intuitively obvious19. The criteria of truth, implied by the above definitions, have been used in science for centuries: – the researchers check a new claim by examining its compliance with the statements which are considered true or by comparing it with the statements (about the same facts or phenomena) proposed by other researchers; – the researchers accept most fundamental statements (e.g. the axioms of Euclidean geometry) because of their obviousness; – the researchers accept some claims because the latter enable solutions of significant practical problems (e.g. in the field of technology or medicine). The above criteria should be used with precautions because: – The inconsistency of a new claim with those previously recognised as true may mean that one of the latter is false. – The non-obviousness of a new claim may indicate the need for its further investigation, and not necessarily the basis for its rejection. – The lack of prospects for application of a new claim does not mean that such prospects will never appear. The insufficiency of the criteria of truth, implied by its definitions, has motivated scientists and philosophers of science to develop more sophisticated methods of scientific justification; Chapter 9 is entirely devoted to their systematic presentation. Although the epistemological discourse on the concept of truth is rather hermetic, and does not attract much interest among research practitioners, it has an impact on their professional awareness because it is echoed by the media, quite frequently in the form distorted by ignorant journalists or scientific “gurus” of postmodernist orientation. The latter are inclined to opt for turning away from the truth in science by its far-going relativisation. The dependence of truth on various circumstances (such as a cultural context or methodology of justification) is universally recognised; relativism has become, therefore, an important aspect of many philosophical conceptions. However, its extreme version – which may be summarised by the statement “all judgments about reality are relative” – leads to solipsism20 excluding the possibility of any intersubjective agreement. On the other hand, some moderate versions of relativism in science seem to be justified because they take into account centuries of cognitive experience and various sources of the uncertainty of scientific knowledge (cf. Chapter 10). In general, research practitioners are not eager

19 J. Woleński, “The History of Epistemology”, [in] Handbook of Epistemology (Eds. I. Niiniluoto, M. Sintonen, J. Woleński), Springer, Dordrecht 2004. 20 the philosophical belief that only one’s own experience and mind can be known.


to disclose their relativistic beliefs, even in a moderate version. A probable reason for this reluctance is the fear of depreciation of their own scientific activity which by definition is supposed to deliver truth as it is understood by a lay majority of society. The status of being true or false may be attributed only to the statements having the form of declarative sentences (not to orders, wishes or questions). The purpose of science is to formulate such claims which are not only true but also original and significant. A claim lacking any of these attributes is not considered a scientific contribution, even if it is useful from a practical (technical, educational, medical, legal, etc.) point of view.

2.4.2 Knowledge When comparing definitions of knowledge, provided by the dictionaries of English language, one can distinguish between two kinds of knowledge, viz.: knowledge of propositions (or propositional knowledge) and know-how (or ability knowledge)21. Science is focused on getting the first kind of knowledge; therefore, it will be the exclusive topic of further considerations. According to the definition, attributed to the Ancient Greek philosopher Plato (429–347 BC), knowledge is a “justified true belief”. The interpretation of this definition depends on the meaning of three terms which appear in its definiens – the noun “belief” and two adjectives: “true” and “justified”. The first of them is explained by the dictionaries of English language in various ways: – “a conviction of the truth of some statement or the reality of some being or phenomenon, especially when based on examination of evidence” (online Merriam-Webster Dictionary); – “a feeling of certainty that something exists, is true, or is good” (online Collins Dictionary); – “something accepted as true, especially a particular tenet or a body of tenets accepted by a group of persons” (The Free Dictionary); – “any cognitive content held as true” (The Free Dictionary – Thesaurus). Three of the above formulations suggest that a belief is true by definition. Moreover, contemporary analytic philosophers of mind generally use the term “belief” to refer to the state of our minds “whenever we take something to be the case or regard it as true”22. Thus, the definiens in the definition of knowledge could be reduced to “a justified belief”. Such a modification seems to be reasonable in the case of scientific 21 D. Pritchard, What Is This Thing Called Knowledge? Taylor & Francis, London – New York 2010, p. 7. 22 E. Schwitzgebel, “Belief”, [in] The Stanford Encyclopedia of Philosophy (Ed. E. N. Zalta), Summer 2015 Edition, https://plato.stanford.edu/archives/sum2015/entries/belief [2018-07-15].


knowledge because the only way to prove the veracity of a new scientific belief is its justification using means satisfying certain methodological requirements. However, beyond the context of scientific inquiry, rigorous methods of justification are applied rather exceptionally, which means that in everyday life we accept many statements which are false or statements which are true by chance23. Example 2.14: The British philosopher and mathematician Bertrand A. W. Russell (1872–1970) illustrated the latter option with the following story. A man, let’s call him X, sees that an old clock in the hall is showing “8:20”. X comes to the conclusion that it is 8:20 a.m., and this belief is true since by chance it is 8:20 a.m. His belief is justified because it is based on his many-year experience with the clock which has always been very reliable at telling correct time. He has no reason to think that it is faulty now: that it stopped 24 hours earlier.24

In a three-page paper, published in 196325, the American philosopher Edmund L. Gettier (*1927) demonstrated that there are situations (similar to that described in the above example) when having a justified true belief regarding a claim is not enough to know it because the reasons for the belief, while justified, turn out to be false. He showed in this way that the justified-true-belief definition of knowledge does not account for all of the necessary and sufficient conditions for knowledge. His contribution has generated an extensive philosophical debate aimed at responding to what became known as the Gettier problem26.

2.4.3 Scientific laws and theories

A scientific law is an elementary piece of scientific knowledge, well-established in a scientific discipline or a group of disciplines. It has the form of a statement, frequently including a mathematical equation, that in an abstract way characterises a large collection of empirical facts. As a rule, a scientific law is narrower in scope than a scientific theory, which may comprise several laws. A more elaborate characterisation of scientific laws will be provided in Section 7.3. In everyday English, the term theory is used as a rather pejorative label for a speculative, vague, unrealistic or impractical explanation of certain facts or situations. In science, it has a more precise connotation, regardless of whether it is used with the adjective "scientific" or not. Theory27 is understood there as the comprehensive

23 D. Pritchard, What Is This Thing Called Knowledge? 2010, p. 7. 24 ibid., p. 25. 25 E. L. Gettier, “Is Justified True Belief Knowledge?”, Analysis, 1963, Vol. 23, pp. 121–123. 26 S. Hetherington, “Gettier Problems”, [in] Internet Encyclopedia of Philosophy (Eds. J. Fieser, B. Dowden), https://www.iep.utm.edu/gettier/ [2018-07-17]. 27 from Greek theoria = “contemplation” or “speculation” or “looking at”.


form of scientific knowledge related to a well-defined part of the natural world – a coherent set of statements, comprising at least some of the following items: – an extensive record of empirical facts repeatedly confirmed through experiments or observations, – an integrated set of principles and laws that enable one to explain and predict events that are observed in the subject domain of reality (natural world), – a collection of interrelated semantic and mathematical models of phenomena, events and processes taking place in that domain, – a collection of predictions and explanations derived from the above three items.

Example 2.15: Here are five examples of well-established scientific theories: the oxygen theory of combustion (Antoine-Laurent de Lavoisier, 1770s); the theory of evolution by natural selection (Charles R. Darwin, 1859); the theory of electromagnetic field (James Clerk Maxwell, 1864); the theory of general relativity (Albert Einstein, 1915); the information theory (Claude E. Shannon, 1948).

3 Science in historical perspective The contents of this chapter, as well as the contents of the Appendix “Milestones in the history of science”, associated with this chapter, have been based on numerous encyclopaedic sources, mainly on the articles devoted to the history of science and great figures of science, published in Encyclopædia Britannica Online1, in Internet Encyclopedia of Philosophy2 and in the book Human Accomplishment3. The selection of historical facts presented here should be considered subjective since it cannot be justified by any other criteria than the author’s personal experience related to 50-year-long involvement in broadly understood scientific activity. The same applies to the periodisation of the history of science, i.e. to the subdivision of its chronological development into three periods: protoscience, classical science and technoscience.

3.1 Protoscience

Both cave paintings and regular scratches on excavated animal bones and horns seem to suggest that our prehistoric ancestors were already curious observers of nature. For some unexplained reasons, their cognitive activity intensified about 4500 years ago: large stone structures from that time, such as Stonehenge on the Salisbury Plain in England, seem to prove it. On the one hand, they suggest both religious and astronomical dedication; on the other, they indicate quite unusual technical and social skills that were necessary to move those enormous blocks of stone over considerable distances and to position them. Such a combination of religious and astronomical purposes was characteristic of cultures developed in ancient Mesopotamia and Egypt, in India and China, as well as in Central America. The outline of protoscience development provided here is limited to the Western and Middle-Eastern tradition.

3.1.1 Ancient Egypt and Mesopotamia The protoscience of ancient Egypt was mainly focused on technology-related knowledge. The Egyptians invented and used many mechanical machines facilitating construction processes, such as the ramp and the lever. Many architectural monuments of ancient Egypt are still standing today, i.a. the pyramids of Giza and

1 located at https://www.britannica.com/ [2018-06-26]. 2 located at http://www.iep.utm.edu/ [2018-06-26]. 3 C. Murray, Human Accomplishment, HarperCollins, New York 2003. https://doi.org/10.1515/9783110584066-003


the Great Sphinx belonging to the largest and most famous buildings in the world. The Egyptians played an important role in developing Mediterranean maritime technology including ships and lighthouses. As early as 3400 BC, they mastered the processes of extracting copper from its ores; mercury was found in Egyptian tombs dated 1600–1500 BC; iron was known and used in Egypt about 800 BC. Glass jars, bottles and beads, in a variety of colours, were produced in Egypt already around 1500 BC. The word paper comes from papyros being the Greek name of the ancient Egyptian writing material that was formed from beaten strips of paper plants; it was produced as early as 3000 BC and sold to ancient Greece and Rome. Practical needs of Egypt motivated not only the development of technology but also of astronomy, mathematics and medicine. The 365-day calendar was used there already 3000 BC; the annual flooding of the Nile river was predicted on the basis of the observation of stars; the pyramids were precisely oriented on the polar star. Astronomy also played a considerable role in fixing the dates of religious festivities. Egyptian medical practice included simple non-invasive surgery, setting of bones, dentistry and an extensive use of pharmacological means of natural origin. The ancient Egyptians were at least partially aware of the importance of the balance and moderation of the diet. They discovered the medicinal properties of many plants including garlic, onion, pomegranates, berries from the castor oil tree, various kinds of beans and cannabis. Egyptian physicians were aware of the existence of the pulse and of a connection between pulse and heart. They developed a theory of “channels” transporting air, water and blood in the body; they believed that the blockage of those channels could cause illness, and therefore used laxatives to unblock them. The extensive use of surgery, mummification practices and autopsy – as a religious exercise – gave Egyptians a vast knowledge of the body’s morphology and even a considerable understanding of its physiology. Written evidence of the use of mathematics, including the base-ten number system, dates back also to 3000 BC. The earliest strictly mathematical documents appeared ca. 1650 BC. They recorded a collection of problems with solutions; they showed, for example, how multiplication and division was carried out, or how the algebraic linear equations were solved. They also contained evidence of other mathematical knowledge, including unit fractions, composite and prime numbers, harmonic means or arithmetic and geometric series. The protoscience of ancient Mesopotamia, like that of Egypt, was focused on technology-related knowledge, mathematics and astronomy. The life in that region depended on the great rivers, Tigris and Euphrates; the land was made habitable only by extensive damming and irrigation works. Considerable technological skills were necessary to enable stable functioning of society and to make its life secure. Protoscientific knowledge and technical skills, integrated with religious convictions and rites, played crucial role in survival of that society. Mathematics and astronomy were of primary importance: the number


system based on 60 (the same base we use today for measuring time and angles) was a top achievement of Mesopotamian mathematics, and a sophisticated description of celestial bodies – of Mesopotamian astronomy. Some texts, dated 1700 BC, prove that Mesopotamian mathematicians knew and used the relationship that we today call the Pythagorean theorem; they could also solve simple quadratic equations and estimate the circumference of a circle.

3.1.2 Ancient Greece According to Hellenic tradition, the Greek thinker and mathematician Thales of Miletus (624–546 BC) was the first natural philosopher who attempted to replace mythological explanations of natural phenomena with explanations referring to observation and experience. He is supposed to have predicted a solar eclipse in 585 BC and to have invented the formal study of geometry in his demonstration of the bisecting of a circle by its diameter. He is also remembered for trying to explain all observed natural phenomena in terms of the changes of a single substance, water, which appears in solid, liquid and gaseous states. In critical response to this doctrine, various single substances were proposed by other natural philosophers and then rejected, ultimately in favour of a plurality of elements that could account for the multitude of the qualities of matter. Two centuries later, most natural philosophers accepted a doctrine of four elements all bodies were composed of: earth, fire, water and air. An alternative tradition of thinking, idealistic or mystical, has been attributed to the Greek philosopher and mathematician Pythagoras of Samos (570–495 BC) who claimed that numbers were the constitutive elements of physical reality (“All things are numbers”). The Greek protoscience, built upon the foundations laid by Thales and Pythagoras, reached its zenith in the works of Hippocrates, Aristotle, Euclid, Herophilos and Archimedes. Hippocrates of Kos (460–370 BC) was an outstanding Greek physician, often referred to as the “father of modern medicine”. He founded a medical school which produced more than 50 books presenting a system of medical methodology and ethics which is still practised today. Hippocrates stated that medicine is not philosophy, and therefore must be exercised on a case-by-case basis rather than by referring to the “first principles”. He advocated clinical observations, diagnosis and prognosis, and argued that specific diseases come from specific causes. His methodology relied on physical examination of the patient and abductive reasoning from observations supported by the theory of four humours (yellow bile, blood, phlegm and black bile) controlling the physiological processes in a human organism. The Hippocratic teaching was highly influential, and it contributed to the rise of rationality not only in medicine but also in other sciences.


Aristotle of Stagira (384–322 BC) was a Greek philosopher and polymath whose writings – covering panoply of subjects which today would be classified as issues of astronomy, physics, zoology, psychology, metaphysics, logic, ethics, aesthetics, fine arts, rhetoric, linguistics and politics – constituted the first comprehensive system of Western philosophy. Aristotle considered observation and deductive reasoning as the proper means of investigation while active experiment, aimed at the explanation of the hidden properties of the object of study, was not essential for him. He remained one of the most influential thinkers during more than 20 centuries after his death; his observations of marine organisms, for example, were unsurpassed until the nineteenth century, and provided the framework for biology until the time of Charles R. Darwin (1809–1882). After the conquests of Alexander the Great (356–323 BC), the achievements of Mesopotamian protoscience started to be available to Greeks, and the city of Alexandria (located in the northern Egypt) started to be the intellectual capital of the ancient world. Euclid of Alexandria (365–300 BC), a Greek mathematician referred to as the “father of geometry”, is the author of a treatise consisting of 13 books, entitled Elements, which has become one of the most influential works in the history of mathematics and – due to the close affinity of geometry to mechanics – in the history of science. In this treatise, Euclid deduced the theorems of what is now called Euclidean geometry from a set of five axioms and five postulates. Herophilos of Chalcedon (335–280 BC) was a Greek physician considered to be the first anatomist since his contribution to medicine was based on systematic investigation of dissected human cadavers. He studied the flow of blood, was able to differentiate between arteries and veins and worked out a standard for measuring a pulse by means of a water clock. He also studied the brain and hypothesised that it is the location of intellect. On the one hand, Herophilos was influenced by the achievements of the Hippocrates school of medicine; on the other, his achievements inspired several generations of great physicians of antiquity, including the last of them – Aelius Galenus (129–200 AD). Herophilos, like Euclid, spent the majority of his life in Alexandria. Alexandria was up to certain extent also the city of Archimedes of Syracuse (287–212 BC) since he studied there. His contributions to the development of mathematics, astronomy, physics and engineering made him one of the greatest figures of Greek protoscience. He anticipated modern calculus by applying concepts of infinitesimals, and provided proofs of numerous geometrical theorems, e.g. those concerning the area of a circle and the volume of a sphere. He pioneered application of mathematics for modelling physical phenomena, provided mathematical demonstration of the law of the lever and founded hydrostatics. The confrontation of Aristotelian astronomy, which assumed that planetary orbits are circles, with the Mesopotamian observations and mathematical methods demonstrated the discrepancy between theory and observation. Consequently,


astronomy started to search for mathematical models that could enable prediction of planetary positions in a way more precise than the Aristotelian causal interpretation of heavenly motions. Claudius Ptolemaeus (100–170) was the most prominent representative of this approach: he presented his astronomical models in the form of tables which could be used for computing the future or past positions of the planets. His treatise Almagest used to be the most authoritative text on astronomy across Europe, Middle East and North Africa in the Medieval period. The geocentric model of the solar system, presented there, “survived” till the sixteenth century.

3.1.3 Roman Empire The rise of Roman power over the Mediterranean region implied a fusion of Greek art, literature, philosophy and science with Roman common sense and practical approach to state organisation (law). On the one hand, the spirit of independent research was suppressed and the scientific legacy of Greece was condensed into Roman encyclopaedias whose major function was entertainment rather than enlightenment; on the other, however, the Roman Empire incorporated a multitude of peoples with different customs, languages and religions which enhanced the Greek tradition with new elements, including elements of Christianity. Ancient learning, thus, did not die with the invasion of the Western Empire by Germanic tribes. Christian monks, in monasteries, carefully copied out classics of ancient thought and preserved them for posterity. The Eastern (Byzantine) Empire remained even stronger in this respect, and continued the ancient intellectual traditions till the fifteenth century.

3.1.4 Arab empire

In the seventh century, the Arab tribes – inspired by their new religion, Islam – burst out of the Arabian Peninsula, and laid the foundations of a new empire that eventually rivalled that of Ancient Rome. Since according to the Quran, the sacred book of Islam, science was a precious treasure, it was carefully reconstructed and developed by the Islamic intellectuals during the next five centuries. Greek astronomy and astrology, and Greek mathematics, together with the great philosophical works of Aristotle, were sought and translated already by the end of the ninth century. Significant innovations were introduced to Ptolemaic astronomy. A new branch of mathematics, algebra, was created by Muḥammad ibn Mūsā al-Khwārizmi alias Algoritmi (780–850). In the treatise The Compendious Book on Calculation by Completion and Balancing, he initiated algorithmic thinking in mathematics; hence the term algorithm derived from his Latin name. Islamic scientists introduced experimental methods to numerous domains of scientific inquiry: Jābir ibn Hayyān alias


Geber (721–815), inventor of the process of distillation, to alchemy; Abd Allāh ibn Sīnā alias Avicenna (980–1037) – to physiology; Abū Rayhān al-Bīrūnī alias Biruni (973–1048) – to mechanics and psychology; Ibn al-Haytham alias Alhazen (965–1040) – to optics and ophthalmology.

3.1.5 Medieval Europe

The Reconquista of the Iberian Peninsula by European Christians, which started in the eighth century, made available much of the ancient heritage to the Latin West already by the end of the twelfth century. The ancient knowledge, preserved in the early Middle Ages by the monasteries, was propagated to subsequent generations in the Christian cathedral and monastic schools. In the high Middle Ages, some of them were transformed into universities, to mention only the earliest: the University of Bologna established in 1088, the University of Salamanca in 1134, the University of Paris in 1150 and the University of Oxford in 1167. This institutionalisation of knowledge coincided with spectacular technological progress which included the invention of the crank and the flying buttress, making possible the elevation of great Gothic cathedrals. Protoscientific knowledge itself, however, was viewed by medieval people mainly as a means for understanding God’s creation. For example, Robert Grosseteste (1175–1253), the cleric scholar from the University of Oxford, saw in light the first creative impulse; according to him, understanding the laws of the propagation of light was equivalent to understanding the nature of God’s creation. Medieval philosophers thus studied nature-related problems of theological significance; they examined, in particular, all aspects of motion which had important theological implications. The Italian theologian and philosopher Saint Thomas Aquinas, for example, referred to Aristotle’s argument “everything that moves is moved by something else” to prove that God must exist, for otherwise the existence of any motion would imply an infinite regress of prior causal motions.

3.2 Classical science 3.2.1 From Copernicus to Newton The first serious blow to the traditional acceptance of ancient authorities was the discovery of new continents at the end of the fifteenth century: it turned out that both ancient and Christian authorities, insisting on the existence of three continents only, were wrong. In the period of European Renaissance, the complete recovery of the ancient heritage followed: to the works of Aristotle, already assimilated, the translations of Aelius Galenus and of Archimedes of Syracuse were added. The search for antiquity resulted in a set of manuscripts that gave a decisive


impulse to the radical change which is sometimes called scientific revolution. The year 1543 is usually considered a starting point of this process since then a Polish canon, Nicolaus Copernicus (1473–1543), decided to disclose his work De revolutionibus orbium coelestium4 in which he challenged the geocentric astronomical system: he demonstrated, by both results of observation and mathematical calculations, that placing Sun instead of Earth in the centre of the system yielded much simpler and more consistent explanation to numerous astronomical phenomena which had been difficult to accommodate within the geocentric system. He was able to place the planets in order of their distances from Sun by considering their speeds and thus to construct a simple and coherent system of the planets. During a century following the publication of De revolutionibus…, two easily discernible scientific movements developed: the first was critical, the second – innovative and synthetic. The critical tradition started with Nicolaus Copernicus and led directly to the work of the Danish astronomer Tyge O. Brahe alias Tycho Brahe (1546–1601) who measured stellar and planetary positions more accurately than anybody before him. But the most serious support to the heliocentric system was provided by the Italian polymath (astronomer, physicist, mathematician and philosopher) Galileo Galilei (1564–1642) after the invention of the telescope which enabled him to discover mountains on Moon, satellites circling Jupiter and spots on Sun. While Galileo was searching the heavens with the telescope, the German mathematician and astronomer Johannes Kepler (1571–1630), by applying mathematics to precise astronomical data obtained from Tycho Brahe, discovered that the orbit of Mars (and, by analogy, of the other planets) is not a circle but an ellipse, with Sun located at one of its foci. In this way, he established a mathematical model which is called today the first Kepler’s law of planetary motion. He also established two other laws which state the following: – a line segment joining a planet and Sun sweeps out equal areas during equal intervals of time; – the square of the orbital period of a planet is proportional to the cube of the semi-major axis of its orbit. Neither Johannes Kepler nor Galileo Galilei was able to answer the question: “Why do objects not fly off revolving Earth?” That question, despite numerous attempts made by both of them and some other intellectuals of their times, remained without a satisfactory answer till the times of the English physicist and mathematician Isaac Newton (1642–1726). His extraordinary experimental and mathematical talents enabled him to justify the heliocentric system and establish a new mechanics. The results of his investigations, including the three laws of motion and the principle of

4 in English: On the Revolutions of the Heavenly Spheres.


universal gravitation, were published in the treatise Philosophiae naturalis principia mathematica (1687)5. In the sixteenth century, not only astronomy and physics but also medicine started to emancipate from natural philosophy. In 1543, i.e. in the year of publication of De revolutionibus…, an equally important book on anatomy appeared, viz. De humani corporis fabrica libri septem6 by Andries van Wesel (1514–1564) – the book being a critical examination of anatomical knowledge whose origin was in the findings of Aelius Galenus. It inspired several research initiatives in Italy and elsewhere that culminated in the discovery of the circulation of blood by the English physician William Harvey (1578–1657) whose Exercitatio anatomica de motu cordis et sanguinis in animalibus7 (1628) was a kind of Principia of anatomy and physiology. Harvey showed that organic processes could be explained in terms of mechanical systems: the heart and the vascular system could be considered a pump combined with a system of pipes. The growing flood of research information in the middle of the seventeenth century implied the need for new forms of its processing and distribution. It was no longer sufficient to publish scientific results in an expensive book that only a few persons could buy; information had to be spread widely and rapidly. The research results required independent and critical confirmation. New means were created to accomplish these ends. Scientific societies sprang up, beginning in Italy in the early years of the seventeenth century and culminating in the two great national scientific societies: the Royal Society of London for the Promotion of Natural Knowledge, established in 1662, and the Académie des Sciences of Paris, formed in 1666. Each of them started to be a forum for critical discussion of new discoveries and theories; each started to publish scientific articles: the Royal Society in Philosophical Transactions and the Académie des Sciences in Mémoires. New canons of reporting were established so that experiments and discoveries could be reproduced by others. This required new precision in language and a willingness to share experimental methods and techniques. The failure of others to reproduce results cast serious doubts upon the original reports.

3.2.2 From natural philosophy to sciences

Mechanics, regardless of the progress in other scientific disciplines that split from natural philosophy, such as optics or chemistry, maintained its priority among the sciences in the eighteenth century. Many mechanical problems were solved using

5 in English: Mathematical Principles of Natural Philosophy. 6 in English: On the Fabric of the Human Body in Seven Books. 7 in English: An Anatomical Exercise on the Motion of the Heart and Blood in Living Beings.


the method of mathematical modelling, and gave birth to new chapters of mathematics. The Swiss mathematician and polymath Leonhard Euler (1707–1783) developed the infinitesimal calculus and graph theory, made pioneering contributions to topology and analytic number theory, and worked out numerical methods for solving ordinary differential equations. The French mathematicians and physicists, Jean Le Rond d’Alembert (1717–1783), Joseph-Louis Lagrange (1736–1813) and Pierre-Simon de Laplace (1749–1827), transformed the Newtonian mechanics into an axiomatic system of mathematical models, applicable for solving both astronomical and craft-related technical problems. One of the major advances in chemistry in the eighteenth century was the discovery of the role of air (and gases in general) in chemical reactions. The experiments of the Scottish physician and chemist Joseph Black (1728–1799), carried out on magnesia alba (basic magnesium carbonate) in the 1750s, showed that an air with specific properties could combine with solid substances, like quicklime, and could be recovered from them. This discovery inspired the French chemist AntoineLaurent de Lavoisier (1743–1794) to experimentally demonstrate that, in contradiction to established view that a burning body released its component called phlogiston, combustion consists in the combination of bodies with a gas that he named oxygen. The Newton’s idea of general gravity, as well as his corpuscular explanation of light, inspired numerous researchers to think about imponderable fluids – such as heat, electricity and magnetism – in terms of particles and associated forces of attraction or repulsion. One of them, the French physicist Charles-Augustin de Coulomb (1736–1806), invented a method for measuring electrical and magnetic forces, using a delicate torsion balance, and proposed a mathematical model relating two electrical charges and the distance between them to the force of their attraction or repulsion – the model called today Coulomb’s inverse-square law. For a long time, the study of living matter lagged far behind physics and chemistry, largely because organisms are significantly more complex than inanimate bodies or forces. For a long time, the students of living nature had to be content to classify living forms as best they could and to make attempts to isolate and study various aspects of living subjects. An avalanche of new specimens in botany and zoology put severe pressure on taxonomy. A giant step forward was made in the eighteenth century by the Swedish naturalist Carl von Linné (1707–1778) who introduced a rational, although somewhat artificial, system of binomial nomenclature. It inspired, however, the French biologist Jean-Baptiste de Lamarck (1744–1829) to develop a more natural system referring to an intuition that species are linked in some kind of genetic relationship. He formulated an early version of a theory of inheritance of acquired characteristics, called soft inheritance. Towards the end of the eighteenth century – when major political, economic and religious changes were taking place in European countries – scientific research began to develop at universities that were then becoming more independent of


both church and state authorities. The university teaching began to be based on knowledge obtained via research, and this knowledge was open to criticism. More specialised and application-oriented institutions of higher learning developed – institutions dealing with engineering, agriculture and commerce. Although the oldest technical colleges appeared in the German countries already in the middle of the eighteenth century, the first great scientific school of that type was École Polytechnique in Paris, founded in 1794 to put the results of science in the service of the Napoleonic Empire. Some of the German-speaking countries followed this pattern, e.g. Technische Universität Wien was established in 1815 under the name Polytechnisches Institut.

3.2.3 From theory to practice Already by the end of the eighteenth century, the hope appeared that science and its methods, including careful observation and experimentation, might contribute to the improvement of industrial production (manufacturing). It was not, however, until the second half of the nineteenth century that science was able to provide truly significant help to industry. It was then that the science of metallurgy permitted tailoring alloy steels to industrial specifications, that the science of chemistry permitted the creation of new substances, like the aniline dyes, and that electricity and magnetism were harnessed in the electric dynamo and motor. Until that period, science probably profited more from industry than vice versa. Since industry required ever more complicated and intricate machines and devices, the machine tool industry, which developed them, also made possible the construction of sophisticated instruments for science. As science turned from the world of visible phenomena to the world of invisible phenomena – atoms and molecules, electric currents and magnetic fields, microbes and viruses, nebulae and galaxies – instruments increasingly provided the sole contact with the latter. Till the first decades of the nineteenth century, the practitioners of science were referred to as “natural philosophers”; the term scientist was introduced in 1834 by members of the British Association for the Advancement of Science to label students of nature, by analogy with the already existing term artist. The Industrial Revolution had an important impact on the institutionalisation of science. The prospect of applying science to the problems of industry served to stimulate public support for science. Governments began to support science by making financial grants to scientists, by founding research institutes and by bestowing honours and official posts on great scientists. By the end of the nineteenth century, the natural philosopher following his private interests had given way to the professional scientist with a public role. In the nineteenth century, the refinement of mathematical models, representative of forces between invisible particles, led to the development of field theory as a representation of reality. The Danish physicist and chemist Hans C. Ørsted (1777–1851) put


forward a hypothesis that electricity, heat, magnetism and light must be different manifestations of the basic forces of attraction and repulsion. He showed, in 1820, that electricity and magnetism were related because the passage of an electrical current through a wire affected a nearby magnetic needle. The French physicists and mathematicians, Jean-Baptiste Biot (1774–1862) and Félix Savart (1791–1841), proposed a mathematical model relating the magnetic field, generated by an electric current, to the magnitude, direction, length and proximity of the electric current – the model called today Biot–Savart law. This fundamental discovery of the interdependence of electrical and magnetic phenomena was next explored by the English scientist Michael Faraday (1791–1867) who focused his research on converting one force into another, and in this way laid the foundations for field theory relating those phenomena. He also prepared the empirical basis for the principle of the conservation of energy, formulated and generalised on other physical phenomena by the English physicist James P. Joule (1818–1889), the German physician and physicist J. Robert Mayer (1814–1878) and the other German physician and physicist Hermann L. F. von Helmholtz (1821–1894). In the nineteenth century, the study of heat was transformed into the science of thermodynamics, the Newtonian corpuscular theory of light was replaced by a wave theory postulated by the French engineer and physicist Augustin-Jean Fresnel (1788–1827) and the phenomena of electricity and magnetism were mathematically modelled by a set of partial differential equations put forward by the Scottish mathematician and physicist James Clerk Maxwell (1831–1879). The nineteenth century brought also a spectacular progress in the domain of chemistry. First, the English chemist and physicist John Dalton (1766–1844) launched the fundamental hypothesis that atomic species differ from one another solely in their weights. It enabled the chemists to identify a considerable number of elements and to establish the laws describing their interactions. It started the theoretical development aimed at arranging elements according to their atomic weights and their reactions – which culminated in the periodic table of elements devised by the Russian chemist Dmitri I. Mendeleev (1834–1907). The theory of soft inheritance, formulated by Jean-Baptiste de Lamarck at the turn of the eighteenth century, contributed to the birth of the theory of biological evolution which reached its maturity with the works of the English naturalist and geologist Charles R. Darwin who not only amassed a wealth of data supporting the notion of transformation of species but also suggested a mechanism by which the evolution could occur, viz. the mechanism of natural selection. His key findings were published in the book On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life (1859). On the other hand, biology progressed at the microscopic level: in 1838, the German physiologist Theodor Schwann (1810–1882) and the German botanist Matthias J. Schleiden (1804–1881) announced that cells were the basic units of all living tissues. Improvements in the microscope technology made it possible to gradually disclose the


basic structures of cells, and the rapid progress in biochemistry permitted the study of their physiology. As far as the social impact of scientific progress in biology is concerned, the most important contributions were made by the French chemist and microbiologist Louis Pasteur (1822–1895) and the German physician and microbiologist H. H. Robert Koch (1843–1910). The first of them showed that bacteria were the specific causes of many diseases, and sine qua non agents in the processes of fermentation underlying the production of cheese, wine and beer. H. H. Robert Koch, in turn, identified specific bacteria causing tuberculosis, cholera, anthrax and some tropical diseases; he also gave experimental support for the concept of infectious disease.

3.2.4 From steady growth to revolution At the turn of the nineteenth century, physics again attracted the attention of the scientific communities and educated groups of Western societies: it provided spectacular solutions to the problems that accumulated over the second half of the century, related to micro- and macro-world. Within a period of 10 years, 1895–1905, the mechanistic paradigm, dominating since the Newton’s times, was overthrown. The discovery of X-rays, by the German mechanical engineer and physicist Wilhelm C. Röntgen (1845–1923), and of radioactivity, by the Polish-French physicist and chemist Marie Skłodowska-Curie (1867–1934), revealed an unexpected new complexity in the structure of atoms. In 1900, the German physicist Max K. E. L. Planck (1858–1947) provided a solution to the blackbody problem formulated in 1859 by Gustav R. Kirchhoff (1824–1887) – the solution based on an assumption that the total energy of radiation is composed of indistinguishable energy portions called quanta of energy. In 1905, the German physicist Albert Einstein proposed a quantum-based explanation of the photoelectric effect (known since 1887). Further quantum effects were discovered and studied soon after by the American physicist Arthur H. Compton (1892–1962), the Indian physicist Chandrasekhara V. Raman (1888–1970) and the Dutch physicist Pieter Zeeman (1865–1943). At the same time, starting with the nuclear model of the atom proposed by the New-Zealand physicist Ernest Rutherford (1871–1937), the Danish physicist Niels H. D. Bohr (1885–1962) developed a theory of the atomic structure, referring to the quantum concepts of energy radiation. In this way, a new chapter of physics, called quantum mechanics, was opened. Numerous physicists all over the world contributed to its further development, i.a.: Werner K. Heisenberg (1901–1976), Louis V. P. R. de Broglie (1892– 1987), Erwin R. J. A. Schrödinger (1887–1961), Max Born (1882–1970), Enrico Fermi (1901–1954), Wolfgang E. Pauli (1900–1958), Max T. F. von Laue (1879–1960), Satyendra N. Bose (1894–1974) and Arnold J. W. Sommerfeld (1868–1951). It reached a state of maturity by 1930 when the mathematicians, David Hilbert (1862–1943), Paul Dirac (1902–1984) and John von Neumann (1903–1957), unified it and gave it


an elegant mathematical form. The story of quantum mechanics demonstrates a phenomenon, sometimes called the twentieth-century scientific revolution, which consisted in a quite unusual acceleration of the advancement of science.
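The quantum hypothesis mentioned above can be stated compactly; the following relation is recalled here only as a standard textbook reminder, not as part of the historical account:

\[ E = h\nu , \]

where $E$ is the energy of a single quantum of radiation, $\nu$ is the frequency of that radiation, and $h$ is the constant introduced by Planck (today called Planck's constant).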

3.3 Technoscience The story of the development of quantum mechanics is the best illustration of the fact that at the very beginning of the twentieth century, science started to be a collective and cosmopolitan enterprise. Although great personalities continued to play the decisive role in advancing science, the final research outcomes were getting more and more dependent on the harmonised effort of growing collectives composed of both scientific and technical staff. This turned out to be a long-run tendency leading to the situation that articles published in the domain of nuclear physics are often co-authored by hundreds of persons. The twentieth-century scientific revolution was not limited to the appearance of quantum mechanics; it was marked by many other spectacular events to mention only a few: – In 1905, Albert Einstein announced the special theory of relativity which challenged the concepts of absolute time and space, underlying the Newtonian mechanics, and established the equivalence of mass and energy. – A decade later, he published the general theory of relativity, providing a unified description of gravity as a geometric property of space–time; introducing, in particular, the concept of its curvature around the massive bodies like Sun or other stars. – The discovery of basic atomic constituents – electrons (1897), protons (1917), and neutrons (1932) – initiated the development of the so-called standard model of the particle physics, comprising today 24 elementary particles. – The American astronomer Edwin P. Hubble (1889–1953) provided evidence that the recessional velocity of a galaxy increases with its distance from Earth, which means that the Universe is expanding. This was a substantial observation supporting the so-called Big Bang hypothesis put forward by the Belgian astronomer Georges H. J. E. Lemaître (1899–1966) in 1927. The twentieth-century revolution in physics and cosmology was followed by the twenty-first-century revolution in chemistry and biology: chemists of today are shaping molecules at will; genetic engineers make possible intervention in the evolution of Homo sapiens. The chronological tables of highlight events in those sciences, provided in the Appendix, contain numerous achievements that happened in the second half of the twentieth and in the twenty-first century. They have been validated up to certain extent by the Nobel Prizes and other internationally recognised


scientific distinctions, but it is too early to provide an unbiased evaluation of their importance for further development of science. The twentieth-century scientific revolution implied radical and substantial changes to the structure and institutions of science. The process of specialisation – which for 250 years (from the seventeenth to the nineteenth century) led to the separation of physics, chemistry, biology and other empirical sciences from natural philosophy – accelerated at the turn of the nineteenth century, atomising both science and the community of researchers. Although the number of researchers grew exponentially, the number of those who had intellectual insight in more than one narrow discipline decreased dramatically. The process of “democratisation” of science, due to its massification, implied the rapid transformation of scientific vocation into scientific profession: the number of researchers whose involvement in research is motivated by financial rather than epistemological reasons grew enormously, especially in the second half of the twentieth century. During the last 50 years, the science-related institutions have undergone a transformation whose essence may be roughly summarised as follows: “from pursuit of truth to pursuit of money”. This happened for both internal and external reasons. The first group of them is related to the growing practical importance of science: – Because of increasing number of practical applications, science started to be an important factor of economic growth and welfare of society. – Two World Wars demonstrated that science is a key factor of global security and survival of Homo sapiens. – The same conclusion has been drawn from the experience of natural catastrophes and devastation of natural environment by human activity. The second group of reasons is related to the response of society to growing importance of science: – The scope of research problems, undertaken by research institutions, has been more and more determined by practical needs of industry and other sectors of economy: it has been shaped by politicians and businessmen rather than by curiosity-driven researchers. – New research institutions have been established at the national level, e.g. National Aeronautics and Space Administration in USA, and at the international level, e.g. European Organisation for Nuclear Research – CERN8, to carry out interdisciplinary investigations of the highest priority for the future of Homo sapiens. – Powerful research-and-development centres have been founded at the largest companies such as DuPont or Motorola, the units involved not only in investigation

8 This acronym is derived from the initial name of that institution: Conseil Européen pour la Recherche Nucléaire.


works directly oriented towards manufacturing needs but also in long-term strategic studies. – Elaborate systems for financing research, as well as sophisticated systems for quantitative evaluation of the productivity of research institutions and individual researchers, have been introduced at all levels of research financing in order to increase scientific productivity. – In parallel with institutional (professional) science, the so-called citizen science9 has developed, especially in such domains as astronomy, entomology, ornithology, seismology and internet technologies. All those organisational innovations and adjustments contributed to the transformation of science into technoscience, controlled by industrial and political institutions rather than by scientists themselves. The term technoscience appeared for the first time in 1953 in the work of the French philosopher Gaston Bachelard (1884–1962), but it was made popular by the Belgian philosopher Gilbert Hottois (*1946) only in the 1980s. It labels the result of the integration of science and technology, in progress since the nineteenth century – thus, the global system which includes academic and industrial research institutions, manufacturing companies and service providers, local and international agencies administering tests, and local and international organisations dealing with various aspects of global development. In this system, the style of management in industrial enterprises, because of the growing demand for scientific knowledge, is getting more and more similar to the style of management in research institutions. At the same time, the style of management in academic institutions increasingly recalls the style of management in enterprises, since those institutions constantly need more research infrastructure which cannot be financed from public sources. Consequently, an industrial company is starting to be a large laboratory of a scientific institution, and that institution – a research-and-development centre of this company. The style of governing such a tandem is, as a rule, closer to the style typical of business than of traditional research institutions. The schematic periodisation of the history of Western science, presented in this chapter, is rather conventional since it is impossible to indicate any strictly defined intervals of transition from protoscience to classical science and from classical science to technoscience. Certain research activities characteristic of the period of protoscience, e.g. alchemy and astrology, were not discontinued in the seventeenth century, and some of them are continued even today. A considerable amount of money was, for example, spent in the first half of the twentieth century

9 cf. the Wikipedia article “Citizen Science” available at https://en.wikipedia.org/wiki/Citizen_sci ence [2018-07-18].


on alchemical research aimed at the transformation of inexpensive metals, such as iron or lead, into gold. The list of the most famous cases includes mainly German names, among them that of Franz S. Tausend (1884–1942), whose extravagant projects were generously financed by Nazi politicians10.

10 F. Wegener, Der Alchemist Franz Tausend: Alchemie und Nationalsozialismus, Kulturförderverein Ruhegebiet, BRD 2013.

4 Philosophy of science in historical perspective

Philosophy of science is a philosophical discipline concerned with all assumptions, foundations, methods and implications of science, including applications of its findings in socio-economic practice. Its scope overlaps with epistemology, for example, when it explores the relationship between scientific knowledge and truth. Philosophy of science as an autonomous discipline is a product of the twentieth century. On the one hand, its development was influenced by the great intellectual challenges of quantum physics and the relativity theories; on the other, it was modulated by philosophical issues of the theory of evolution, psychoanalysis and Marxist economics1.

4.1 Philosophy of protoscience

Since the scientific method is an invention of Ancient Greece, it is reasonable to start the history of philosophy of science with the Athenian philosopher Plato, who introduced a fundamental distinction between the realm of material things, which is perceptible but unintelligible, and the realm of ideas, which is imperceptible but intelligible. According to him, only the latter could be objects of certain knowledge acquired by means of deductive reasoning. His disciple Aristotle disagreed: without discarding the role of reasoning in discovering the fundamental principles of the natural world, he stressed the importance of careful observation as a starting point of any cognitive process; he stated that knowledge is attained by the accumulation of empirical facts, their ordering and systematic presentation. Aristotle also indicated the need for causal explanation of the studied phenomena and processes; he distinguished material causes, formal causes, action-related causes and purpose-related causes. The treatise Organon2 is a summary of his approach to philosophy of science, which remained the authoritative reference in the tradition of the scientific method till the sixteenth century. During the medieval and post-medieval period, such figures as Albertus Magnus (1206–1280), Saint Thomas Aquinas, Roger Bacon (1220–1292), William of Ockham (1287–1347), Andreas Vesalius (1514–1564) and Giacomo Zabarella (1533–1589) tried to interpret it – to explain, in particular, what kind of knowledge could be obtained by means of observation and induction; they also attempted to find a justification of induction and to establish the rules for its application.

1 P. W. Humphreys, “Science, Philosophy of”, [in] Encyclopedia of Bioethics (Ed. W. T. Reich), Simon & Schuster, New York 1995, pp. 2333–2338. 2 in English: Instrument. https://doi.org/10.1515/9783110584066-004


4.2 Philosophy of classical science In the period of sixteenth–eighteenth centuries, accelerated advancement in acquisition of knowledge about the natural world was accompanied by intense reflection on the method by which that advancement had been achieved. The struggle to establish the new authority included methodological moves. The conviction – most explicitly expressed by Galileo Galilei – that “the book of nature” had been written in the language of mathematics, motivated the researchers to put an emphasis on mathematical description and mechanical explanation of natural phenomena. In the treatise Il Saggiatore3 (1623) Galileo Galilei initiated an approach to research methodology which is called today hypothetico-deductivism. He indicated mathematical modelling as a most adequate tool for describing the laws of nature; stressed the role of reason and imagination (thought experiments) in the elaboration of hypotheses; demonstrated the key role of physical experiments and deductive reasoning in verification of hypotheses; and emphasised the role of measurement in physical experiments. At the same time, the English statesmen and thinker Francis Bacon (1561–1626) published the treatise Novum Organum Scientiarum4 (1620), where he criticised the Aristotelian approach for overrating deductive reasoning, and postulated a new approach stressing the role of active experimenting and inductive reasoning. He is, therefore, considered to be the father of modern empiricism whose essence is in systematic observation and collection of data, both supported by instruments allowing for extension and correction of our senses, in order to reduce systematic errors and cognitive distortions, and in the use of inductive inference for formulation of general claims about objects of cognition. Francis Bacon was not a researcher himself (he spent a major part of his life in the state service), but he was a deep thinker, excellent speaker and essayist. This may explain the fact that his work influenced the development on philosophy of science to a greater extent than the research practices of the seventeenth-century scientists such as William Harvey or Robert Boyle (1627–1691). The latter criticised the Baconian empiricism for being too rigid – too remote from the real research practices of that time. Empiricism was also criticised from the rationalists’ position, mainly by the French philosopher and mathematician René Descartes (1596–1650) who indicated reason as the reliable source of scientific knowledge (rationalism), emphasised the role of deductive inference in scientific inquiry, and stressed the importance of systematic thinking. In the treatise Discours de la méthode5 (1637), he presented four precepts concerning scientific inquiry, among which the first was the most important one: never accept anything for true which is not clearly known to be such.

3 in English: The Assayer. 4 in English: New Instrument of Science. 5 in English: Discourse on the Method.


It was, however, neither Francis Bacon nor René Descartes, but Isaac Newton – a research practitioner and philosopher at the same time – who most influenced the research practice of the eighteenth century. He presented his methodological considerations together with his theories of mechanics and cosmology in the treatise Philosophiæ naturalis principia mathematica (1687), and together with his theories of optics in the book Opticks: Or, a Treatise of the Reflexions, Refractions, Inflexions and Colours of Light (1704). In particular, in Book III of the 1713 edition of Principia…, he defined four rules of scientific reasoning which have preserved their methodological significance up to now. The first of them, called today the principle of parsimony, states that the simplest explanation is generally the most likely. The second rule recommends avoiding special interpretations of experimental data if a reasonable general explanation already exists; the third suggests that a general explanation of a phenomenon under study should apply to all instances of that phenomenon; and the fourth postulates holding a scientific theory to be true unless it becomes problematic in the light of accumulated evidence. The latter rule indicates that Isaac Newton was open to future corrections or improvements of his laws of physics. On the other hand, his famous saying "Hypotheses non fingo"6 seems to suggest that he was convinced about the certainty of knowledge obtained by means of inductive reasoning based on empirical data. During the eighteenth century, the Newtonian methodology called inductivism was subject to significant clarifications and improvements made by numerous thinkers and research practitioners. The Scottish mathematician Colin MacLaurin (1698–1746), for example, elaborated on the procedure of using scientific generalisations for explaining new phenomena. The French Encyclopaedists, i.e. the authors and editors of Encyclopédie ou dictionnaire raisonné des sciences, des arts, et des métiers7, did much to consolidate and popularise the Newtonian methodology, and consequently, to transform it into a revolutionary force of the Enlightenment. At the same time, the Newtonian inductivism was subject to rationalist criticism. The German polymath and philosopher Gottfried W. Leibniz (1646–1716), in his short treatise Meditationes de cognitione, veritate et ideis8 (1684), emphasised the importance of formal logic in research work; endorsed the principle of sufficient reason, which states that everything must have a reason or a cause, as well as the principle of continuity, which states that nature never makes leaps; he also argued that scientific conclusions could not be reached by observations alone, but had to be based on the application of reason to those observations and prior scientific theories.

6 in English: “I frame no hypotheses”. 7 in English: Encyclopedia, or a Systematic Dictionary of the Sciences, Arts, and Crafts. 8 in English: Meditations on Knowledge, Truth, and Ideas.


The Scottish philosopher, historian and economist David Hume (1711–1776) provided a systematic and critical analysis of induction in general. In the treatise An Enquiry Concerning Human Understanding (1748), he pointed out that: – Causal relations are found not by deductive but by inductive reasoning because multiple effects may follow any cause, and the actual effect cannot be determined by deductive reasoning about that cause. – It is not necessary that present causal relations resemble causal relations in the past, but we paradigmatically assume that they do9. – On the basis of the repetitive observations that a state or phenomenon C (a hypothetical cause) is followed by a state or phenomenon E (an observed effect), we gather that this will always happen; such a conclusion is again justified by inductive reasoning. – Thus, inductive reasoning not only provides uncertain conclusions, but is itself doubtful as a method of cognition. Hume's analysis of induction motivated the Prussian philosopher Immanuel Kant (1724–1804) to seek new foundations for the scientific method by overcoming the opposition between empiricism and rationalism. In the treatise Kritik der reinen Vernunft10 (1781), he stated that: – Neither senses alone nor thoughts alone can ever exceed the natural human limitations and see the world in itself. – Senses enable the mind to lessen its limitations, and the mind extends the faculties of the senses (e.g. by scientific instrumentation). – Thus, rationalism without a contribution of empiricism cannot be correct, and vice versa. Both David Hume and Immanuel Kant significantly influenced the methodological reflection of the next century, in particular the famous debate between two English thinkers, John S. Mill (1806–1873) and William Whewell (1794–1866), over the certainty (or uncertainty) of inductive inferences in science. The essence of that debate has been frequently reduced to the confrontation of inductivism (John S. Mill) with hypothetico-deductivism (William Whewell), although its final account from today's perspective seems to indicate that the opponents differed in the distribution of accents rather than in the principles the scientific method should be based upon. John S. Mill, in his book A System of Logic, Ratiocinative and Inductive (1843), stressed the role of induction as a tool for searching for regularities in nature, and therefore in the identification of scientific laws. He also emphasised the importance of the

9 This assumption is called today the principle of the uniformity of nature. 10 in English: Critique of Pure Reason.


causality principle in that process and postulated the use of five methods for identifying causal relationships. These methods consist in detecting circumstances which are common among the phenomena under study, which are absent among the phenomena, and those which are varying together. Three of those methods – viz. the method of agreement, the method of difference and the method of concomitant variation – were first described by the Persian polymath Avicenna in his book The Canon of Medicine (1025); the remaining two – viz. the method of residues and the joint method of agreement and difference – were first proposed by John S. Mill. William Whewell emphasised the role of hypotheses and deduction in the research process, but he did not ignore the role of induction. He pointed out that knowledge is due to some objective factors (what we see in the world around us) and of some subjective factors (how we perceive and understand our experience); both groups of factors and their interplay are essential for the productivity of scientific research. From this perspective, he criticised Immanuel Kant for overestimating subjective factors, and John S. Mill for excessively focusing on the role of senses. In the book The Philosophy of the Inductive Sciences, Founded Upon Their History (1840), William Whewell proposed three steps of inductive reasoning in science: (1) the selection of fundamental ideas, such as space, number, cause or likeness (resemblance); (2) the formation of a specialised conception related to those ideas, such as a circle or a uniform force; and (3) the determination of the magnitudes of related quantities. He pointed out the importance of the inventiveness in the “discoverer’s induction” and of the effectiveness of quantitative methods – such as the method of curves, the method of means, the method of least squares or the method of residues – in testing the hypotheses. The advancement of specialisation in the nineteenth-century science enriched the diversity of perspectives the scientific method was viewed of. The representatives of medical and social sciences joined philosophers and physicists in their search for the definition of an ideal research methodology. The French physiologist Claude Bernard (1813–1878) was probably most prominent among them. In the book Introduction à l’étude de la médecine expérimentale11 (1865), he redefined in some way the scientific method by indicating that: (1) experimental research consists in a constant interchange between induction and deduction; (2) scientific explanation consists in a disclosure of cause-and-effect relationships which (at least in biology and medicine) should be deterministic; (3) falsification is a key method of selection of new information to be integrated in the body of knowledge. According to Claude Bernard, experimental science should be neutral with respect to philosophy. The maturing of the scientific method in the nineteenth century supported the positivist orientation of epistemology, i.e. understanding of knowledge following the

11 in English: An Introduction to the Study of Experimental Medicine.


postulates of phenomenalism and nominalism12. Phenomenalism is the view, which can be traced back to the subjective idealism of the Irish philosopher George Berkeley (1685–1753), that physical objects do not exist in themselves, but only as perceptual phenomena or sensory stimuli – such as redness, hardness or sweetness – located in time and space. On the other hand, nominalism is the view, which can be traced back to the English scholastic philosopher and theologian William of Ockham, that only individuals and no abstract entities (such as essences, classes or propositions) exist. The positivists stated that: (1) all knowledge regarding matters of fact is based on the “positive”13 data resulting from experience; (2) beyond the realm of facts, there is only the realm of pure logic and mathematics. They repudiated metaphysics, i.e. any kind of speculation on reality that goes beyond evidence supporting or refuting such “transcendent” knowledge claims14. The first mature version of positivism may be identified in the works of the French philosopher I. M. Auguste F. X. Comte (1798–1857) who named and systematised the scientific discipline called sociology. During the last 150 years, positivism evolved through several stages – known under the names of empiriocriticism, logical positivism and logical empiricism – to finally merge in the AngloSaxon analytic philosophy by the middle of the twentieth century. Before it happened, the American philosopher, logician and mathematician Charles S. Peirce (1839–1914) proposed the pragmatic model of scientific inquiry. In a series of articles entitled “Illustrations of the Logic of Science”, published in the Popular Science Monthly magazine (1877–1878), he declared that the aim of scientific inquiry is to get rid of inhibitory doubts, and to reach secure beliefs enabling us to act (regardless of their epistemic relationship to reality). He stated that this aim may be attained by generation of explanations resulting from the coordinated use of deductive, inductive and abductive inference.
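The triad of inference types Peirce had in mind is often illustrated with a simple schematisation; the rendering below is a common textbook sketch, added here only for illustration, in which a general rule, a particular case and an observed result are combined in three different directions:

\begin{align*}
\text{Deduction:}\quad & \text{Rule} \wedge \text{Case} \;\Rightarrow\; \text{Result}\\
\text{Induction:}\quad & \text{Case} \wedge \text{Result} \;\leadsto\; \text{Rule}\\
\text{Abduction:}\quad & \text{Rule} \wedge \text{Result} \;\leadsto\; \text{Case}
\end{align*}

Only the first of these is logically conclusive; the other two yield conclusions that are at best plausible, which is consistent with the view recalled above that their coordinated use serves the fixation of belief rather than the attainment of certainty.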

4.3 Philosophy of science in the first half of the twentieth century

As noted in Chapter 3, the beginning of the twentieth century abounded in scientific breakthroughs in physics: Einstein's relativity theories and quantum mechanics caused both scientists and philosophers to deepen their reflection on the nature of the physical world, and especially on the nature of human knowledge concerning the physical world. On the other hand, significant progress was attained in the domain of formal logic and its application to clarifying the foundations of

12 L. Kołakowski, Filozofia pozytywistyczna, Wyd. Nauk. PWN, Warszawa 2009. 13 The term was derived from the French adjective positif whose philosophical meaning is “imposed on the mind by experience”. 14 cf. the Encyclopaedia Britannica article “Positivism” available at https://www.britannica.com/ topic/positivism [2018-07-17].


mathematics. The German logician and mathematician F. L. Gottlob Frege, in the book Begriffsschrift, eine der arithmetischen nachgebildete Formelsprache des reinen Denkens15 (1879) proposed a rigorous treatment of the ideas of functions and variables, and formalised axiomatic predicate logic. That book and his other works were seminal for the contributions made by two British philosophers and mathematicians, Bertrand A. W. Russell and Alfred N. Whitehead (1861–1947), who provided the logical foundations of mathematics in the three-volume book Principia Mathematica (1910–1913), as well as by the Austrian logician and mathematician Kurt F. Gödel (1906–1978) – the author of the incompleteness theorems (1931) – and the Polish logician and mathematician Alfred Tarski (1901–1983) – the author of the theory of truth in formalised languages (1933). In 1920, the German mathematician David Hilbert (1862–1943) put forward a research programme aimed at formalisation of mathematics on the basis of a finite and consistent system of axioms. Despite the fact that this goal turned out to be unattainable due to the incompleteness theorems, the programme itself significantly contributed to the unification of mathematics, and inspired scientists and philosophers of science to think about logic as an ideal language for modelling cognitive processes.16 The above outlined situation of logic and mathematics, by the end of the nineteenth century and in the first decades of the twentieth century, was a background of the neo-positivist movement called logical positivism and (later) logical empiricism. It was mainly represented by two groups of philosophers: Wiener Kreis17 and Berliner Kreis18. The first of them, chaired by F. A. Moritz Schlick (1882–1936), was active in the period 1918–1934, and brought together scientists and philosophers such as Rudolf Carnap (1891–1970), Herbert Feigl (1902–1988), Philipp Frank (1884–1966), Kurt F. Gödel, Richard von Mises (1883–1953), Otto Neurath (1882–1945) and Edgar Zilsel (1891–1944). The philosophers and mathematicians such as Hans Reichenbach (1891–1953), Kurt Grelling (1886–1942), Walter Dubislav (1895–1937), Carl G. Hempel (1905–1997) and David Hilbert were the most prominent members of the second neo-positivist group, active in the period 1920–1934. Logical positivism stated that scientific knowledge is the only kind of factual knowledge, and its ultimate basis rests upon collective experimental verification (or confirmation) rather than upon personal experience. It held that metaphysical doctrines are meaningless because the questions about substance, causality, freedom and God are unanswerable due to the limitations of our language; the only meaningful answers may be given to the

15 in English: Concept-Script: A Formal Language for Pure Thought Modelled on that of Arithmetic. 16 P. Machamer, “A Brief Historical Introduction to the Philosophy of Science”, [in] The Blackwell Guide to the Philosophy of Science (Eds. P. Machamer, M. Silberstein), Blackwell, Malden (USA) – Oxford (UK) 2002, Chapter 1. 17 in English: the Vienna Circle. 18 in English: the Berlin Circle.


questions concerning nature. This is the aim of science, and all genuine knowledge about nature can be expressed using the language of logic and mathematics, common to all the sciences. After Hitler's rise to power, the majority of European neo-positivists emigrated to the USA, where their epistemological project evolved till the mid-century, when it lost its importance. Some of its elements, however, have survived till today. We still try to follow their recommendations concerning the scientific claims about the world: (1) to make them clear, free of ambiguity and other confusions inherent in natural language; (2) to verify their validity using procedures and criteria agreed upon by the scientific community19. The positivists, who followed the empiricist tradition, thought that the basis of verification lies in observation- and experiment-based tests that made science reliable and different from other types of knowledge claims. What was needed for this purpose was a set of sentences that bridged the gap between scientific theories and experiments or observations. Those sentences, called reduction sentences, constituted the observation language; they were assumed to be easily verifiable, i.e. classifiable as true or false. The bridge sentences were organised in an axiomatic structure, in which all their logical relations and deductions from them could be made explicit. The most important sentences in a scientific theory were called laws of science. Two types of laws were distinguished: universal and statistical. The first, also called causal, had unrestricted application in space and time; the latter provided only more or less probable conclusions about the natural world. A scientific explanation, called deductive-nomological explanation20, was understood as a particular sentence deduced from a universal law. A particular sentence – deduced from a theory before a fact was observed, and verified by an experiment or observation – was considered to confirm that theory.21 The mature version of the deductive-nomological model of scientific explanation was presented by one of the members of the Berliner Kreis, Carl G. Hempel, in the article Studies in the Logic of Confirmation (1943, 1945). He stated there that: – a phenomenon is explained if it can be viewed as a logical consequence of a law of nature; – a scientific explanation is a deductive argument with at least one natural law among its premises; – what is to be explained is called the explanandum; the premises stated to explain it are called the explanans; and – a natural law is a generalisation having the form of a conditional proposition (if A, then B) and being empirically testable.
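The deductive-nomological pattern described above is often summarised with a simple argument schema; the rendering below is a standard textbook sketch added for illustration, not a quotation from Hempel:

\[
\underbrace{L_1, \ldots, L_n}_{\text{laws of nature}}, \quad \underbrace{C_1, \ldots, C_k}_{\text{particular conditions}} \;\;\vdash\;\; \underbrace{E}_{\text{explanandum}}
\]

Here the explanans consists of the premises $L_1, \ldots, L_n$ and $C_1, \ldots, C_k$, and the explanandum $E$ is required to follow from them by deductive inference.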

19 P. Machamer, “A Brief Historical Introduction to the Philosophy of Science”, 2002. 20 from Greek nomos = “law”. 21 P. Machamer, “A Brief Historical Introduction to the Philosophy of Science”, 2002.


The name of Carl G. Hempel is also associated with the Paradox of the Ravens, demonstrating that the use of empirical evidence for confirmation of a hypothesis may be a risky operation from logical point of view. If one has two logically equivalent hypotheses: “all ravens are black” (H0 ) and “all non-black objects are non-ravens” (H1 ), then one may conclude that the observation of a white shirt in a wardrobe confirms the hypothesis H0 because it confirms the hypothesis H1 . This is a logical consequence of the fact that both hypotheses, H0 and H1 , are equivalent. This means that by appropriate logical manipulations, one can “confirm anything by anything”. The model of science, postulated by logical positivism, never worked in research practice, but inspired many important debates about the language of science, the relations of explanation and confirmation, the formulation of the verification principle, the nature of observations, etc. Its creative potential, however, atrophied during the 1940s and 1950s despite systematically introduced improvements. Already in the 1930s, the Austrian-British philosopher Karl R. Popper (1902–1994) questioned the principle of verification. Within the framework of critical rationalism, he stated that: – empirical evidence can only be used to discard ideas, not to support them; – ergo, not verification, but falsification is the fundamental tool of selection of ideas; – falsification is based on deductive inference rather than on inductive reasoning; – falsifiability is a key criterion of demarcation between science and non-science; and – organised criticism (scepticism) is a key virtue of scientific community. Karl R. Popper broadly explained his approach to philosophy of science in the book Logik der Forschung, published in 1934, and republished in English, under the title The Logic of Scientific Discovery, in 1959. He stated there that the aim of science is to produce tested and non-falsified theories having greater universality and richer informational content than their predecessor theories addressing the same subject. Thus, his concept of the aim of science was focused on the growth of scientific knowledge.22
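The logical asymmetry exploited by falsificationism, as recalled above, can be made explicit in a one-line sketch added here merely as an illustration of the standard schema: if a theory $T$ deductively entails an observable consequence $O$, then a negative test result refutes $T$ by modus tollens, whereas a positive result does not prove $T$:

\[
\bigl( (T \rightarrow O) \wedge \neg O \bigr) \;\Rightarrow\; \neg T, \qquad \bigl( (T \rightarrow O) \wedge O \bigr) \;\nRightarrow\; T .
\]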

4.4 Philosophy of science in the second half of the twentieth century

While the neo-positivists focused on a normative approach to philosophy of science, another group of philosophers began to look at the history of science: at actual

22 T. J. Hickey, “Philosophy of Science: An Introduction”, Manuscript published on the internet, 2016, http://www.philsci.com/index.html [2016-12-16], p. 75.


practices of science. They used historical and current case studies for pointing out drawbacks of the idealised positivistic models of science. They argued, in particular, that the observation language could not be independent of the theoretical language because the terms of the observation language were taken from the scientific theory to be tested: thus, all observation was theory-laden. They also doubted the need for shaping all scientific theories as axiomatic systems since even theories of physics, e.g. classical mechanics, had been useful tools of scientific explanation long before their axiomatisation. The critics of neo-positivism doubted, moreover, the role of deduction in the process of scientific explanation because various attempts to formulate the deductive-nomological model of explanation in terms of necessary and sufficient conditions failed: on the one hand, counter-examples were found; on the other, the actual practice of scientific explanation turned out to be more complex.23 The American philosopher Henry N. Goodman (1906–1998), in his book Fact, Fiction, and Forecast (1955), returned to David Hume's claim that inductive reasoning is based solely on human habits and regularities to which our day-to-day existence has accustomed us. He noted that some regularities establish habits while some do not, and pointed out the difficulty of differentiating between those two kinds of regularities in attempted constructions of law-like statements (this problem is called the New Riddle of Induction). He provided a partial solution to this problem by explaining Hempel's Paradox of the Ravens in the following way: – In the absence of any background (a priori) information, the observation of a white shirt in isolation does confirm the hypothesis that all ravens are black. – At the same time, it does confirm the hypothesis that all black things are ravens, that everything that is non-black is a non-raven, or that there are no black things which are not ravens. – However, we know that these generalisations are false because they do not fit in with our entire evidence; hence, we must not ignore background information.24 By the end of the 1950s, the American philosopher of science Norwood R. Hanson (1924–1967) advanced the argument that observation is theory-laden – that observation language and theory language are deeply interwoven. In his work Patterns of Discovery (1958), he demonstrated that what we see and perceive is filtered by our preconceptions; he also used Charles S. Peirce's notion of abduction for explaining how scientific discoveries take place25. More influential than Norwood R. Hanson's contribution to philosophy of science, however, turned out to be the concepts of paradigm shift and scientific revolution, introduced by Thomas S. Kuhn (1922–1996) – the


American historian and philosopher of science with educational background in physics – whose book The Structure of Scientific Revolutions (1962) has been published and republished up to now in 16 languages. In this book, he introduced the concept of scientific paradigm which in its mature version comprises four components: – The symbolic generalisations, i.e. those fundamental laws and principles of a scientific discipline that underpin all theoretical work in this discipline, such as the laws of genetic replication or the principle of natural selection of species in biology. – The metaphysical specification of the subject-matter of the discipline, such as atomistic or field-theoretical assumptions in physics or a commitment to specifically mental properties as opposed to material properties in psychology. – The value commitments answering the questions about what constitutes an acceptable piece of evidence in the discipline, what are its goals and what are the relevant ethical standards to be followed. – The exemplification of the fruitfulness of the first three elements, a list of quintessential successes (of that discipline) which may be attributed to them.26 According to Thomas S. Kuhn: – The history of science can be divided into longer periods of its “normal” development and shorter periods of its “revolutionary” development. – During the periods of normal development of science, the scientists develop new theories or improve existing ones relaying on an accepted scientific paradigm. – During the periods of revolutionary science, anomalies refuting the accepted theories force the scientists to change the paradigm. – Since experiments always rely on some background theories which are rooted in a specific paradigm, it is impossible to isolate the hypotheses under test from the influence of those theories and that paradigm. During a scientific revolution, i.e. an abrupt change of the paradigm, even the very meaning of the same terms may change, e.g. the meaning of mass in the Newtonian mechanics is different from the meaning of mass in the Einsteinian mechanics. Because of the incommensurability of paradigms, scientific revolutions lead to schisms in the development of science, with a resulting loss of the notion of scientific progress. Since the successive paradigms cannot be compared on a uniform scale, technological impact of research findings, without necessary commitment to the truth, is used as a replacement criterion for comparing alternative paths in the development of science27. The main methodological implication of the claims made

26 P. W. Humphreys, “Science, Philosophy of”, 1995. 27 ibid.


by Thomas S. Kuhn was that philosophers of science should confine their study of science to a historically dominant paradigm – not look for more general, meta-paradigmatic models of its development. Many philosophers criticised the idea of scientific revolutions for at least two reasons: – The concept of paradigm is far too vague; when trying to define it more precisely, one discovers that its historical exemplifications are far more varied and complex than those referred to in the book The Structure of Scientific Revolutions. – A closer historical analysis of the most cited paradigms reveals that they tend to reduce to overlapping less important paradigms, and the periods of the normal development of science seem to be like sequences of smaller revolutions separated by relatively short periods of steady development of science. By the end of the 1960s, the Hungarian philosopher of science Imre Lakatos (1922–1974) made an attempt to reconcile the Popper’s critical rationalism, including its key idea of the rigorous falsification, with the Kuhn’s concept of normal science. In the article “Falsification and the Methodology of Scientific Research Programmes” (1970), he suggested that a consistent set of theories, called research programme, should be subjected to scrutiny rather than an isolated theory. He argued that: – A research programme is based on some irrefutable assumptions surrounded by a “protective belt” of auxiliary hypotheses. – If one of the theories fails, the set of auxiliary hypotheses is revised rather than the set of irrefutable assumptions. – The programme is considered productive as long as the revision of auxiliary hypotheses is sufficient to explain anomalies. – A theory is pseudoscientific if it fails to make any novel predictions of previously unknown phenomena (like the Freud’s psychoanalysis, the Lysenko’s biology, astrology or the Darwin’s theory of evolution). The American epistemologist and philosopher of science Larry Laudan (*1941) developed an important alternative to the research programmes of Imre Lakatos, viz. the concept of research traditions. In the book Progress and Its Problems: Towards a Theory of Scientific Growth (1977), he defined a research tradition as a set of general assumptions about the entities and processes in a given domain of science and about the appropriate methods to be used for investigating the problems and constructing theories in that domain. Such research traditions should be seen as historical entities created and articulated within a particular intellectual environment – the entities which emerge, grow and disappear. The change in science is driven, according to Larry Laudan, mainly by problem solving. Changes within a research tradition may be minor modifications of narrow specific theories – such as modifications of boundary conditions, corrections of constants, refinements of terminology – or expansion of a theory aimed at encompassing new discoveries. Such changes solve empirical problems which by Thomas S. Kuhn would be considered as anomalies. Changes within a


research tradition might also involve its core elements when severe anomalies cannot be eliminated by modifying specific theories within that tradition. As a rule, such situations are symptoms of a profound conceptual problem whose solution requires deeper adjustments of the methodology of that research tradition. The resolution of conceptual problems may even result in a theory with lesser empirical support, which is considered as more progressive because it enables more effective problem solving. For Larry Laudan, there are no untouchable elements within a research tradition – everything is open to change over time. For example, absolute time and space in the Newtonian physics lost its paradigmatic status at the beginning of the twentieth century. If the problem-solving apparatus of a research tradition undergoes deep transformations, a new research tradition may emerge as the result of this process, but the replacement of a research tradition with another seems to be both arbitrary and open-ended. In the 1960s, Anglo-American philosophers, who considered logical positivism as an anachronism, focused on the study of real scientific language and inspirations coming from the history of science. The results of their work were “invigorating, exciting and devastating”28 for philosophy of science. The extrapolation of the models of scientific change, identified on the basis of the developments of physics, on other sciences turned out to be impossible. The Austrian-American philosopher of science Paul K. Feyerabend (1924–1994) recognised this situation as evidence that science had no identifiable structure, and expressed the view that in science, like in art, “anything goes”. In his books Against Method (1975) and Science in a Free Society (1978), he defended the idea that there is no single methodological rule which is always used by all scientists. He objected to any attempt to impose such a rule on the scientific community, arguing that it would limit their creativity, and consequently, slow down the scientific progress. For this reason, his approach to philosophy of science is called epistemological anarchism. According to Paul K. Feyerabend, all scientific evidence and proof is mainly rhetorical, and the rhetoric skills of scientists have decisive impact on the acceptance of their theories. His views have not found many adherents among science practitioners and philosophers of science, but they have been still quite popular among contemporary representatives of some academic disciplines, mainly humanities, pretending to be scientific. Nevertheless, the claims of epistemological relativism – in the atmosphere of cultural and moral relativism, which grew in the 1970s and 1980s – provoked worries and doubts of society about the values of science and its status. An attempt to rehabilitate some ideas of logical positivism (empiricism) was made in the 1980s by the American philosopher of science Bastiaan C. van Fraassen (*1941) who named his approach constructive empiricism. In the book The Scientific Image (1980), he argues that the theories are not literally true, i.e. they do not give any

28 P. Machamer, “A Brief Historical Introduction to the Philosophy of Science”, 2002.


account of the physical world; all we can require of them is empirical adequacy. According to him, a theory is empirically adequate if and only if everything that it says about observable entities is true, regardless of what it says about unobservable entities. In his next book Laws and Symmetry (1989), Bastiaan C. van Fraassen addressed the problem of underdetermination, i.e. the problem implied by the fact that the same evidence (the same set of empirical data) may support alternative theories, and all of them can be logically maintained when confronted with new evidence.

4.5 Recent trends in philosophy of science

The logical positivists (empiricists) were those who succeeded in placing science at the centre of philosophical debates. They tried to follow the empiricist tradition, according to which all genuine knowledge must be reducible to knowledge obtainable by empirical methods and, ultimately, to that obtainable through the human sensory apparatus. They enhanced, however, this empiricist view with a deep concern for logical analyses of philosophical concepts. The interest in general issues of philosophy of science was continued in the years 1950–1980, when the positivist legacy was subject to substantial re-evaluation and revision; since 1980 those general issues have been downplayed in favour of issues specific to individual sciences29. Recent decades have not brought a significant breakthrough in philosophy of science. Its present panorama encompasses a combination of different methodological orientations: the development of post-positivist and neo-pragmatic ideas has been continued, and some new options have appeared. As far as the latter are concerned, the new conceptualisations associated with naturalist approaches, with scientific realism and with the application of probabilistic methods seem to be worth overviewing.30

4.5.1 Neo-pragmatism

The neo-pragmatic approach – rooted in the tradition of classical pragmatism developed by the American thinkers Charles S. Peirce, William James (1842–1910) and John Dewey (1859–1952) – dominates today's philosophy of science, especially in American academia. According to Thomas J. Hickey (1930–2016): "Both successful science and contemporary philosophy of science are pragmatic. In science, as in life, realistic pragmatism is what works successfully"31. The aim of contemporary neo-pragmatic philosophy of science is to discover principles that explain successful

29 P. W. Humphreys, “Science, Philosophy of”, 1995. 30 W. J. Gonzalez, “Novelty and continuity in philosophy and methodology of science”, 2006. 31 T. J. Hickey, “Philosophy of Science: An Introduction”, 2016, p. 1.


research practices in order to advance the development of (mainly basic) sciences. Those practices may be roughly characterised as follows: – The institutionalised aim of science is the development of laws and theories which satisfy empirical tests, and thereby can be used in scientific explanation32. – A discovery is the first step towards the aim of science since it is initiating the processes of constructing new theories: completely new or empirically more adequate than the existing ones33. – The evaluation of theories, aimed at their acceptance or rejection, is based exclusively on empirical criteria34. The contemporary neo-pragmatic philosophy of science is also rooted in the reflection over the language of quantum physics which led such physicists as Niels H. D. Bohr, Werner K. Heisenberg and David J. Bohm (1917–1992) to the conclusion that the mathematical equations of quantum theory must be viewed instrumentally rather than descriptively, and that they are merely useful linguistic instruments which enable correct predictions35.

4.5.2 Naturalism Naturalism is a philosophical stance referring to the assumption that nature is the only reality: there is nothing other than natural beings, natural events and natural phenomena. They are, at least in principle, completely knowable; so, all knowledge of nature may be subject to scientific investigation. Some naturalists deny the existence of supernatural entities while some others allow for existence of such entities, provided that knowledge about them may be acquired in an indirect way by investigation of natural objects being influenced by them. Naturalism presumes that there is a regularity and uniformity in nature that implies objective laws without which the pursuit of scientific knowledge would be irrational. According to the American philosopher Alexander Rosenberg (*1946), “progressivity” is a central feature of naturalism in philosophy of science36. Naturalism used to be very popular in the USA already in the first half of the twentieth century – it was then represented by such figures as Frederick J. E. Woodbridge (1867–1940), Morris R. Cohen (1880–1947), Ernest Nagel (1901–1985) and Sidney Hook (1902–1989). It appeared, however, in several versions:

32 ibid., pp. 18, 72, 78. 33 ibid., pp. 5, 18, 19. 34 ibid., pp. 20, 88, 106. 35 ibid., p. 12. 36 A. Rosenberg, “A Field Guide to Recent Species of Naturalism”, The British Journal for the Philosophy of Science, 1996, Vol. 47, p. 4.


– ontological naturalism, whose proponents accept only observable entities, and deny the legitimacy of unobservable ones, such as mind or consciousness; – epistemological naturalism, whose adherents hold that empirical information is essential to the normative function of epistemology, and that a priori ideals of reasoning fail to provide useful epistemic advice because they ignore constraints on human physical and cognitive capacities such as memory, attention or lifespan; – methodological naturalism, whose supporters believe that the progress of science (including the progress of social sciences) can be achieved through the use of procedures of empirical testing based on the criteria applied in natural sciences; – semantic naturalism, whose proponents accept meaning resulting from linguistic use because meaning is based on usage rather than on prescriptions.37 Elements of naturalism, supported by findings of cognitive science, appear in the works of such contemporary philosophers of science as Ronald Giere (*1938), Alvin Goldman (*1938) and Paul R. Thagard (*1950). Larry Laudan is the proponent of normative naturalism stating that both science and philosophy should follow the same methodological principles derived from (mainly historical) research practice. According to him: – Truth cannot be the aim of science because there is no epistemological criterion to determine whether we have reached it. – The epistemic values related to truth should be replaced with cognitive values which are used in the evaluation of scientific theories, such as the scope, generality, coherence, consilience or explanatory power. – Moreover, some social values should be taken into account since any human endeavour is grounded in social processes of communication and negotiation.38

4.5.3 Scientific realism

The relationship of human knowledge to reality has for centuries been a subject of fundamental disagreement among philosophers. The related debates have good prospects of being continued in the future because there are no logical or empirical means to prove the exclusive validity of any epistemological stance in this respect. The main criterion of the diversification of stances is the realism–antirealism axis39. The most important version of antirealism is called instrumentalism (and will be

37 W. J. Gonzalez, “Novelty and continuity in philosophy and methodology of science”, 2006. 38 L. Laudan, “The Epistemic, the Cognitive, and the Social”, [in] Science, Values, and Objectivity (Eds. P. Machamer, G. Wolters), University of Pittsburgh Press, Pittsburgh 2004, pp. 14–23. 39 M. Liston, “Scientific Realism and Antirealism”, [in] Internet Encyclopedia of Philosophy (Eds. J. Fieser, B. Dowden), https://www.iep.utm.edu/sci-real/ [2018-07-20].


discussed in Chapter 5), while realism appears most frequently in two principal versions: naïve realism and scientific realism. Naïve realism (also called common-sense realism) is the stance rooted in the idea that the senses provide us with direct awareness of nature as it really is. Naïve realism refers to the following set of beliefs: – There exists a world of material objects and phenomena which exist not only when they are being perceived. – These objects and phenomena retain their properties (such as size, shape, texture, smell, taste and colour) not only when they are being perceived. – Some statements about these objects and phenomena can be, therefore, known to be true through sense experience, which means that our claims to have knowledge of them are justified.40 Naïve realism is defended by some contemporary philosophers of mind and philosophers of science. Most prominent among them are the American philosopher of language John R. Searle (*1932), the South-African epistemologist John H. McDowell (*1942) and the British analytic philosopher of mind Galen J. Strawson (*1952). Naïve realism is rather uncommon among scientists. Much more frequently they recognise their adherence to scientific realism (also called representational realism) which states that the nature contains just those properties which appear in its scientific description – not the properties like size, shape, texture, smell, taste or colour. While a naïve realist claims, for example, that the experience of a sunset is the real sunset that we directly experience, a scientific realist says that our relation to reality is indirect, so the experience of a sunset is a subjective copy of what really is radiation as described by physics. Until the end of the nineteenth century, physics remained compatible with the naïve realism of everyday thinking. The situation changed with the advent of quantum mechanics: the physicists and philosophers started to realise that naïve realism, although useful at the level of direct observation, fails at the microscopic level; they could not find satisfactory reasons for ascribing objective existence to physical quantities as distinguished from their mathematical models identified and interpreted on the basis of experimental data. According to scientific realism: – We ought to regard scientific theories as true, approximately true or likely true because truth is the aim of science. – The language of science can express or carry out an objective content, where meaning and reality (the reference) have an identifiable nexus. – Scientific knowledge should be oriented towards truth, and so individual or social certainty is not sufficient for a realist approach.

40 W. J. Gonzalez, “Novelty and continuity in philosophy and methodology of science”, 2006.


– Among the criteria of scientific progress, those related to an improvement in the methods for searching for truth are of particular importance.
– The existence of the world is, in principle, independent of human cognition, but the knowledge about it can become intelligible to scientists through research.
– Cognitive values in science may have an objective status, and therefore are not reducible to a social construction.41
There are several kinds of scientific realism: semantic, logical, epistemological, methodological, ontological, axiological and ethical. It should be noted that a philosopher of science or a research practitioner can be realist in some aspects of science and antirealist in some others. Epistemological realism, being of particular significance for philosophy of science, has a number of variants: materialist realism as represented by the American philosopher Hartry H. Field (*1946), convergent realism as represented by the American philosopher Richard N. Boyd (*1942), internal realism as represented by the American philosopher and mathematician Hilary W. Putnam (1926–2016) or critical realism as represented by the Finnish philosopher Ilkka M. O. Niiniluoto (*1946). The latter variant refers to the following beliefs:
– At least a part of reality is ontologically independent of human minds.
– The concept of truth, understood as a semantical relation between language and reality, is applicable to all statements resulting from scientific enquiry, including scientific laws and theories.
– Although truth is not easily attainable and even the best theories can turn out to be false, it is possible to approach it, and to make rational assessments of the related cognitive progress.
– The best explanation for the practical success of science is the assumption that scientific knowledge is sufficiently close to the truth in the relevant aspects.42
The key argument in favour of scientific realism is its ability to explain the predictive success of scientific theories: this success would be a miracle if those theories were not at least approximately true. This argument is of abductive nature, i.e. based on the inversion of the following reasoning: if a theory is true, then all its empirical deductive consequences are also true43. This argument has been quite

41 ibid. 42 I. Niiniluoto, Critical Scientific Realism, Clarendon Press, Oxford 1999, p. 10; I. Niiniluoto, “Optimistic realism about scientific progress”, Synthese, 2017, Vol. 194, No. 9, pp. 3291–3309. 43 P. Lipton, “Inference to the Best Explanation”, [in] Companion to the Philosophy of Science (Ed. W.H. Newton-Smith), Blackwell, 2000, pp. 184–193.


frequently repeated despite the criticism raised by some philosophers of science, e.g. by Bastiaan C. van Fraassen or Larry Laudan44.

4.5.4 Bayesianism

As early as the second half of the nineteenth century, James Clerk Maxwell wrote:

They say that Understanding ought to work by the rules of right reason. These rules are, or ought to be, contained in Logic; but the actual science of Logic is conversant at present only with things either certain, impossible, or entirely doubtful, none of which (fortunately) we have to reason on. Therefore the true Logic for this world is the Calculus of Probabilities, which takes account of the magnitude of the probability (which is, or which ought to be in a reasonable man’s mind). This branch of Math., which is generally thought to favour gambling, dicing, and wagering, and therefore highly immoral, is the only “Mathematics for Practical Men”, as we ought to be.45

This was a very “practical” conclusion drawn from the experience of classical science, including its failures and unresolved problems. Eliminative induction, proposed by Francis Bacon in the seventeenth century, was based on the assumption that it is possible to generate an almost exhaustive set of hypotheses concerning some state of affairs, and next reduce it to a single hypothesis by progressive elimination of its rivals on the basis of new evidence resulting from observation or experiments. It was an early attempt to transform unreliable inductive inference into something as reliable as deductive reasoning. It turned out, however, that even the richest (but finite) evidence is not sufficient to exhaustively justify a universal hypothesis. As a rule, however, it may in some way contribute to the discrimination of rival hypotheses. Evidence, therefore, at least partially justifies the choice of the best hypothesis: the latter can never be fully justified, but it may be more or less confirmed. This was the motivation behind the early attempts of the logical positivists to develop probabilistic methodologies for the confirmation of hypotheses, using three interpretations of probability: frequentist, logical and subjectivist46. Both a methodology based on the frequentist interpretation, proposed by Hans Reichenbach47,

44 I. Niiniluoto, “Peirce, Abduction and Scientific Realism”, [in] Ideas in Action: Proceedings of the Applying Peirce Conference, Nordic Studies in Pragmatism 1 (Eds. M. Bergman, S. Paavola, A.-V. Pietarinen, H. Rydenfelt), Nordic Pragmatism Network, Helsinki 2010, pp. 252–263. 45 L. Campbell, W. Garnett, The Life of James Clerk Maxwell: With a Selection from His Correspondence and Occasional Writings and a Sketch of His Contributions to Science, MacMillan & Co., London 1882, p. 143. 46 A. Hâjek, “A Philosopher’s Guide to Probability”, Manuscript published on the internet, 2008, https://openresearch-repository.anu.edu.au/handle/1885/32752 [2017-01-09]. 47 H. Reichenbach, Wahrscheinlichkeitslehre. Eine Untersuchung über die Logischen und Mathematischen Grundlagen der Wahrscheinlichkeitsrechnung, Uitgeverij Sijthoff, Leiden 1935.


and a methodology based on the logical interpretation, proposed by Rudolf Carnap48, failed already in the first half of the twentieth century. The subjectivist interpretation of probability, underlying so-called Bayesianism, although it attracted the attention of philosophers of science in the 1960s and 1970s, started to be particularly influential only after the appearance of the book Scientific Reasoning: The Bayesian Approach in 198949.

Bayesianism is rooted in the observation that the degree of confirmation may be measured by the conditional probability with respect to given evidence. Using the concept of conditional probability Pr(H|E), one can define the relationship between statements (sentences or ideas) – H expressing a hypothesis and E carrying evidence – which is a generalisation of the usual implication. Within this framework:
– The equality Pr(H|E) = 1 means: if there is E, then Pr(H) = 1, i.e. H is certain, and therefore true. Thus, Pr(H|E) = 1 is equivalent to the implication E ⇒ H.
– The equality Pr(H|E) = 0 means: if there is E, then Pr(H) = 0, i.e. H is impossible, and therefore false. Thus, Pr(H|E) = 0 is equivalent to the implication E ⇒ ¬H, where ¬H stands for “not H”.
– An intermediate value of the conditional probability, 0 < Pr(H|E) < 1, means that the piece of evidence E “partially” implies the hypothesis H, i.e. the veracity (or validity) of E does not determine the logical value of H, but makes the latter more probable.
It seems that, using the concept of conditional probability, one can properly characterise the degree of rational conviction about the veracity of a hypothesis under test. In the absence of comprehensive evidence, researchers can never be sure of their hypotheses. The pieces of evidence resulting from observation or experiments enable them to discriminate hypotheses, i.e. to be more or less convinced about their veracity. The degree of the researchers’ conviction may be best measured by the tendency to act on the basis of the tested hypothesis: to accept bets on its validity50.
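The mechanics of such probabilistic confirmation can be illustrated by a minimal computational sketch (not taken from the cited literature): a prior degree of conviction Pr(H) is updated by Bayes’ theorem as successive pieces of evidence arrive. All prior and likelihood values below are hypothetical.

```python
# A minimal sketch of Bayesian updating of the degree of confirmation of a
# hypothesis H by successive pieces of evidence; all numbers are hypothetical.

def update(prior_H, p_E_given_H, p_E_given_not_H):
    """Return Pr(H|E) computed from Bayes' theorem."""
    p_E = p_E_given_H * prior_H + p_E_given_not_H * (1.0 - prior_H)
    return p_E_given_H * prior_H / p_E

pr_H = 0.30                                            # initial degree of conviction about H
evidence = [(0.90, 0.40), (0.80, 0.30), (0.95, 0.50)]  # pairs (Pr(E|H), Pr(E|not H))

for p_e_h, p_e_not_h in evidence:
    pr_H = update(pr_H, p_e_h, p_e_not_h)
    print(f"Pr(H|E) = {pr_H:.3f}")   # conviction grows as confirming evidence accumulates
```

The monotone growth of Pr(H|E) under confirming evidence is exactly the “partial implication” described above.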

48 cf. a summary in: R. Carnap, Logical Foundations of Probability, University of Chicago Press, Chicago 1950. 49 C. Howson, P. Urbach, Scientific Reasoning: The Bayesian Approach, Open Court Pub., Chicago – La Salle 1989. 50 cf. explication of this concept in S. Hartmann, J. Sprenger, “Bayesian epistemology”, Manuscript published on the internet, November 29, 2016, http://www.stephanhartmann.org/wp-content/up loads/2014/07/HartmannSprenger_BayesEpis.pdf [2017-01-10].


4.5.5 Specialisation

The collapse of the positivists’ domination in philosophy of science entailed its splintering into a number of subfields; those who continue to hold that there are general principles underlying various scientific methods are still active, but those for whom only local, context-specific approaches are feasible seem to prevail51. The positivists’ orientation towards the reduction of all sciences to physics has been replaced with a conviction that, at least in practice, this reduction cannot be achieved. There is now a specialised version of philosophy of science for almost every scientific discipline, e.g. the philosophy of biology or the philosophy of medical science – oriented towards problems and methods specific to that discipline. This trend has been accompanied by a diminished emphasis on grand unifying theories in favour of local models that capture, albeit imperfectly, the structure of specific subject-matter of research52.

51 P. W. Humphreys, “Science, Philosophy of”, 1995. 52 ibid.

5 Mathematical modelling

5.1 Models in science

The noun model is derived from the Latin modulus, being the diminutive form of modus (which means “measure” or “manner”). The following list of definitions, compiled on the basis of online dictionaries of the English language, shows the diversity of meanings attributed to this noun:
– a standard or example for imitation or comparison;
– a representation, frequently in miniature, intended to show the construction or appearance of something;
– a thing which accurately resembles or represents something else, especially on a small scale;
– a description or analogy used to help visualise something (such as an atom) that cannot be directly observed;
– a conceptual or mental representation of something;
– a graphical, symbolic or verbal representation of a concept, phenomenon, relationship, structure, system or an aspect of the real world;
– a computer representation or scientific description of something;
– an image in clay, wax or the like to be reproduced in more durable material;
– a person, or a work, that is proposed or adopted for imitation;
– a person or thing that serves as a subject for an artist (e.g. sculptor, painter or photographer);
– a person employed to display clothes by wearing them;
– a particular design or version of a product, especially a car;
– a simplified representation of a system or phenomenon, as in the sciences or economics, with any hypotheses required to describe the system or explain the phenomenon, often mathematically;
– a system of postulates, data and inferences presented as a mathematical description of an entity or state of affairs;
– a set of ideas and numbers that describe the past, present or future state of something;
– a system that is being used and that people might want to copy in order to achieve similar results;
– a hypothetical description of a complex entity or process; and
– an animal or plant to which another bears a mimetic resemblance.
It follows from these definitions that both material and immaterial (abstract) entities may be modelled by means of both material and immaterial (abstract) entities. If empirical sciences are concerned, modelling of material entities by means of both material and immaterial (abstract) entities is of primary importance. According to Kara Rogers,


scientific modelling is “the generation of a physical, conceptual, or mathematical representation of a real phenomenon that is difficult to observe directly”, and scientific models “are used to explain and predict the behaviour of real objects or systems [. . .] in a variety of scientific disciplines, ranging from physics and chemistry to ecology and the Earth sciences”1. The minimum pragmatic expectation with respect to a model is that the conclusions, drawn from some mental or physical operations performed on it, may be effectively applied to the modelled object or phenomenon. Example 5.1: The three-dimensional double-helix model of DNA is used primarily to visualise it. Predictive models, identified on the basis of historical data, are used for warning against earthquakes, tsunamis, epidemics and other large-scale disasters. The wave model and the particle model of light are two complementary models of the same phenomenon studied in quantum mechanics. The models of atmospheric and ocean phenomena, developed by the Earth sciences, are used for weather forecasting, and recently – also for understanding human-induced and non-human-induced climate changes. In ecology, modelling can be used for understanding dynamics of interactions between live organisms. Scientific modelling also has applications in urban planning and restoration of ecosystems.2

The common denominator of all recalled definitions is an explicit or implicit similarity relationship between an entity called model and an entity (or a class of entities) being modelled. In mathematics, the idea of similarity is instantiated by the concept of homomorphism. This is a relation-preserving mapping of two relational systems:

\[ h: \langle X;\; r_1^X, r_2^X, \ldots \rangle \to \langle Y;\; r_1^Y, r_2^Y, \ldots \rangle \]

where X is a set of elements x_1, x_2, ..., and r_1^X, r_2^X, ... are relations defined on those elements; Y is a set of elements y_1, y_2, ..., and r_1^Y, r_2^Y, ... are relations defined on those elements (cf. Figure 5.1). The mapping h should meet the following requirements:
– the corresponding relations r_i^X and r_i^Y (i = 1, 2, ...) have the same numbers of arguments;
– if some elements of the set X are bound by the relation r_i^X, then their images in the set Y are bound by the relation r_i^Y.
Some authors propose to use the concept of homomorphism (or even isomorphism3) as the basis for a general definition of the concept of model. Although this may work perfectly for abstract models of abstract objects, it seems to be logically problematic for

1 K. Rogers, “Scientific modeling”, [in] Encyclopædia Britannica, Encyclopædia Britannica, Inc., Chicago 2017, https://www.britannica.com/science/scientific-modeling [2017-06-01]. 2 based on: ibid. 3 cf. R. W. Batterman, “On the Explanatory Role of Mathematics in Empirical Science”, The British Journal for the Philosophy of Science, 2010, Vol. 61, pp. 1–25.; W. Bechtel, A. Abrahamsen, “Explanation: A Mechanistic Alternative”, Studies in History and Philosophy of the Biological and Biomedical Sciences, 2005, Vol. 36, pp. 421–441.; A. Bokulich, “How scientific models can explain”, Synthese, 2011, Vol. 180, No. 1, pp. 33–45.


Figure 5.1: Graphical illustration of the concept of homomorphism.

other types of models, in particular, for abstract models of material objects. The reason is that no satisfactory definition for any mapping of material systems onto abstract systems may be provided, and the relationship between them is a subject of fundamental controversy among the philosophers. Under such circumstances, the concept of homomorphism should remain only a conceptual aid in the understanding of modelling, and the concepts of model and modelling should be defined by the contexts of their use, considered to be correct, rather than by the closed-form specification of the necessary and sufficient conditions which should be satisfied by their referents.

If the operator h is invertible, i.e. h⁻¹ is also a homomorphism, then h is an isomorphism, and the properties of the system ⟨X; r_1^X, r_2^X, ...⟩ may be inferred from the properties of the system ⟨Y; r_1^Y, r_2^Y, ...⟩ by means of deductive reasoning. Otherwise, the inference must be based on abductive reasoning, and its results are ambiguous or uncertain.

In the computer age, mathematical models seem to be of primary importance in all scientific disciplines, but many other types of models have been used and are still used in research practice, e.g. physical models of physical objects and biological models of biological objects.

Example 5.2: The human organism is frequently modelled with an animal organism, selected according to its genetic, anatomical or physiological similarity to the human organism. The use of such models in research practice may be motivated not only by ethical reasons (cf. Subsection 16.2.1) but also by pragmatic considerations, e.g. by the possibility to shorten the research cycle. For example, the use of a mouse may shorten this cycle by the factor of ca. 50.4
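For abstract (finite) relational systems, the relation-preservation requirement stated above can be checked mechanically; the following sketch uses purely hypothetical sets and binary relations to illustrate the idea.

```python
# A minimal sketch (hypothetical data) of the relation-preservation requirement:
# h is a candidate homomorphism from <X; r_X> to <Y; r_Y>, where r_X and r_Y are
# binary relations given as sets of pairs.

X = {1, 2, 3, 4}
Y = {"a", "b"}
r_X = {(1, 2), (2, 3), (3, 4)}               # a binary relation on X
r_Y = {("a", "b"), ("b", "a"), ("b", "b")}   # a binary relation on Y
h = {1: "a", 2: "b", 3: "a", 4: "b"}         # candidate mapping h: X -> Y

def is_homomorphism(h, r_X, r_Y):
    """Whenever (x1, x2) are bound by r_X, their images must be bound by r_Y."""
    return all((h[x1], h[x2]) in r_Y for (x1, x2) in r_X)

print(is_homomorphism(h, r_X, r_Y))   # True for the sets chosen above
```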

Mathematical modelling of an object of interest is, roughly speaking, an operation (or a sequence of operations) aimed at construction of a mathematical structure, such as

4 N. G. Skene, M. Roy, S. G. N. Grant, “A genomic lifespan program that reorganises the young adult brain is targeted in schizophrenia”, eLife, 2017, Vol. 6, No. e17915, pp. 1–30.


a variable or equation, which in some way represents that object. Because of the fundamental epistemic uncertainty about the relationship between reality and the ideas assumed to describe it, there is no agreement among researchers and philosophers about what it means “to represent” a material object by means of a mathematical structure. However, regardless of the stance in this respect, they generally agree about the key role of mathematical models in science. The Hungarian-American mathematician, computer scientist and polymath John von Neumann (1903–1957) – who directly contributed to the development of mathematical models in quantum physics, computer science, fluid dynamics, game theory and economics – emphasised this role of models in science by saying: “The sciences do not try to explain, they hardly even try to interpret, they mainly make models”5. He understood a mathematical model as “a mathematical construct which, with the addition of certain verbal interpretations, describes observed phenomena”6. Due to the primordiality of this concept and the lack of its generally accepted intensional definition, it seems to be reasonable to assume that it is a primary concept which cannot be formally defined, although it may be, to a certain extent, explained by its relations to other fundamental concepts of philosophical and scientific language, as well as by examples of its application.

5.2 Methodology of mathematical modelling

The term mathematical modelling will be used hereinafter in reference to material objects only, since this type of modelling is of primary importance for technoscience. In this case, mathematical modelling sensu stricto is preceded by semantic modelling (cf. Figure 5.2), also called verbal modelling, which consists in describing a target material entity, called hereinafter the system under modelling (SuMo), by means of meaningful statements of a natural language. The Austrian philosopher Ludwig J. J. Wittgenstein expressed this idea in a very concise way: “The proposition is a model of the reality as we think it is”7. A natural language is a primary tool for producing semantic models. It may be enhanced with special technoscientific terminology – including acronyms (such as “DNA”), metaphors8 (such as “black hole”) and symbols – but it always preserves its natural flexibility, ambiguity and vagueness. The latter properties of a natural language, extensively exploited by poets, are usually perceived as its weakness by scientists, who try to mitigate them during the transformation of a semantic model into a mathematical model. It should be, however, noted that certain conceptual aspects of the semantic

5 J. von Neumann, “Method in the Physical Sciences”, [in] The Unity of Knowledge (Ed. L. G. Leary), Doubleday & Co., New York 1955, pp. 157–164. 6 ibid. 7 L. Wittgenstein, Tractatus Logico-philosophicus, Kegan Paul Pub., London 1922, § 4.01. 8 C. A. Scharf, “In Defense of Metaphors in Science Writing”, Scientific American, July 9, 2013.


Figure 5.2: General scheme of mathematical modelling.

model may be lost if this transformation is carried out prematurely. The formulation of a natural-language description of the modelled object consists in abstraction, i.e. the removal, in thought, of some characteristics or features or properties of the SuMo that are not relevant to the aspects of its behaviour under study. It is worth noting that abstraction is an important operation not only in the identification of models but also in the formation of general concepts out of individual instances9.

Mathematical modelling sensu stricto of a physical object (phenomenon) consists in the “homomorphic” mapping of those significant features and relationships onto a selected (abstract) relational system. Let the SuMo be an object, or a phenomenon, or an event, or a process etc. of physical, or chemical, or biological, or psychological, or sociological, or economical, or mixed nature. Let a mathematical structure – composed of entities such as numbers, variables, sets, equations, functions and operators – be used for modelling the SuMo with the purpose to infer about its properties or behaviour under various conditions. The identification of the SuMo itself, i.e. the distinction between this system and its surroundings, is not objectively given, but is already part of the modelling process. Most frequently, the boundaries of the SuMo are determined by abrupt changes of mass, energy or information density. The existence of those boundaries does not mean that there is no interaction between the SuMo and its surroundings. On the contrary, an exchange of mass, energy or information is going on, e.g. the energy of thermal radiation or of electric current is delivered, raw materials are supplied and industrial products are taken away. The boundary points or areas where the exchange is taking place are called ports. The flow of energy or mass through those ports is characterised by means of quantities, such as flux of

9 S. Psillos, Philosophy of Science A–Z, Edinburgh University Press, Edinburgh (UK) 2007, p. 6.


energy, flux of mass, flux of volume or density of energy, field of flow velocity or electric field strength. The ports are often described using a pair of associated quantities (such as force and velocity or current and voltage) whose product characterises the power flowing through a port. On the whole, the quantities describing ports vary in time and space.

The next step of semantic modelling of the SuMo is aimed at distinguishing two categories of quantities describing ports:
– input quantities, as a rule identified with the causes of phenomena, events or processes in the SuMo;
– output quantities, identified with the manifestations of phenomena, events or processes in the SuMo.
The input quantities are next subdivided into desirable and undesirable input quantities; among the latter, controllable input quantities (called influence quantities) and uncontrollable input quantities (called disturbances) are distinguished. The disturbances are most frequently assigned to the outputs, although they may appear at various points of the SuMo. This type of semantic modelling does not cover all possible methodologies met in current research practice, but it is applicable to the vast majority of mathematical-modelling problems which appear in technoscience. It is an adequate tool for modelling both causal- and correlation-type relationships.

Example 5.3: The network of roads of Poland may be modelled by a graph whose nodes represent towns and villages, and are numbered 1, 2, ..., while edges are associated with the variables l(i, j) modelling the distance between those towns and villages. If no more properties are associated with the nodes or edges of this graph, the model cannot be considered as an input-output model (this observation applies to any road atlas being a graphical model of an area on the globe or a mathematical model after digitalisation). If, however, the border nodes are associated with the flux of cars crossing the borders, one may consider the border nodes as ports, and analyse the statistics of cars crossing the border. Further enhancement of this model can include the time dependence of that flux, diversification of cars according to the country of origin, manufacturer, power etc.

The semantic model of the SuMo, being its description in terms of its features which are considered to be important for the given application, is next translated into a more formal language of quantities which are idealised features of the system, obtained by means of abstraction; here, the definition of the sensu stricto mathematical model begins. It is an iterative procedure comprising two fundamental operations, as shown in Figure 5.3:
– structural identification, i.e. the selection of a mathematical structure for the model (most frequently, a type of function or equation, e.g. a linear algebraic equation);
– parametric identification, i.e. the estimation of the model parameters (e.g. the coefficients of that equation).


Figure 5.3: Procedure of mathematical modelling.

The first operation can hardly be algorithmised: the choice of the model structure is usually based on some intuitive premises, anterior experience and trial-and-error steps. On the other hand, the second operation is the subject of advanced algorithmisation. By its very nature, a model provides only an approximate description of the properties and behaviour of the SuMo. Both model structure and parameters are affected by the trade-off between simplicity and informativeness. The model is subject to structural inadequacies resulting from the limitations of the available knowledge on the SuMo, from neglecting some factors during the selection of the quantities (input, output and influence quantities) modelling the SuMo, or from the inappropriate specification of such quantities, or from the inappropriate choice of the equations modelling the relationships among those quantities. It is also subject to inaccuracies in the parameter estimates due to limited accuracy of the parameter identification method, or errors of its technical implementation, or errors in the data used for identification. The assessment of such inadequacies and inaccuracies is highly problematic in practice, as they may be estimated only by means of an extended model of the SuMo, i.e. a model that is structurally richer or more exact in its parameter values than the model under consideration. Some other strategies for model


assessment can also be envisaged (in particular, the strategies based on sameness of results obtained from empirically independent models and effectiveness of model-based decision processes10), but all of them take into account that the model quality must be assessed in relation to an external and independent reference. Even in the most trivial situations, a mathematical model may fail to carry full information on the SuMo. Therefore, a general criterion for evaluation of the quality of a mathematical model should refer to the trade-off between simplicity (plausibly inversely related to modelling costs) and informativeness: a mathematical model should be as simple as possible, but sufficiently informative for its target application.

Example 5.4: The formula u = Ri – put forward by Georg S. Ohm in 1827, and therefore called Ohm’s law – is the simplest mathematical model of the resistor depicted in Figure 5.4. It is sufficient for the analysis of static or low-frequency behaviour of the resistor, provided the latter is a high-precision device, and its “behaviour” is understood as the change of the electrical voltage u in response to the change of the current i. The symbol R denotes the resistance, being the only parameter of this model.

Figure 5.4: Resistor (marked 1 kΩ) being the object of mathematical modelling.

For the analysis of static or low-frequency behaviour of a low-precision resistor, an algebraic nonlinear model may turn out to be necessary. For the analysis of its high-frequency behaviour, a model having the form of a system of nonlinear ordinary differential equations should be used. A model reflecting only electrical phenomena in a resistor may be sufficient for the functional design of an electrical filter, but not sufficient for its manufacturing design since thermal phenomena and geometrical features should be taken into account at this stage of its development. The manufacturing design of the filter may also require randomisation of some parameters of the model when a kind of tolerance of the designed filter with respect to scattering of manufacturing conditions is expected.
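As an illustration of the two identification steps distinguished above – the structure u = Ri chosen a priori, the parameter R estimated from data – the following sketch fits the resistance to simulated current–voltage measurements; all numerical values are hypothetical.

```python
# A minimal sketch of parametric identification for the structure u = R*i chosen
# in Example 5.4: the single parameter R is estimated from simulated (hypothetical)
# current-voltage measurements by the ordinary least-squares method.
import numpy as np

rng = np.random.default_rng(0)
R_true = 1000.0                                   # "true" resistance, 1 kOhm
i = np.linspace(0.001, 0.010, 20)                 # programmed currents [A]
u = R_true * i + rng.normal(0.0, 0.05, i.size)    # measured voltages with random errors [V]

# Least-squares estimate of the single parameter: R = sum(i*u) / sum(i^2)
R_hat = np.sum(i * u) / np.sum(i ** 2)
print(f"Estimated resistance: {R_hat:.1f} Ohm")
```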

The orientation of the model identification on a target application should be a guiding principle of the choice of an adequate structure of the model and the choice of the criteria for estimation of its parameters and for evaluation of its performance. If, for example, the model is intended for simulation-type applications, then its

10 L. Mari, V. Lazzarotti, R. Manzini, “Measurement in soft systems: epistemological framework and a case study”, Measurement, 2009, Vol. 42, pp. 241–253.


accuracy is more important than its complexity; if it is intended for exploration-type applications, then its simplicity will have priority11. In the case of input-output models, the choice of the criteria, used in parametric identification and assessment of the model, depends on whether the model is intended for solving forward problems (determination of the outputs on the basis of data representative of the inputs) or for solving inverse problems (determination of the inputs on the basis of data representative of the outputs). In the first case, the priority should be given to criteria defined in the domain of output quantities; in the second case – to criteria defined in the domain of input quantities12.

5.3 Typology and examples of mathematical models

A mathematical model is a set of variables (with the domains of their variability) and a set of relations (mappings or equations) linking those variables. Consequently, the mathematical structures used for mathematical modelling may be classified according to the typology of variables and relations. The variables include scalars, vectors, matrices, etc., which may be ordinary, random, fuzzy, etc. An ordinary variable x is defined if its domain X, i.e. a set of values it may assume, is specified; this can be a subset of a Cartesian product of the sets of integer, real and complex numbers. The relations include mappings (functions, functionals, operators, etc.) and equations, e.g. logical equations, algebraic equations, ordinary differential equations, partial differential equations or mixed equations.

Depending on the origin, structure and quantity of information used for modelling, input-output models may be of black-box type, white-box type or grey-box type. A black-box model represents only the relationships among the quantities associated with the ports of the SuMo, and is identified on the basis of (measurement) data representative of those quantities. A white-box model represents, moreover, the internal structure of the SuMo, and is identified under an assumption of the availability of the data representative of both the quantities associated with the ports and of the quantities characterising the elements the SuMo is composed of. A grey-box model differs from white-box models in the limited representation and measurement attainability of internal

11 A. B. Murray, “Contrasting the Goals, Strategies, and Predictions Associated with Simplified Numerical Models and Detailed Simulations”, Geophysical Monograph “Prediction in Geomorphology”, 2003, Vol. 135, pp. 1–15. 12 cf. R. Z. Morawski, A. Podgórski, “Choosing the Criterion for Dynamic Calibration of a Measuring System”, Proc. 5th IMEKO TC4 Int. Symposium (Vienna, Austria, April 8–10, 1992), pp. III.43–50.


quantities. Due to the transitiveness of the concept of system, those definitions should be considered with certain reservation: any system is a set of elements interconnected in such a way as to perform a common function13; but each element may be considered as a system of lower-level elements. Consequently, what is considered white-box modelling at a certain level of system organisation may be viewed as the instance of grey-box modelling at a lower level. For this reason, a more practical definition of a white-box model would be that limiting the methods of its identification to those which do not require ad hoc measurement data, i.e. to those which entirely rely on the use of scientific laws and theories (and indirectly to measurement data which were used in the past for their confirmation).

Example 5.5: Living cells break down glucose to produce carbon dioxide and water in a complex process called glycolysis that involves several enzyme-catalysed reactions. During each of such reactions, an enzyme (E) catalyses the conversion of a substrate (S) to form a product (P) via the formation of an intermediate complex (ES). According to the law of mass action, the rate of a chemical reaction is directly proportional to the product of the activities or concentrations of the reactants. Using this law, one may compile the following mathematical model of the enzyme-catalysed reaction under consideration:

\[ \frac{d[S]}{dt} = -k_1 [S][E] + k_{-1} [ES] \]
\[ \frac{d[E]}{dt} = -k_1 [S][E] + k_{-1} [ES] + k_2 [ES] \]
\[ \frac{d[ES]}{dt} = k_1 [S][E] - k_{-1} [ES] - k_2 [ES] \]
\[ \frac{d[P]}{dt} = k_2 [ES] \]

where t is time; [S], [E], [ES] and [P] are concentrations of S, E, ES and P, respectively; k_1, k_{-1} and k_2 are affinity constants.14
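A white-box model of this kind is typically exploited by numerical integration. The following sketch integrates the four equations above with scipy; the constant values and initial concentrations are hypothetical and serve only to make the example runnable.

```python
# A minimal sketch of numerical integration of the enzyme-kinetics model above;
# the constants and initial concentrations are hypothetical.
import numpy as np
from scipy.integrate import solve_ivp

k1, k_1, k2 = 1.0, 0.5, 0.3          # hypothetical constants

def rhs(t, c):
    S, E, ES, P = c
    dS = -k1 * S * E + k_1 * ES
    dE = -k1 * S * E + k_1 * ES + k2 * ES
    dES = k1 * S * E - k_1 * ES - k2 * ES
    dP = k2 * ES
    return [dS, dE, dES, dP]

c0 = [1.0, 0.1, 0.0, 0.0]            # initial [S], [E], [ES], [P]
sol = solve_ivp(rhs, (0.0, 50.0), c0, t_eval=np.linspace(0.0, 50.0, 11))
print(sol.y[3])                      # concentration of the product P over time
```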

Ordinary differential equations are adequate mathematical structures for modelling relationships among functions of a single variable, which most often (but not exclusively) models time. Partial differential equations are most frequently used for modelling relationships among functions of at least two variables, e.g. variables modelling time and a space dimension.

13 from Greek systema = “organised whole” or “whole compounded of parts”. 14 S. M. Dunn, A. Constantinides, P. V. Moghe, Numerical Methods in Biomedical Engineering, Elsevier Academic Press, Burlington – San Diego – London 2006, pp. 229–232.


Example 5.6: Here are two examples of using partial differential equations for white-box modelling of sensors:
– In the case of a resistive sensor of oxygen concentration, the response is modelled by means of a linear first-order partial differential equation, nonlinear with respect to a variable modelling particle positions15.
– In the case of a spectrophotometric sensor for measuring the concentration of HCl vapours, the absorbance response is modelled by means of four linear second-order partial differential equations that describe the dynamic behaviour of that concentration and of three other concentrations of ions involved in the sensing process16.

The corresponding practical definition of a black-box model would be that limiting the methods of its identification to those that do not require any reference to scientific laws and theories, i.e., entirely rely on the use of ad hoc measurement data. The design and functioning of control systems for complex industrial objects is based on their mathematical models. Most frequently, these are black-box models which are identified using measurement data acquired during the normal functioning of those objects. A broad variety of control problems may be solved using differential or integral equations modelling the relationships between controllable input quantities and measurable output quantities. Because of practical importance of this problem, there is abundant literature devoted to the methodology of this kind of mathematical modelling17. The scope of applications of this methodology is today much broader than control engineering; it includes, in particular, modelling of dynamical processes in physiology, biochemistry, biophysics and electronics. The practical definition of a grey-box model, consistent with the definitions of white-box and black-box models, should combine the strong points of both types of models, i.e., emphasise the use of scientific laws and theories with ad hoc measurement data. Various combinations are possible, but the use of laws and theories for structural identification and ad hoc data for parametric identification seems to be quite straightforward.

15 N. Izu, W. Shin, N. Murayama, “Numerical analysis of response time for resistive oxygen gas sensor”, Sensors and Actuators B: Chemical, 2002, Vol. B87, No. 1, pp. 99–104. 16 J. A. Morales, J. F. Cassidy, “Model of the response of an optical sensor based on absorbance measurement to HCl”, Sensors and Actuators B: Chemical, 2003, Vol. B 92, No. 3, pp. 345–350. 17 e.g. the books: S. A. Billings, Nonlinear System Identification: NARMAX Methods in the Time, Frequency, and Spatio-temporal Domains, Wiley & Sons, Chichester (UK) 2013; Y. Boutalis, D. Theodoridis, T. Kottas, M. A. Christodoulou, System Identification and Adaptive Control, Springer, Cham (Switzerland) 2014; W. Greblicki, M. Pawlak, Nonparametric System Identification, Cambridge University Press, Cambridge (UK) 2008; R. Pintelon, J. Schoukens, System Identification: A Frequency Domain Approach, Wiley & Sons, Hoboken (USA) 1991; O. Nelles, Nonlinear System Identification: From Classical Approaches to Neural Networks and Fuzzy Models, Springer, Berlin – Heidelberg 2001; K. J. Keesman, System Identification: An Introduction, Springer, London 2011.


Example 5.7: A thermistor is a kind of resistor whose resistance R is significantly dependent on temperature T; in the case of the so-called NTC (negative temperature coefficient) thermistor, it decreases as temperature rises. The Steinhart–Hart equation is a widely used theoretical model of that dependence:

\[ p_0 + p_1 \ln(R) + p_3 \left(\ln(R)\right)^3 = T^{-1} \quad\text{or}\quad R = \arg_r \left\{\, p_0 + p_1 \ln(r) + p_3 \left(\ln(r)\right)^3 = T^{-1} \,\right\} \]

where p_0, p_1 and p_3 are constant parameters. Their values may be determined theoretically, but better accuracy of approximation is attained if they are estimated on the basis of the measurement data acquired for each device. The ordinary least-squares estimator may be used for this purpose. Alternatively, the dependence of R on T may be approximated by an algebraic polynomial, but it will be impossible to attain comparable accuracy regardless of the order of the polynomial.
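Since the parameters p_0, p_1 and p_3 enter the Steinhart–Hart equation linearly (once 1/T is treated as the dependent variable), their estimation reduces to ordinary least squares. The sketch below uses synthetic calibration data; the resistance values are hypothetical.

```python
# A minimal sketch of parametric identification of the Steinhart-Hart model:
# 1/T = p0 + p1*ln(R) + p3*(ln(R))^3 is linear in p0, p1, p3, so the parameters
# can be estimated by ordinary least squares. The calibration data are synthetic.
import numpy as np

T = np.array([273.15, 298.15, 323.15, 348.15, 373.15])     # temperatures [K]
R = np.array([32650.0, 10000.0, 3603.0, 1481.0, 678.0])    # hypothetical NTC resistances [Ohm]

lnR = np.log(R)
A = np.column_stack([np.ones_like(lnR), lnR, lnR ** 3])    # design matrix
p, *_ = np.linalg.lstsq(A, 1.0 / T, rcond=None)            # p = (p0, p1, p3)
print(p)

# Predicted temperature for a new resistance reading:
R_new = 5000.0
T_pred = 1.0 / (p[0] + p[1] * np.log(R_new) + p[2] * np.log(R_new) ** 3)
print(f"T({R_new} Ohm) ~ {T_pred:.2f} K")
```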

The grey-box model of a SuMo may result from mixing white-box models with black-box models of its elements. An example of such an approach, when applied to a spectrophotometric transducer, may be found in an author’s article published in 199918. Among all the mathematical models which appear in technoscience, the spatiotemporal models play a dominant role because dynamic processes are of particular cognitive importance. In this context, the following black-box model structure, called Volterra series, is worth being mentioned:

\[ y(t) = g_0(t) + \sum_{k=1}^{K} \int_{-\infty}^{+\infty} \cdots \int_{-\infty}^{+\infty} g_k(\tau_1, \ldots, \tau_k) \prod_{\kappa=1}^{k} x(t - \tau_\kappa)\, d\tau_\kappa \]

where t is a real-valued variable modelling time, x(t) is the scalar function modelling the input of a time-invariant SuMo and y(t) is the scalar function modelling the output of this system. The functions g_k(τ_1, ..., τ_k) for k = 1, ..., K are called Volterra kernels and play the role of generalised parameters of the model to be estimated during its parametric identification. For K = 1, this series takes on the form:

\[ y(t) = g_0(t) + \int_{-\infty}^{+\infty} g(\tau)\, x(t - \tau)\, d\tau \]

which is an adequate model of various dynamic processes being of interest for many chapters of technoscience. If g(t) = 0 and x(t) = 0 for t < 0, then this convolution-type equation takes on the form:

\[ y(t) = g_0(t) + \int_{0}^{t} g(\tau)\, x(t - \tau)\, d\tau \]

which is an adequate model of causal relationships between x(t) and y(t).
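For sampled data, the first-order (convolution) model becomes a finite sum, and its generalised parameters – the kernel samples – can be estimated by ordinary least squares from input–output records. The following sketch uses a synthetic kernel and a synthetic, noise-free input; it only illustrates the black-box identification scheme, not a production algorithm.

```python
# A minimal sketch of black-box parametric identification of the first-order
# (convolution) model: samples of the kernel g are estimated from input-output
# records by least squares. Kernel and data below are synthetic.
import numpy as np

rng = np.random.default_rng(1)
g_true = np.exp(-np.arange(20) / 5.0) / 5.0     # "true" kernel samples (unknown in practice)
x = rng.standard_normal(200)                    # recorded input samples
y = np.convolve(x, g_true)[: x.size]            # recorded output samples (noise-free here)

# Design matrix of delayed input samples: y[n] = sum_m g[m] * x[n-m]
M = g_true.size
X = np.column_stack([np.concatenate([np.zeros(m), x[: x.size - m]]) for m in range(M)])
g_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

print(np.max(np.abs(g_hat - g_true)))           # close to zero for noise-free data
```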

18 M. P. Wisniewski, R. Z. Morawski, A. Barwicz, “Modeling the Spectrometric Microtransducer”, IEEE Transactions on Instrumentation and Measurement, 1999, Vol. 48, No. 3, pp. 747–752.


Example 5.8: Let us consider a low-pass electrical filter whose graphical model is shown in Figure 5.5. It is composed of a resistor (whose resistance is R) and a capacitor (whose capacitance is C). Its white-box mathematical model, representing physical relationships among the currents and voltages, may be compiled on the basis of the following premises:
– According to Ohm’s law, the voltage on the resistor u_R(t) is proportional to the current i_R(t) flowing through it: u_R(t) = R · i_R(t), where t is a scalar real-valued variable modelling time.
– According to the definition of capacitance, the change in voltage on the capacitor u_C(t) is proportional to the change in electrical charge in it; hence, the current in the capacitor i_C(t) is proportional to the derivative u′_C(t) of u_C(t) with respect to the variable t: i_C(t) = C · u′_C(t).
– According to Kirchhoff’s current law, the algebraic sum of currents flowing into a node is equal to zero.
– According to Kirchhoff’s voltage law, the algebraic sum of voltages in a loop is zero.

Figure 5.5: Low-pass electrical filter (resistor R, capacitor C; input voltage u1(t), output voltage u2(t)).

After elimination of all variables except u1(t) and u2(t), the mathematical model of the filter takes on the form of an ordinary differential equation:

\[ T u_2'(t) = -u_2(t) + u_1(t) \quad\text{with}\quad T \equiv RC \]

The explicit solution of this equation with respect to the filter response u2(t) for a given u1(t) has the form:

\[ u_2(t) = u_2(0)\, e^{-t/T} + \frac{1}{T} \int_{0}^{t} u_1(\tau)\, e^{-(t-\tau)/T}\, d\tau \quad\text{for } t > 0 \]

where u2(0) is the initial value of u2(t), which is assumed to be known. Thus, the generalised parameters of the equivalent black-box model may be computed according to the formulae:

\[ g_0(t) \equiv u_2(0)\, e^{-t/T} \quad\text{and}\quad g(t) \equiv \frac{1}{T}\, e^{-t/T} \quad\text{for } t > 0 \]
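The equivalence of the two formulations of Example 5.8 can be verified numerically. The following sketch integrates the differential equation by the explicit Euler method and compares the result with the convolution form; the time constant, input signal and step size are hypothetical.

```python
# A minimal sketch comparing the white-box model of the RC filter (the ordinary
# differential equation, integrated by the explicit Euler method) with the
# equivalent convolution-type form; parameter and signal values are hypothetical.
import numpy as np

T = 1.0                                   # time constant T = R*C
dt = 1e-3
t = np.arange(0.0, 10.0, dt)
u1 = np.where(t < 5.0, 1.0, 0.0)          # a rectangular input voltage

# White-box model: T*u2' = -u2 + u1, integrated by the explicit Euler method
u2_ode = np.zeros_like(t)
for n in range(1, t.size):
    u2_ode[n] = u2_ode[n - 1] + dt * (-u2_ode[n - 1] + u1[n - 1]) / T

# Equivalent black-box form (zero initial condition): convolution of u1 with g
g = np.exp(-t / T) / T
u2_conv = np.convolve(u1, g)[: t.size] * dt

print(np.max(np.abs(u2_ode - u2_conv)))  # small discrepancy due to discretisation only
```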

5.4 Computational modelling

A computer may be used as a general-purpose carrier of any abstract model, i.e., conceptual, linguistic, mathematical or mixed model. The key advantage of such an arrangement consists in the possibility to use the same computer for various manipulations over that model: for its analyses and transformations, as well as


for deductive, inductive and abductive reasoning. What makes, however, a qualitative difference in comparison with the traditional mode of using mathematical models is the possibility of enabling them to “learn” from experience. This is a major topic of the field of study called machine learning (ML), being part of an interdiscipline called artificial intelligence. According to the American computer scientist Tom M. Mitchell (*1951), this field of study may be best defined by the central question it is focused on, viz.: “How can we build computer systems that automatically improve with experience, and what are the fundamental laws that govern all learning processes?”19. This question refers to a broad range of learning tasks, including:
– the design of autonomous mobile robots that learn to navigate from their own experience;
– the exploration of historical medical records, aimed at learning which future patients will respond best to which treatments.20

19 T. M. Mitchell, “The Discipline of Machine Learning”, Tom Mitchell’s Website, 2006, http:// www.cs.cmu.edu/~tom/pubs/MachineLearning.pdf [2017-06-02]. 20 ibid. 21 M. I. Jordan, T. M. Mitchell, “Machine learning: Trends, perspectives, and prospects”, Science, 2015, Vol. 349, No. 6245, pp. 255–260. 22 cf. the website of Sloan Digital Sky Surveys, located at http://www.sdss.org/surveys/ [2017-06-04].


complex patterns of human brain activation correlated with different cognitive states of individuals subject to functional magnetic resonance image (fMRI) scanning23. Among the ML tools, the algorithms for supervised learning of classification (e.g. Bayesian classifiers or artificial neural networks) and smoothing approximation (e.g. support vector machines or artificial neural networks) are probably most widely applied in various fields of technoscience. Example 5.9: In a series of articles devoted to monitoring of elderly and disabled persons, the following ML tools have been used for fall detection: neural networks, decision trees, support vector machines, naïve Bayes classifiers and deep learning classifiers.24

Artificial neural networks are universal approximators whose parameters may be adjusted on the basis of empirical data carrying information on the approximated operator. As a rule, two sets of such data are deployed for identification of the model: the training set, which is used at the stage of parametric identification of the model, and the validation set, which is used at the stage of assessment of the model (cf. Figure 5.3). Artificial neural networks may be used both for modelling a static SuMo (whose traditional mathematical models have the form of algebraic equations) and a dynamic SuMo (whose traditional mathematical models have the form of more complex structures, such as integral or differential operators). The trained network is providing an approximation of the response of the SuMo to any input belonging to the class of inputs used for its training.

23 T. M. Mitchell, “The Discipline of Machine Learning”, 2006. 24 S. Jankowski, Z. Szymański, U. Dziomin, P. Mazurek, J. Wagner, “Deep Learning Classifier for Fall Detection Based on IR Distance Sensor Data”, Proc. IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (Warsaw, Poland, Sep. 24–26, 2015), pp. 723–727; S. Jankowski, Z. Szymański, P. Mazurek, J. Wagner, “Neural Network Classifier for Fall Detection Improved by Gram-Schmidt Variable Selection”, Proc. IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (Warsaw, Poland, Sep. 24–26, 2015), pp. 728–732; P. Mazurek, A. Miękina, R. Z. Morawski, “Comparative study of three algorithms for estimation of echo parameters in UWB radar module for monitoring of human movements”, Measurement, 2016, Vol. 88, pp. 45–57; J. Wagner, P. Mazurek, A. Miekina, R. Z. Morawski, F. F. Jacobsen, T. Therkildsen Sudmann, I. Træland Børsheim, K. Øvsthus, T. Ciamulski, “Comparison of two techniques for monitoring of human movements”, Measurement, 2017, Vol. 111, pp. 420–431; P. Mazurek, J. Wagner, R. Z. Morawski, “Use of kinematic and mel-cepstrum-related features for fall detection based on data from infrared depth sensors”, Biomedical Signal Processing and Control, 2018, Vol. 40, pp. 102–110.


Example 5.10: A simple RC filter from Example 5.8 has been modelled by means of a NARX Feedback Neural Network, available in MATLAB25. A network – with 22 hidden neurons, a single output neuron and the default delay – has been designed for this purpose. In Figure 5.6, the results of two numerical experiments are presented. The training data used in experiment no. 1 have turned out to be not rich enough (in terms of the frequency spectrum and morphological details in the time domain) to enable the network to provide a correct response u2(t) to a relatively simple (composed of two triangles) input voltage u1(t). The training data used in experiment no. 2 have turned out to be sufficient for this purpose.

[Figure 5.6 consists of four panels – training and validation data for experiments no. 1 and no. 2 – with voltage [V] plotted against time [T].]

Figure 5.6: Data used in the numerical experiments with a NARX neural network and the results of those experiments: (a)–(d) the exact values of u1(t) (black lines), the exact values of u2(t) (red lines), the estimated values of u2(t) (blue dashed lines).

25 cf. the article “Design Time Series NARX Feedback Neural Networks” in the MATLAB documentation, available at https://www.mathworks.com/help/nnet/ug/design-time-series-narx-feedbackneural-networks.html?requestedDomain=www.mathworks.com [2018-06-30].


At first glance, the use (in the above example) of a neural network for modelling the RC filter from Example 5.8 does not bring any advantage in terms of accuracy or computational complexity. This impression may change if the modelled filter is composed of 100 capacitors and 100 resistors; the complexity of the white-box model would then increase significantly (from a single first-order differential equation to 100 such equations), while the complexity of the black-box computational model could remain unchanged or grow only imperceptibly. From a methodological (philosophical) point of view, it is important to note that the training of a neural network (or another ML tool) is an inductive process which as such cannot guarantee the 100% performance (reliability) of the trained network (or another ML tool). Since the probability of the reliable behaviour of the trained ML tools may be increased without limit, provided the access to new experimental data is unlimited, those tools are applied today in such demanding projects as monitoring of human health or safety of aircraft. As a rule, the data used for training ML tools are uncertain, i.e., subject to systematic and random errors. This means, on the one hand, that some informational redundancy is desirable; on the other hand – that its introduction may increase the risk of model overfitting, which consists in the adjustment of the model to the image of the SuMo distorted by those errors26.

Example 5.11: An attempt to increase the accuracy of modelling the RC filter from Example 5.8 by means of a NARX Feedback Neural Network with an increased number of hidden neurons (model parameters), when the training data are subject to considerable random errors, may result in noise-type distortions in the responses of the trained network to inputs even slightly different from those used for training.
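The overfitting effect mentioned above can be reproduced with a much simpler model class than a neural network. In the following sketch, a low-order and a high-order polynomial are fitted to the same noisy samples of a hypothetical characteristic; the high-order fit typically follows the random errors and behaves worse on new data.

```python
# A minimal sketch of overfitting: increasing the number of model parameters
# (polynomial order) typically makes the model follow the random errors in the
# training data and degrades its accuracy on new data. All data are synthetic.
import numpy as np

rng = np.random.default_rng(2)
x_train = np.linspace(0.0, 1.0, 15)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0.0, 0.1, x_train.size)
x_test = np.linspace(0.0, 1.0, 200)
y_test = np.sin(2 * np.pi * x_test)

for order in (3, 12):
    p = np.polyfit(x_train, y_train, order)
    rms = np.sqrt(np.mean((np.polyval(p, x_test) - y_test) ** 2))
    print(f"order {order:2d}: RMS error on new data = {rms:.3f}")
```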

5.5 Cognitive status of mathematical modelling

5.5.1 Adequacy and accuracy of mathematical models

By the very concept and potential utility of mathematical modelling, only some aspects of the SuMo are represented by any, even very sophisticated, model.

26 P. A. Flach, “On the state of the art in machine learning: A personal review”, Artificial Intelligence, 2001, Vol. 131, pp. 199–222.


Example 5.12: Depending on the needs, a cone standing on a floor may be modelled by means of a circle on the floor surface (x–y plane) or by means of a triangle on the wall surface (y–z plane) – both being projections of the cone. Other geometrical shapes are possible if it is oriented differently, but in any case the three-dimensional shape of the cone may be reconstructed if three projections (on three linearly independent planes) are available. In the case of non-geometrical modelling, the number of dimensions may be greater, and we do not know a priori how many of them would be sufficient.

The mathematical model of a SuMo – being a real object, phenomenon, event or process – is always underdetermined by the available data. Thus, the problem of mathematical modelling is always underdetermined because in any non-trivial case infinitely many equivalent models may be obtained, the models differing both in structure and in parameter values.

Example 5.13: A sequence of empirical data may be interpolated by infinitely many functions. The set of admissible functions, satisfying the interpolation conditions, may be constrained if some a priori information on the SuMo is available. In Figure 5.7, five functions interpolating a very simple sequence of time samples of an electromagnetic signal, {⟨0, 3⟩, ⟨2, 3⟩, ⟨4, 3⟩, ⟨6, 3⟩, ⟨8, 3⟩, ⟨10, 3⟩}, are shown; many more may be imagined. One can, however, exclude some of them when knowing that the instantaneous power of a modelled signal cannot exceed Pmax, which may be estimated on the basis of some theoretical premises, or that the frequency bandwidth of that signal must be narrower than fmax for some physical reasons.

Figure 5.7: Five functions interpolating a time sequence of samples (marked with circles) of an electromagnetic signal; signal magnitude is plotted against time.
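The underdetermination illustrated by Figure 5.7 is easy to reproduce: the family of functions sketched below (a constant plus sinusoids vanishing at the sampling instants) is one of infinitely many choices that reproduce the listed samples exactly; it is not claimed to be the family actually plotted in the figure.

```python
# A minimal sketch of underdetermination: infinitely many functions pass exactly
# through the samples {<0,3>, <2,3>, ..., <10,3>}; the family below is one
# hypothetical choice.
import numpy as np

t_samples = np.arange(0.0, 11.0, 2.0)     # sampling instants 0, 2, ..., 10
values = 3.0 * np.ones_like(t_samples)    # all sampled magnitudes equal 3

def interpolant(t, k, a):
    """Equals 3 at every sampling instant for any integer k and amplitude a."""
    return 3.0 + a * np.sin(k * np.pi * t / 2.0)

for k, a in [(0, 0.0), (1, 1.0), (2, 2.0), (3, 0.5)]:
    assert np.allclose(interpolant(t_samples, k, a), values)
print("all candidate functions reproduce the samples exactly")
```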


In general, the mathematical model is identified on the basis of a set of measurement data acquired on SuMo, the set which – as a rule – contains only a fraction of potentially available information on SuMo. Any change of that set may imply a change in the model: the change in the values of its parameters, even if its structure remains unchanged. The sensitivity of the parameters to the changes in the data, used for model identification, may serve as a criterion for designing experiments aimed at acquiring the sets of data rich enough to be considered representative of SuMo. The use of such sets for parametric identification will still generate parameters subject to certain dispersion, but its magnitude may be kept within tolerable limits of uncertainty. It follows from the general scheme of mathematical modelling, shown in Figure 5.3, that its cognitive status critically depends on the cognitive status of measurement since the latter is a source of data for parametric identification and for the assessment of the model. According to a famous opinion expressed by the Scottish-Irish physicist William Thomson27 (1824–1907): […] when you can measure what you are speaking about and express it in numbers you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind: it may be the beginning of knowledge, but you have scarcely, in your thoughts, advanced to the stage of science, whatever the matter may be.28

This naïve and optimistic view is quite frequently repeated also today, usually with the intention to stress the importance of measurement science and technology. It will be, however, shown in the next chapter that the whole measurement enterprise is based on a number of international conventions concerning the system of fundamental quantities, their units and their institutional control. It will also be shown that the system of all measurement quantities is based on mathematical models of the relationships among them, which means that the cognitive status of measurement critically depends on the cognitive status of mathematical modelling.

A SuMo whose reaction to external stimuli depends not only on those stimuli but also on the history of its functioning is called a dynamical system. It cannot be adequately modelled with a set of algebraic equations, even if they include a variable modelling time: a set of difference or differential equations is necessary for this purpose. Less evident is the insufficiency of algebraic polynomials for modelling time-independent relationships between quantities.

27 Since 1892 known as Lord Kelvin (knighted in 1866).
28 W. Thomson, Popular Lectures and Addresses – Vol. I, MacMillan & Co., London – New York 1889, https://archive.org/details/popularlecturesa01kelvuoft [2017-06-25], p. 73.


Example 5.14: In Figure 5.8, the result of least-squares approximation of the dependence of the resistance R on temperature in an NTC thermistor from Example 5.7, by means of an algebraic polynomial of order 6, is shown. It turns out that neither a decrease nor an increase in the order of the polynomial improves the accuracy of approximation; nor can the accuracy be improved by increasing the number of data points used for approximation.

Figure 5.8: Result of least-squares approximation of the dependence of resistance on temperature in an NTC thermistor (reference characteristic versus its polynomial approximation, logarithmic resistance scale).
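The limitation stated in Example 5.14 is easy to reproduce numerically. The sketch below is only an illustration under assumed conditions: the reference characteristic is generated from the common exponential (β) model of an NTC thermistor with assumed values of R₂₅ and B, and polynomials of several orders are fitted by least squares; the relative error of approximation remains large regardless of the order.

```python
import numpy as np

# Assumed beta-model of an NTC thermistor: R(T) = R25 * exp(B*(1/T - 1/T25)).
R25, B = 10e3, 3950.0               # assumed nominal parameters
T_C = np.linspace(0.0, 180.0, 50)   # temperature in deg C
T_K = T_C + 273.15
R_ref = R25 * np.exp(B * (1.0 / T_K - 1.0 / 298.15))

u = T_C / 180.0                     # normalised temperature, for numerical conditioning
for order in (4, 6, 8):
    p = np.polyfit(u, R_ref, order)                  # least-squares polynomial approximation
    rel_err = np.max(np.abs(np.polyval(p, u) - R_ref) / R_ref)
    print(f"order {order}: max relative error = {rel_err:.0%}")

# The relative error stays large for any order, because R(T) spans roughly three
# orders of magnitude; fitting a polynomial to ln(R) would behave much better,
# which confirms that the relationship itself is not polynomial.
```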

It is also not evident that the resulting estimates of the model parameters may be biased even if parametric identification of the model is based on data free from systematic errors – i.e., data obtained by means of properly calibrated measuring instruments.

Example 5.15: The parametric identification of the model u = Ri (from Example 5.4) consists in estimation of the value of the resistance R on the basis of N pairs of the corresponding values of the current i and voltage u, obtained by means of measurement: $\tilde{i}_n$ and $\tilde{u}_n$ (for n = 1, ..., N) – the latter being subject to random errors $\Delta\tilde{i}_n$ and $\Delta\tilde{u}_n$, respectively. The least-squares method, when applied for this purpose, yields:

$$\hat{R}_{LS}(N) = \arg\inf_{R} \sum_{n=1}^{N} \left( \tilde{u}_n - R \tilde{i}_n \right)^2 = \frac{\frac{1}{N}\sum_{n=1}^{N} \tilde{i}_n \tilde{u}_n}{\frac{1}{N}\sum_{n=1}^{N} \tilde{i}_n^2}$$

It may be shown that if $\Delta\tilde{i}_n$ are realisations of identical zero-mean independent random variables with the variance $\sigma_i^2$, then:

$$\hat{R}_{LS}(N) \xrightarrow[N \to \infty]{} \frac{\lim_{N \to \infty} \frac{1}{N}\sum_{n=1}^{N} \dot{i}_n^2}{\lim_{N \to \infty} \frac{1}{N}\sum_{n=1}^{N} \dot{i}_n^2 + \sigma_i^2}\, \dot{R} \le \frac{i_{max}^2}{i_{max}^2 + \sigma_i^2}\, \dot{R} < \dot{R}$$

where $\dot{i}_n$ is the (unknown) exact value of $\tilde{i}_n$, and $i_{max}$ is the largest value of the current used for model identification. This means that even an unlimited increase in the number of data pairs $\tilde{i}_n$ and $\tilde{u}_n$ may not suffice for obtaining an error-free estimate of the resistance. If the variances of the errors in the data $\tilde{i}_n$ and $\tilde{u}_n$ – $\sigma_i^2$ and $\sigma_u^2$, respectively – are known a priori, then the bias may be avoided by using a more complex method of estimation, e.g., an appropriate version of the total least-squares method²⁹, viz.:

$$\hat{R}_{TLS}(N) = \arg\inf_{R} \left\{ \frac{1}{\sigma_u^2 + \sigma_i^2 R^2} \sum_{n=1}^{N} \left( \tilde{u}_n - R \tilde{i}_n \right)^2 \right\}$$

Alternatively, this bias may be neutralised by averaging the voltage and current data:

$$\hat{R}(N) = \frac{\frac{1}{N}\sum_{n=1}^{N} \tilde{u}_n}{\frac{1}{N}\sum_{n=1}^{N} \tilde{i}_n} = \frac{\frac{1}{N}\sum_{n=1}^{N} \left( \dot{u}_n + \Delta\tilde{u}_n \right)}{\frac{1}{N}\sum_{n=1}^{N} \left( \dot{i}_n + \Delta\tilde{i}_n \right)} = \dot{R}\; \frac{\frac{1}{N}\sum_{n=1}^{N} \dot{i}_n + \frac{1}{N}\sum_{n=1}^{N} \Delta\tilde{u}_n / \dot{R}}{\frac{1}{N}\sum_{n=1}^{N} \dot{i}_n + \frac{1}{N}\sum_{n=1}^{N} \Delta\tilde{i}_n} \xrightarrow[N \to \infty]{} \dot{R}$$

but at the cost of a larger variance of the estimates $\hat{R}(N)$. The symbol $\dot{u}_n$, in the above formula, stands for the (unknown) exact value of the voltage whose measured value is $\tilde{u}_n$.
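The bias of the least-squares estimate and the behaviour of the averaging estimate may be confirmed by a simple simulation. The sketch below uses assumed numerical values of the resistance, of the current range and of the error variances; it is an illustration of the effect discussed above, not a fragment of the original example.

```python
import numpy as np

rng = np.random.default_rng(0)
R_true = 100.0                        # assumed exact resistance [ohm]
N = 200_000
i_exact = rng.uniform(0.01, 0.1, N)   # exact currents, i_max = 0.1 A
sigma_i, sigma_u = 0.01, 0.5          # assumed standard deviations of the errors
i_meas = i_exact + rng.normal(0.0, sigma_i, N)
u_meas = R_true * i_exact + rng.normal(0.0, sigma_u, N)

R_ls = np.sum(i_meas * u_meas) / np.sum(i_meas ** 2)   # least-squares estimate (biased)
R_avg = np.mean(u_meas) / np.mean(i_meas)              # averaging estimate (asymptotically unbiased)

bound = R_true * np.max(i_exact) ** 2 / (np.max(i_exact) ** 2 + sigma_i ** 2)
print(f"R_LS  = {R_ls:7.2f}  (upper bound from the formula above: {bound:.2f})")
print(f"R_avg = {R_avg:7.2f}  (exact value: {R_true})")
```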

The cognitive status of mathematical modelling depends on the reliability of some key assumptions it is based upon, such as the following:
– The SuMo may be satisfactorily described in terms of its parts or features (considered to be significant) and relations among them (the so-called system paradigm).
– There is some time-invariant regularity in the behaviour of the SuMo (the so-called uniformity paradigm).
But the fundamental difficulty is related to the relationship of the model to physical reality.

5.5.2 Mathematical models and instrumentalism

The cognitive status of mathematical models is assessed differently from the realist and the anti-realist point of view. All that was said about scientific realism in Chapter 4 applies to mathematical models as being parts or equivalents of scientific theories. As far as anti-realism is concerned, its most widespread version, called instrumentalism, is of importance in this context, and therefore will be discussed in this subsection.

29 I. Markovsky, S. van Huffel, “Overview of total least-squares methods”, Signal Processing, 2007, Vol. 87, pp. 2283–2302.


According to instrumentalists, scientific theories should be seen as (useful) instruments for the organisation and application of knowledge, as well as for classification and prediction of observable phenomena. Two forms of instrumentalism should be distinguished: syntactic and semantic³⁰. Syntactic instrumentalism treats the theoretical claims of theories as syntactic-mathematical constructs which do not meet truth conditions, and therefore lack any assertoric content. Semantic instrumentalism takes theoretical statements to be meaningful, but only if they are fully translatable into assertions involving only observational terms. Syntactic instrumentalism fails to explain how scientific theories can be empirically successful in novel predictions if theories fail to describe (even approximately) an unobservable reality. The major problem associated with semantic instrumentalism is that theories have excess content over their observational consequences, in the sense that what they assert cannot be fully captured by what they say about the observable phenomena.

Constructive empiricism, developed by Bastiaan C. van Fraassen (cf. Section 4.4), is considered to be the most influential contemporary version of instrumentalism. It may be roughly outlined as follows: scientists are not expected to believe that any theory is true; what matters for them is the veracity of all (past, present and future) theory-based observable consequences³¹.

The most popular argument in favour of instrumentalism is called the argument from pessimistic induction. According to its proponents, from the point of view of today's science, almost all sophisticated theories established more than, let's say, 50 years ago can be seen to be false; but if all past theories have been found in some way incorrect, one may reasonably infer that all or almost all present theories will be found wrong in the second half of the twenty-first century. According to the instrumentalists, despite this discontinuity in the history of theories, there has been a steady and cumulative growth in the scope and accuracy of their observable predictions; theories have become increasingly better at describing the phenomena, their only proper task.

There is no agreement on whether the epistemological stance, realism or instrumentalism, has any impact on research methodology and its productivity³². It is, however, evident that this stance may have some influence on the language used for research description. The following paragraphs are devoted to exemplification of this conclusion in reference to basic concepts of mathematical modelling.

Mathematical model. For realists, the mathematical model of a SuMo is its description in terms of mathematical structures, which provides true, partially true or approximately true information about this SuMo. Since it contains truth, it may be used as a valid basis for justified statements about reality. Therefore, realists are inclined to speak about identification of the SuMo rather than about identification


of its model. Consequently, mathematical modelling is for realists a sequence of operations aimed at determination of the structure of the SuMo and measurement of its parameters. Realists acknowledge that any model reflects only some aspects of the SuMo, but stress that this is due to the limitation of the cognitive means rather than to an arbitrary decision of the model designer. Realists acknowledge that the cognition of the SuMo is always limited and uncertain, but they point out that, by being constantly improved, it may asymptotically approach truth. For instrumentalists, the mathematical model of a SuMo is a mathematical formalism that enables one to approximately predict the behaviour of the SuMo under various conditions in order to use it for various practical purposes. The procedure for identification of the mathematical model of a SuMo is a sequence of operations aimed at selection of an adequate structure of the model (structural identification of the model), and estimation of its parameters (parameter identification of the model). Instrumentalists clearly state that the model reflects only some aspects of the SuMo or some of its properties, viz. those which are important for potential (intended) applications of the model. They are inclined to avoid any declarations on the relationship between the model and reality, following Ludwig J. J. Wittgenstein's recommendation: "Whereof one cannot speak, thereof one must be silent"³³.

Structural identification. Realists and instrumentalists agree that structural identification can hardly be organised as an algorithmic procedure, but they draw different conclusions from this observation. Realists prefer white-box modelling because, according to their conviction, the structure of the model should be derived from the knowledge about the SuMo, of its structure and other features. Instrumentalists are more open to the use of black-box models and claim that, as a rule, the choice of the structure of the model may be based on some intuitive premises, on anterior experience or on trial-and-error methodology – not necessarily on the knowledge of the internal structure of the SuMo.

Parameter identification. Regardless of whether the black-box approach or the white-box approach is used for structural identification, parameter identification must be directly or indirectly based on the measurement data. Nevertheless, the understanding of this operation by realists and instrumentalists is different. Realists claim that the parameters of the model – if it is properly designed during structural identification – are preferably physical quantities that should be directly measured rather than computed on the basis of measurement results. Instrumentalists are not so much interested in the nature of the parameters, but rather in their numerical influence on the model behaviour. This difference is consistent with the realists' preference for white-box models and the instrumentalists' preference for black-box models.

Truth-related issues. For realists, the mathematical model of a SuMo is a form of knowledge about this SuMo containing the elements of objective truth. Realists

33 L. Wittgenstein, Tractatus Logico-philosophicus, 1922, § 7.


believe that by consecutive improvements the model may approach reality without limit. Thus, they implicitly assume the existence or possibility of a perfect model. They accept, of course, the fact that the model of a SuMo yields only an approximate prediction of its behaviour and properties, but they are inclined to explain this fact by the imperfection of our cognitive capabilities. Instrumentalists – since they avoid any declarations on the relationship of the model to reality – put emphasis on its ability to meet requirements concerning its usefulness for a predefined purpose. Both realists and instrumentalists accept the fact that the model structure is always to a certain degree inadequate, and that the model parameters are always determined inaccurately. Realists are inclined, however, to attribute the inadequacy of the structure of the model to the limited cognition of the modelled SuMo, in particular to neglecting some factors, important for the phenomena in the SuMo or for its properties, during the choice of the quantities modelling the SuMo (input, output and influence quantities), or to inappropriate specification of those quantities. Instrumentalists focus their explanation on the choice of the structure of the model, which is inappropriate from a mathematical point of view. Realists and instrumentalists agree that the estimates of the parameters of the model are uncertain due to the errors introduced by the method of parameter identification and its technical implementation, as well as due to the errors corrupting the data used for identification. Instrumentalists easily accept the fact that in practice the assessment of that uncertainty may be done only by comparison of the model under consideration with an extended model, not with reality. Consequently, they avoid the term model verification since it is derived from the Latin adjective verus, which means "true". Realists are always inclined to look for an absolute reference. A difficult problem for those realists who speak about true and false models is the question about the indicators of uncertainty (their definitions and values) which may be used as criteria for discrimination of those two categories of models.

Example 5.16: Let's assume that the circular cross-section of a pivot is to be modelled by means of a regular polygon. The intended model has two parameters, viz. the number of sides K and the length of each of them a. There are infinitely many methods for estimation of those parameters on the basis of a measured value of the pivot's diameter d̃. For an a priori chosen value of K = K̂, one may determine an estimate of a, e.g., by making equal:
– the circumference of the polygon C(K̂, a) ≡ K̂a and the circumference of the pivot C(d̃) ≡ πd̃; or
– the surface of the polygon S(K̂, a) ≡ (1/4)K̂a²[tan(π/K̂)]⁻¹ and the surface of the cross-section S(d̃) ≡ (1/4)πd̃².
The results of estimation will be different:

$$\hat{a}_C = \frac{\pi}{\hat{K}}\,\tilde{d} \quad \text{and} \quad \hat{a}_S = \sqrt{\frac{\pi}{\hat{K}} \tan\!\left(\frac{\pi}{\hat{K}}\right)}\;\tilde{d}$$

respectively; their ratio grows from ca. 0.78 for K̂ = 3 to ca. 0.99 for K̂ = 13. Instead of those elementary methods of estimation, one may apply a more sophisticated variational approach. One may determine, e.g., a minimum value K̂ of the integer parameter K which enables the corresponding estimate â(K̂) of the parameter a to satisfy one of the following conditions:

$$\left| C(\hat{K}, \hat{a}) - C(\tilde{d}) \right| \le \Delta_C \quad \text{or} \quad \left| S(\hat{K}, \hat{a}) - S(\tilde{d}) \right| \le \Delta_S$$

or both of them. In the above conditions, Δ_C stands for the admissible discrepancy between the circumference of the polygon and the circumference of the pivot, while the symbol Δ_S – for the admissible discrepancy between the surface of the polygon and the surface of the cross-section. Without reference to the target application of the mathematical model of the pivot, it is impossible to say which version of this model is better, and a fortiori to say which of them is true or false.
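The two elementary estimates of the side length and their ratio may be computed directly; the short sketch below (with an arbitrary value of the measured diameter) reproduces the values quoted in Example 5.16.

```python
import numpy as np

def a_C(K, d):   # side length matching the circumference of the pivot
    return np.pi * d / K

def a_S(K, d):   # side length matching the surface of the cross-section
    return d * np.sqrt(np.pi / K * np.tan(np.pi / K))

d = 1.0          # measured diameter (arbitrary unit)
for K in (3, 5, 8, 13):
    ratio = a_C(K, d) / a_S(K, d)
    print(f"K = {K:2d}:  a_C = {a_C(K, d):.4f},  a_S = {a_S(K, d):.4f},  ratio = {ratio:.2f}")
```

The printed ratio rises from about 0.78 for K = 3 to about 0.99 for K = 13, confirming that the two equally legitimate estimation methods yield different models of the same cross-section.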

The 2017 review of the role of modelling in scientific explanation, authored by the American philosopher of science Alisa Bokulich (*1971)³⁴, refers to numerous sources whose authors are very much attached to the concepts of "true" models and model-based explanations, sometimes even "entirely true explanations" or "genuine explanations". Except for some special contexts where the use of such expressions is intentionally provocative (as in the title of the book How the Laws of Physics Lie³⁵), it is difficult to accept the predicate "true model", even in the case of a material (physical, biological, etc.) model of a material (physical, biological, etc.) entity.

Example 5.17: Animal experiments are widely used to develop new medicines and to test the safety of other products. Non-human primates (most frequently macaques) are used as biological models of human organisms in pharmacological studies, in behaviour and cognition studies and in genetics. Despite the 98% similarity of the human genome and the genome of the most advanced non-human primates, the extrapolation to human beings of the results of studies made on those primates is subject to non-negligible uncertainty. In this case, however, nobody is inclined to say that the biological model of a biological object is "false", but rather to use adjectives such as "imperfect" or "insufficient".³⁶

For the sake of convenience, when writing texts like this handbook, one may combine the instrumentalist's point of view with the use of the realist's language of presentation (except for specific contexts where this practice could be detrimental to the message conveyed). Although the opposition of realism and instrumentalism is of fundamental importance for philosophers in general and philosophers of science in particular, it seems to have a limited impact on research practice: it manifests

34 A. Bokulich, “Models and Explanation”, [in] Springer Handbook of Model-Based Science (Eds. L. Magnani, T. Bertolotti), Springer, Cham (Switzerland) 2017, pp. 103–118. 35 N. Cartwright, How the Laws of Physics Lie, Oxford University Press, Oxford (UK) 1983. 36 inspired by: C. E. Stinson, Cognitive Mechanisms and Computational Models: Explanation in Cognitive Neuroscience, Ph.D. Thesis, School of Arts and Sciences, University of Pittsburgh, Pittsburgh 2013, Section 5.4.2.1.


itself in the language of reports and publications rather than in experimentation procedures. Since the defence of realism is very difficult, it is easier to speak about research methodology from the instrumentalist's point of view. On the other hand, the language of realists is simpler than that of instrumentalists. The former would say, for example, that "the current in a resistor is proportional to the voltage on it", while the latter – "the relationship between the current in a resistor and the voltage on it may be adequately modelled by means of a linear algebraic equation without intercept", where the adverb "adequately" refers to the purpose of modelling. The realist's way of speaking is closer to the everyday language of electrical engineers; therefore, if there is no risk of imprecision or misunderstanding, convinced instrumentalists use it in everyday research practice. For an instrumentalist, a theory is sufficient for a given purpose, not in abstracto. Thus, the instrumentalist's narrative seems to be intellectually safer (in the sense of the French saying "En cas de doute, s'abstenir", which means "If in doubt, abstain"). Paradoxically, research practitioners seem to act as instrumentalists, speak as realists and declare their attachment to realism – quite frequently naïve realism. During an informal opinion poll conducted by the author at a 2003 conference, a vast majority of research practitioners in the domain of measurement science and technology identified themselves with realism, while their articles revealed their pragmatic instrumentalism in approaching research problems.

6 Measurement

In technoscience, measurement is usually understood as a process of acquiring quantitative data by empirical means. It should be noted that in some domains of technoscience, e.g., in control theory, as well as in some areas of philosophy of science, this concept is used interchangeably with the concept of observation. In this book, however, the latter is understood exclusively as a process of acquiring qualitative data by means of both sensorial apparatus and its technical extensions such as microscopes or telescopes.

6.1 Basic concepts of measurement science

According to the everyday understanding, a measurement is an act of assigning a number to a selected property of an object, of a phenomenon or of an event. This number must be, however, associated with a unit specific to that property; so, 1 kg is a valid result of measurement of the mass of sugar, and 37.6 °C – of the temperature of a human body. A brief synopsis of the historical development of the concept of measurement may be found in a 2004 article by the Australian philosopher of engineering Timothy L. J. Ferris, where the following eclectic definition of measurement is proposed: "Measurement is an empirical process, using an instrument, effecting a rigorous and objective mapping of an observable into a category in a model of the observable that meaningfully distinguishes the manifestation from other possible and distinguishable manifestations"¹.

According to the internationally agreed normative document entitled International Vocabulary of Metrology – Basic and General Concepts and Associated Terms (called VIM hereinafter), measurement is a "process of experimentally obtaining one or more quantity values that can reasonably be attributed to a quantity"², where quantity is a "property of a phenomenon, body, or substance, where the property has a magnitude that can be expressed as a number and a reference"³. A measurement unit is defined there as a "real scalar quantity, defined and adopted by convention, with which any other quantity of the same kind can be compared to express the ratio of the two quantities as a number"⁴. From the point of view of today's science, such an understanding of measurement is not general enough to meet the needs of

1 T. L. J. Ferris, "A New Definition of Measurement", Measurement, 2004, Vol. 36, No. 1, pp. 101–109.
2 International vocabulary of metrology – Basic and general concepts and associated terms (VIM3), Joint Committee for Guides in Metrology (BIPM, IEC, IFCC, ILAC, ISO, IUPAC, IUPAP and OIML), 2008, definition #2.1.
3 ibid., definition #1.1.
4 ibid., definition #1.9.


contemporary scientific research. Seeing measurement as a special case of the parametric identification of a mathematical model seems to be the outcome of the development of measurement theory during the last 150 years.

Although the history of measurements and measures may be traced back to ancient Egypt and Mesopotamia, the first mature attempts at a theoretical approach to measurement appeared only in the second half of the nineteenth century. In 1887, the German physician and physicist Hermann L. F. von Helmholtz published a short treatise "Zählen und Messen, erkenntnistheoretisch betrachtet"⁵ which initiated a stream of works aimed at the development of an axiomatic theory of measurement. Those works reached maturity in the period of 20 years separating the publication of a seminal article "A Set of Independent Axioms for Extensive Quantities"⁶ (1951) – authored by the American philosopher Patrick C. Suppes (1922–2014) – and the appearance of the first volume of a fundamental handbook Foundations of Measurement⁷ (1971). The latter initiated a large-scale synthesis of the so-called representational theory of measurement, a synthesis completed only in 1989–1990, when two consecutive volumes of that handbook appeared⁸,⁹.

In the representational theory of measurement, measurement is defined as the construction of a homomorphism from an empirical relational system ⟨E; r₁ᴱ, r₂ᴱ, ...⟩ into a numerical relational system ⟨N; r₁ᴺ, r₂ᴺ, ...⟩. The empirical relational system consists of a set of empirical objects along with certain qualitative relations among them, while a numerical relational system – of a set of numbers and specific mathematical relations among them.

Example 6.1: In 1812, the German geologist and mineralogist Carl F. C. Mohs (1773–1839) proposed an ordinal scale of mineral hardness. It is based on the ability of one natural sample of mineral to scratch another mineral visibly. The samples of matter used by Mohs are different minerals found in nature: talc (1), gypsum (2), calcite (3), fluorite (4), apatite (5), orthoclase (6), quartz (7), topaz (8), corundum (9) and diamond (10). The hardness of any other material is measured against that scale by finding the hardest material that the given material can scratch, or the softest material that can scratch the given material. In this case, the empirical relational system consists of the set:

E ≡ {talc, gypsum, calcite, fluorite, apatite, orthoclase, quartz, topaz, corundum, diamond}

5 H. von Helmholtz, “Zählen und Messen, erkenntnistheoretisch betrachtet”, [in] Philosophische Aufsätze, Eduard Zeller zu seinem fünfzigjährigen Doctorjubiläum gewidmet, Fues’ Verlag, Leipzig 1887, pp. 17–52. 6 P. Suppes, “A Set of Independent Axioms for Extensive Quantities”, Portugaliae Mathematica, 1951, Vol. 10, pp. 163–172. 7 D. M. Krantz, R. D. Luce, P. Suppes, A. Tversky, Foundations of Measurement, Vol. 1 (Additive and Polynomial Representations), Academic Press, New York 1971. 8 P. Suppes, D. M. Krantz, R. D. Luce, A. Tversky, Foundations of Measurement, Vol. 2 (Geometrical, Threshold, and Probabilistic Representations), Academic Press, New York 1989. 9 R. D. Luce, D. M. Krantz, P. Suppes, A. Tversky, Foundations of Measurement, Vol. 3 (Representation, Axiomatization, and Invariance), Academic Press, New York 1990.


and a set of empirical relations, which contains the equality of mineral hardness (r₁ᴱ) and the strict inequality of mineral hardness (r₂ᴱ) – both checked by the scratching test. The corresponding numerical system is composed of:

N ≡ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}

and two relations: = (r₁ᴺ) and < (r₂ᴺ).
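The homomorphism of Example 6.1 may be illustrated with a short sketch. In the Python fragment below (an added illustration; the outcomes of the scratch tests for the tested sample are assumed, not empirical), the ordinal measurement of an unknown material consists in locating it between the hardest reference mineral that does not scratch it and the softest reference mineral that does.

```python
# Numerical images of the reference minerals (the set N of Example 6.1).
mohs = {"talc": 1, "gypsum": 2, "calcite": 3, "fluorite": 4, "apatite": 5,
        "orthoclase": 6, "quartz": 7, "topaz": 8, "corundum": 9, "diamond": 10}

def measure_hardness(scratched_by):
    """Ordinal measurement of an unknown sample.
    `scratched_by` maps each reference mineral to the empirical outcome of the
    scratch test (True if that mineral visibly scratches the sample)."""
    harder = [mohs[m] for m, result in scratched_by.items() if result]
    softer = [mohs[m] for m, result in scratched_by.items() if not result]
    # The hardness of the sample lies between the two returned numbers.
    return max(softer, default=0), min(harder, default=11)

# Assumed (hypothetical) test outcomes for a steel blade: scratched by orthoclase
# and every harder mineral, not scratched by apatite or any softer mineral.
outcomes = {m: mohs[m] >= 6 for m in mohs}
print("Hardness between", measure_hardness(outcomes))   # -> (5, 6)
```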
; y₂ − f₂(z₂; p₂) = 0. It can be transformed into the inverse model x = f₁⁻¹(y₁; p₁) · f₂⁻¹(y₂; p₂), but not into an explicit forward model.

One of the operators, C or R, must be determined in advance, during calibration, on the basis of a set or sets of reference data:

$$\tilde{D}^{cal} = \left\{ \tilde{x}^{cal}(t), \{\tilde{y}_n^{cal}\} \right\}, \quad \tilde{x}^{cal}(t) \in X(T)$$

that cover the space of variation of x(t), denoted by the symbol X(T). The uncertainty of those data, which is on the whole unavoidable, contributes to the uncertainty of the result of calibration.

6.3.2 Development step #2

Let's move to the second step of the meta-model presentation, i.e., to its generalisation to measurement situations where neither influence quantities nor signals controlling the responses of the SuM can be neglected. First, a generalised influence quantity V, which may have an impact on the behaviour of both the SuM and the MS, should be added. According to the VIM, an influence quantity is a "quantity that, in a direct measurement, does not affect the quantity that is actually measured, but affects the relation between the indication and the measurement result"³⁹. It is


influencing the raw result of measurement, but – in contrast to disturbances – its value may be estimated and taken into account in the process of measurand reconstruction because it is measured or controlled. To produce an estimate X̂ of the measurand X, the MS is acquiring two signals: a signal which is carrying information on the measurand X and a signal which is carrying information on the generalised influence quantity V. It is, moreover, generating two signals: a signal S_X, which is exciting the SuM to provoke a desirable manifestation of the measurand X, and a signal S_V, which is controlling it to create a desirable state of the generalised influence quantity V.

The generalised formulation of the meta-model will be restricted to the most typical measurement situations when V ≡ v, S_X ≡ {s_n} and S_V is absent. The diacritical signs such as hats, tildes and dots over the corresponding signals will be used in the same way as for x(t) and y_n. Under those assumptions, the generalised model of reconstruction takes the form:

$$\hat{x}(t) = \mathcal{R}\left[ \{\tilde{y}_n\}, \hat{v}, \{\dot{s}_n\}; p_{\mathcal{R}} \right]$$

where v̂ is the estimated (measured) value of the vector of influence quantities, and {ṡ_n} is the exact value of the control signal. The operator of measurand reconstruction R is now an inverse, approximate inverse, partial inverse or pseudoinverse of the forward model of conversion:

$$\mathcal{C}\left[ \dot{x}(t), \dot{v}, \{\dot{s}_n\}, \{\dot{y}_n\}; p_{\mathcal{C}} \right] = 0$$

One of the operators, C or R, must be determined in advance, during calibration, on the basis of an enhanced set of reference data:

$$\tilde{D}^{cal} = \left\{ \tilde{x}^{cal}(t), \tilde{v}^{cal}, \{\tilde{s}_n^{cal}\}, \{\tilde{y}_n^{cal}\} \right\}, \quad \tilde{x}^{cal}(t) \in X(T), \; \tilde{v}^{cal} \in V, \; \tilde{s}^{cal} \in S$$

that cover the spaces of variation X(T), V and S of the generalised variables x(t), v and s, respectively. Again, the uncertainty of those data, which is on the whole unavoidable, contributes to the uncertainty of the result of calibration.

6.3.3 Development step #3

The last step of the meta-model generalisation is aimed at the inclusion of non-canonical measuring systems, in particular of analogue measuring systems (as opposed to the digital measuring systems considered up to now in this section), and of systems for measuring non-physical (economic, social, psychological, etc.) quantities. Let's start with two simple examples: measurement of


current by means of a d'Arsonval-Weston galvanometer and measurement of the intelligence quotient (IQ).

Example 6.5: The electromechanical core of a galvanometer is composed of a small pivoting coil of wire in the field of a permanent magnet. The coil is attached to a thin pointer that traverses a calibrated scale. A little torsion spring pulls the coil and pointer to the zero position. When a direct current (X ≡ i) flows through the coil, then the coil generates a magnetic field which acts against the permanent magnet. The coil twists, pushing against the spring, and moves the pointer: the angular deflection (Y ≡ α) of the pointer is approximately proportional to the current. By visual comparison of the position of the pointer with the calibrated scale, the user of the galvanometer is able to read the final result of measurement, i.e. an approximate value î of the measured current. This process may be viewed as a sequence of two transformations:
– the conversion of an electrical signal (current) whose value is to be measured into an optical signal (an image of the deflected pointer), performed by an electromagnetic coil of the galvanometer;
– the reconstruction of the measurand on the basis of visual comparison of the image of the pointer with the image of the scale, performed by the user of the galvanometer.
Such a decomposition of the measurement process enables one to apply the developed meta-model to the analysis and design of a galvanometer.

It should be noted that in the above example, the user acquires the final result of measurement by optical image recognition, viz. he is recognising the image composed of the pointer and the calibrated scale. It may seem at first glance that this operation is absent in digital measurements; in fact, it is only simpler and more reliable: the user has to recognise sequences of digits which appear on the screen or on an indicator. The number of digits is limited to 10, but the number of their shapes is unlimited. Example 6.6: IQ is an indicator of the intellectual potential of a person, used in psychometrics. Due to the lack of a satisfactory definition of intelligence, it cannot be considered as its measure, but rather as a relatively independent quantity defined by the method for determining its values, i.e., by standardised tests developed for this purpose. IQ is used as a predictor of educational achievements of a person or of his job performance, due to its empirically confirmed correlation with some intellectual capacities. A standardised test, used for IQ measurements, is usually composed of binary- or multiple-choice questions; a point weight is attributed to each question. The test has a mean score of 100 points and a standard deviation of 15 points. It means that 68% of the population score an IQ within the interval 85–115, and 95% within the interval 70–130. The IQ score of an individual is correlated with such factors as the social status of his parents; thus, those factors play the role of influence quantities. This process of IQ measurement may be viewed as a sequence of two transformations: the “conversion” of the intellectual abilities of an individual into a set of test scores, and the “reconstruction” of IQ value by numerical aggregation of those scores. Again, such a decomposition of the measurement process enables one to apply the presented meta-model to its analysis in terms of mathematical tools used in measurements of physical quantities.


The measurement of IQ is an example of weakly defined measurements⁴⁰, which are labelled with this name because of the absence of a mathematical model underlying the definition of the measurand. The majority of measurements in social sciences (especially in psychology and sociology) belong to this category. The meta-model of measurement, presented here, does apply to them, and may be used for their interpretation: the Bayesian framework enables one to treat them in the same way as the strongly defined measurements characteristic of natural sciences. Let's consider a relatively simple, but easily generalisable case of a scalar integer-valued measurand x (such as the one modelling IQ) and a raw result of measurement y being a vector of binary scalar variables (such as those modelling yes-no answers to the IQ test). Let's further assume that x is a realisation of a scalar random variable whose probability distribution function is Pr(x), and y is a realisation of a random variable whose probability distribution function is Pr(y). The corresponding mathematical model of the conversion has the form of the conditional probability distribution function Pr(y|x). The measurand reconstruction consists in its inversion according to the Bayes formula:

$$\Pr(x \mid y) = \frac{\Pr(y \mid x)\,\Pr(x)}{\Pr(y)}$$

This conditional probability distribution function enables one to determine both the final result of measurement x̂ (e.g. an estimate of the expected value of the random variable modelling a quantity to be measured) and an indicator of its uncertainty (e.g. an estimate of its standard deviation). This is possible provided the probability distribution functions which appear in the Bayes formula have been determined (estimated) during calibration based on the reference data. Examples 6.5 and 6.6 show the potential of the presented meta-model (logical framework) to cover a much broader class of measurements than those performed by canonical measuring systems. A key decision enabling the adaptation of the meta-model, developed for canonical measuring systems, consists in the proper choice of the quantity considered to be the raw result of measurement. This quantity should belong to the domain of "easily interpretable phenomena", such as visual signals in the times of analogue measuring instruments or digital electrical signals today. Both examples show the importance of a "common denominator" of all measurements, viz. comparison with the standards, whose effectiveness depends on calibration. The general version of the proposed meta-model of measurement is shown in Figure 6.3; the meaning of the symbols used there is summarised in Table 6.3.
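A minimal numerical illustration of this Bayesian reconstruction is sketched below. The conversion model Pr(y|x) – a set of logistic items of increasing difficulty – the prior distribution and all numerical values are assumed for the purpose of the illustration; they are not taken from any standardised test.

```python
import numpy as np

rng = np.random.default_rng(1)
x_values = np.arange(70, 131)                  # admissible integer values of the measurand
prior = np.exp(-0.5 * ((x_values - 100) / 15.0) ** 2)
prior /= prior.sum()                           # Pr(x): discretised N(100, 15)

# Assumed conversion model Pr(y_k = 1 | x): logistic items of increasing difficulty.
difficulty = np.linspace(80, 120, 40)
def p_correct(x):
    return 1.0 / (1.0 + np.exp(-(x - difficulty) / 8.0))

# Simulated raw result of measurement: yes-no answers of a respondent with exact x = 112.
y = (rng.random(40) < p_correct(112)).astype(int)

# Bayes formula: Pr(x|y) is proportional to Pr(y|x) Pr(x).
lik = np.array([np.prod(np.where(y == 1, p_correct(x), 1 - p_correct(x))) for x in x_values])
post = lik * prior
post /= post.sum()

x_hat = np.sum(x_values * post)                           # final result of measurement
u_hat = np.sqrt(np.sum((x_values - x_hat) ** 2 * post))   # uncertainty indicator
print(f"x_hat = {x_hat:.1f}, standard deviation = {u_hat:.1f}")
```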

40 L. Finkelstein, “Widely, strongly and weakly defined measurement”, Measurement, 2003, Vol. 34, pp. 39–48.


Figure 6.3: General version of the proposed meta-model of measurement.

Table 6.3: List of symbols used in Figure 6.3.

X       measurand, i.e. generalised quantity intended to be measured
Y_X     signal carrying information on the measurand
S_X     signal exciting the SuM to provoke a desirable manifestation of the measurand
X̂       estimate of the measurand
Φ[X̂]    function or functional of the estimate of the measurand
V       (generalised) influence quantity
Y_V     signal carrying information on the influence quantity
S_V     signal exciting the SuM to provoke a desirable manifestation of the influence quantity
V̂       estimate of the influence quantity


^ V, YV , SV and V ^ denote generalised variables as defined The symbols X, YX , SX , X, in Subsection 6.3.2. The meta-model from Figure 6.3 differs from that of Figure 6.2 in the extension of the conversion-reconstruction function to the influence quantities, and in the appearance of some additional functions: interpretation, condition^ of the measurand X is usually ing and control. The interpretation of an estimate X aimed at providing the MS user with some conclusions, of quantitative or qualita^ by using algorithmic means. It can be, for tive nature, that may be inferred from X example, a binary (success-failure) result of testing a manufactured device. The conditioning is aimed at establishing the required measurement conditions by generating signals SX and SV that influence the SuM, e.g., stabilise its temperature. The control function is not directly related to processing of measurement information, but is indispensable for proper communication of the MS with its user, and for coordination of all other functions of the MS.

6.4 Interpretation of key concepts of measurement science in terms of mathematical modelling

In this section, three fundamental concepts of metrology – viz. measurand, calibration and uncertainty – are interpreted in terms of the mathematical meta-model of measurement, presented in the previous section.

6.4.1 Measurand

The stable functioning of modern society largely depends upon measurements: measuring is indispensable not only in technoscientific research but also in industry, in agriculture, in health care, in education, …, in everyday life. The effectiveness and reliability of many economic and social processes critically depend on what we measure and what the uncertainty of the results of our measurements is. The definition of any new measurand is, therefore, an act not only of pragmatic but also of moral importance.

Example 6.7: The operational definition of a measurand and the measurement method underlying this definition may significantly influence economic decisions based on the corresponding results of measurement. Here are two examples of such situations:
– The energy consumption of a vacuum cleaner is measured for an empty dust bag, while during normal exploitation it grows with the quantity of dust inside; so, the real energy consumption may be higher by as much as 40% than that declared in the technical specification of a vacuum cleaner.
– The energy consumption of a refrigerator is measured when its door is closed, while during normal exploitation it grows significantly when the door is open.


In the absence of international standards, the companies that define the testing procedures for their products "optimise" those procedures in such a way as to make the specifications of their products compliant with ecological requirements.⁴¹

In the language of the proposed meta-model, a measurand is defined as a generalised variable which appears in a mathematical model of the SuM. An estimate of the (generalised) value of that variable is to be obtained as a result of measurement. Let's consider three non-trivial examples, in which the mathematical model of the SuM binds at least two variables.

In the language of the proposed meta-model, measurand is defined as a generalised variable which appears in a mathematical model of the SuM. An estimate of the (generalised) value of that variable is to be obtained as a result of measurement. Let’s consider three non-trivial examples, in which the mathematical model of the SuM is binding at least two variables. Example 6.8: A source of light may be modelled with a real-valued function x ðλÞ, where x is a scalar real-valued variable modelling light intensity, and λ – a scalar real-valued variable modelling wavelength. A discrete or continuous estimate of the function x ðλÞ, called optical (intensity) spectrum, is usually obtained by means of a measuring instrument called spectrophotometer.

Example 6.9: The relationship between constant current and constant voltage in an electrical circuit may be modelled with an algebraic linear equation: u = Ri, where u is a real-valued variable modelling voltage, i – a real-valued variable modelling current, and R – a parameter of the model called resistance. An estimate of the parameter R is obtained as a result of measurement performed by means of a measuring device called an ohmmeter.

Example 6.10: The relationship between time-varying current, modelled by means of a scalar real-valued function i(t), and time-varying voltage, modelled by means of a scalar real-valued function u(t), at the input of an amplifier may be modelled by a system of first-order ordinary differential equations which are linear and have constant parameters (SODE), where t is a real-valued variable modelling time. The input impedance of the amplifier is well defined if its operating point and load are standardised. The impedance may be represented by:
– a set of complex numbers being its values corresponding to selected values of frequency,
– a set of real-valued parameters of the SODE,
– a set of real-valued parameters of a scalar higher-order differential equation equivalent to the SODE.
Depending on the required representation of impedance, various measuring instruments and techniques may be applied for its measurement.

41 L. Grasberger, Wie die Industrie Testverfahren manipuliert, Sendung von 11. November 2017, Bayerischer Rundfunk, https://www.br.de/radio/bayern2/sendungen/iq-wissenschaft-und-for schung/abgasbetrug-und-oekoschwindel-wie-die-industrie-testverfahren-manipuliert-100.html [2018-07-23].


6.4.2 Calibration

In the language of the proposed meta-model, calibration may be defined as parametric identification of the mathematical model of conversion (forward-model-based approach) or of the mathematical model of reconstruction (inverse-model-based approach), on the basis of reference data representative of X, Y, V and S:

$$D^{cal} = \left\{ X_n^{cal}, Y_n^{cal}, V_n^{cal}, S_n^{cal} \mid n = 1, 2, \ldots, N \right\}$$

The first, more traditional approach consists in estimation of the parameters of the operator C, which are used in the reconstruction procedure, together with a priori information on the structure of this operator. Inversion, pseudoinversion, approximate inversion or partial inversion of the model of conversion during reconstruction is in this case inevitable. The forward-model-based approach is founded on the following premises:
– The final result of measurement x̂(t), generated by R, should belong to the space X(T).
– Its image C[x̂(t), ...; p̂_C], where p̂_C is an estimate of p_C resulting from calibration, should be close to zero, but not necessarily exactly equal to zero.
– Optionally, it should meet some additional requirements reflecting available a priori information about the properties of the measurand (such as non-negativity, bounded support or band-limited frequency spectrum) derived, e.g., from physical phenomena underlying the functioning of the SuM.

The second, more modern approach consists in direct identification of the reconstruction operator R, satisfying the above requirements, during calibration; the method of calibration is in this case at least partially determined by the chosen method of reconstruction. As a rule, this approach leads to more complex procedures of calibration than the forward-model-based approach; that's why it has been widely implemented only recently, due to the increase in the performance of computing means. The main idea of this approach is to estimate the parameters p_R in such a way as to control the error of reconstruction, defined as

$$\mathcal{R}\left[ \{\tilde{y}_n^{cal}\}, \hat{v}^{cal}, \{\dot{s}_n^{cal}\}; p_{\mathcal{R}} \right] - \tilde{x}^{cal}(t)$$

rather than the condition:

$$\mathcal{C}\left[ \tilde{x}^{cal}(t), \hat{v}^{cal}, \{\tilde{y}_n^{cal}\}, \{\dot{s}_n^{cal}\}; p_{\mathcal{C}} \right] = 0$$


A justification for using the criteria of calibration defined in the domain of the measurand may be found in many publications⁴². If the measurement problem is properly formulated, then the use of the inverse-model-based approach is always possible. On the contrary, the forward-model-based approach must be slightly modified in some cases, such as shown in Example 6.4. The modified approach is based on the assumption that the model of conversion, identified during calibration, is composed of two parts, viz. a model of the dependence of the measurand x(t) on some auxiliary (latent) variables {z_n} and an invertible model of the dependence of the raw result of measurement ỹ_n on those variables. During measurement, the second model is inverted, and the resulting estimate {ẑ_n} of {z_n} is used for computing an estimate x̂(t) of x(t) by means of the first one.
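The difference between the two approaches to calibration may be illustrated on a toy static measuring channel. In the sketch below (an added illustration with an assumed mildly nonlinear characteristic of conversion), the forward-model-based approach identifies the model of conversion and inverts it during reconstruction, while the inverse-model-based approach identifies the reconstruction operator directly, with the error defined in the domain of the measurand.

```python
import numpy as np

rng = np.random.default_rng(2)

# Reference calibration data: known measurand values and noisy raw results of measurement.
x_cal = np.linspace(0.0, 10.0, 21)
y_cal = 0.5 * x_cal + 0.02 * x_cal**2 + rng.normal(0.0, 0.02, x_cal.size)

# Forward-model-based calibration: identify y = c1*x + c2*x^2, invert it during reconstruction.
C = np.vstack([x_cal, x_cal**2]).T
c1, c2 = np.linalg.lstsq(C, y_cal, rcond=None)[0]
def reconstruct_forward(y):
    # numerical inversion of the identified model of conversion
    return (-c1 + np.sqrt(c1**2 + 4.0 * c2 * y)) / (2.0 * c2)

# Inverse-model-based calibration: identify x = p0 + p1*y + p2*y^2 directly,
# i.e. minimise the reconstruction error in the domain of the measurand.
p = np.polyfit(y_cal, x_cal, 2)
def reconstruct_inverse(y):
    return np.polyval(p, y)

y_new = 3.1   # a raw result obtained during measurement
print(f"forward-model-based reconstruction: {reconstruct_forward(y_new):.3f}")
print(f"inverse-model-based reconstruction: {reconstruct_inverse(y_new):.3f}")
```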

6.4.3 Measurement uncertainty

The GUM is about the methodologies for treating the uncertainty of a scalar measurand (X ≡ x). Its generalisations to more complex measurands may go, depending on the application, in various directions. If the measurand is a vector of quantities, X ≡ x, and consequently X̂ ≡ x̂, the following indicators may be considered:
– the mean and standard deviation or the limits of the distribution of the aggregated absolute error of measurement ‖x̂ − x‖_p;
– the mean and standard deviation or the limits of the distribution of the aggregated relative error of measurement ‖x̂ − x‖_p / ‖x‖_p.
In both cases, various norms may be used, but the Euclidean norm (p = 2) and the Chebyshev norm (p = ∞) are chosen most frequently. Similar generalisations of the indicators of measurement uncertainty may be made when the measurand is a scalar or vector function of time. Except for the measurement situation where the measurand is defined by convention, X remains unknown; therefore, the evaluation of uncertainty must be based on the use of two sets of mathematical models, viz. the set comprising basic models of the SuM, of conversion and of reconstruction, and the set of corresponding extended models. The basic models are those that are used for defining the measurand and for designing the MS. The corresponding extended models are

42 i.a., R. Z. Morawski, A. Podgórski, “Choosing the Criterion for Dynamic Calibration of a Measuring System”, Proc. 5th IMEKO TC4 Int. Symposium (Vienna, Austria, April 8–10, 1992); R. Z. Morawski, A. Podgórski, M. Urban, “Variational Algorithms of Dynamic Calibration Based on Criteria Defined in Measurand Domain”, Proc. IMEKO TC1-TC7 Colloquium (London, UK, September 8–10, 1993), pp. 311–316; L. Sandu, J. Oksman, F. G.A., “Information Criteria for the Choice of Parametric Functions for Measurement”, IEEE Transactions on Instrumentation and Measurement, 1998, Vol. 47, No. 4, pp. 920–924.


provided with some additional features, representative of potential sources of uncertainty, e.g., with some additional random variables modelling those sources.

Example 6.11: Let's assume that the diameter (X ≡ x) of a golden coin is to be measured by means of a measuring system composed of a CCD line detector followed by an analogue-to-digital converter, a digital interface and a computer (cf. Figure 6.4). Let Y ≡ y be the digital code at the output of the converter. It is assumed that the basic model of the coin has the form:

$$z_1 = \frac{x}{2}\cos(\varphi), \quad z_2 = \frac{x}{2}\sin(\varphi) \quad \text{for } \varphi \in [0, 2\pi]$$

Figure 6.4: Optical measurement of the diameter of a coin.

The measurement procedure comprises the following steps:
– The coin is fixed in the holder (the surface of the coin parallel to the surface of the CCD detector, its centre on the projection of the CCD detector centre) and illuminated.
– The code y is recorded, and an estimate x̂ of the diameter x is calculated using the formula x̂ = p₁y + p₀, where p₀ and p₁ are parameters obtained during calibration.
The extended model, corresponding to the assumed basic model, might have the form:

$$z_1 = \left[ \frac{x}{2} + \frac{\Delta r}{2}\sin(100\varphi) \right]\cos(\varphi) + \Delta z_1, \quad z_2 = \left[ \frac{x}{2} + \frac{\Delta r}{2}\sin(100\varphi) \right]\sin(\varphi) + \Delta z_2 \quad \text{for } \varphi \in [0, 2\pi]$$

where Δr/2 is the depth of the 100 engravings on the edge of the coin, and Δz₁ and Δz₂ are realisations of random variables modelling imperfections of the coin positioning. An estimate of the uncertainty component due to the "imperfection" of the basic model of the coin may be obtained by the worst-case evaluation of the difference between the measurement results corresponding to the two models of the coin.
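The worst-case evaluation mentioned in Example 6.11 may be sketched numerically. In the fragment below, the numerical values of the diameter, of the engraving depth and of the positioning imperfection are assumed for illustration; the estimate of the diameter is simulated as the extent of the coin edge projected onto the CCD line.

```python
import numpy as np

x_true, dr = 20.0, 0.05      # assumed diameter and engraving depth [mm]
dz1 = 0.02                   # assumed positioning imperfection along the CCD line [mm]
phi = np.linspace(0.0, 2.0 * np.pi, 20001)

def projected_extent(radius_profile, shift=0.0):
    """Extent of the coin edge projected onto the CCD line (the z1 axis)."""
    z1 = radius_profile * np.cos(phi) + shift
    return z1.max() - z1.min()

d_basic = projected_extent(np.full_like(phi, x_true / 2.0))
d_extended = projected_extent(x_true / 2.0 + (dr / 2.0) * np.sin(100.0 * phi), shift=dz1)

print(f"basic model:    estimated diameter = {d_basic:.3f} mm")
print(f"extended model: estimated diameter = {d_extended:.3f} mm")
print(f"uncertainty component (worst-case difference) = {abs(d_extended - d_basic):.3f} mm")
```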


The presented model-based approach to measurement seems to integrate and unify well the design and implementation methodologies applied in various domains of measurement practice. Although it is far from being fully accomplished, it seems to provide useful and consistent answers to the questions about the role of mathematical modelling in defining measurands, in calibration of measuring systems and in evaluation of measurement uncertainty; moreover, it seems to be at least consistent with the ways of thinking developed in the metrology of non-physical quantities. Its high heuristic potential enables a researcher or an engineer to significantly accelerate the conceptual work on solving a new measurement problem or on designing an experimental setup. More detailed examples of applications of that approach are provided in the author's article of 2013⁴³. A broad overview of the roles of mathematical models in the design of measuring systems may be found in the three-volume Handbook of Measuring System Design⁴⁴.

43 R. Z. Morawski, “An application-oriented mathematical meta-model of measurement”, 2013. 44 P. H. Sydenham, R. Thorn (Eds.), Handbook of Measuring System Design, Wiley & Sons, Hoboken (USA) 2005.

7 Scientific explanation

7.1 Institutional aims of science

The major aim of science is the generation of knowledge or, rather, pieces of knowledge: not only theories and laws but also semantic and mathematical models, catalogues of empirical facts and simple statements derived from such facts, as well as recipes for solving practical problems. Since the time of Aristotle, philosophers have continued to agree that a fundamental distinction should be drawn between two kinds of scientific knowledge: descriptive knowledge and explanatory knowledge¹. Ancient astronomers already knew that each planet periodically reverses the direction of its motion with respect to the background of fixed stars, but only astronomers of modern times were able to explain why. Since methodologies for the generation of knowledge refer to the explanatory power of various pieces of knowledge, the idea of scientific explanation will be covered here before the systematic presentation of methodologies for the generation of knowledge provided in Chapter 8.

Until the middle of the twentieth century, philosophy of science was mainly oriented towards basic scientific disciplines, with a special focus on physics. The advent of technoscience revealed the urgent need for a broader approach encompassing not only basic and applied sciences but also technological development. The boundaries separating those three areas have been blurred: their methodologies have been converging, and pragmatic thinking about the aims of scientific investigation dominates in the respective communities of researchers and engineers.

7.2 Scientific explanation versus scientific prediction

Philosophers of very diverse orientations continue to agree that the fundamental aim of science comprises scientific explanation of phenomena, events and processes subject to investigation or to practical implementation². It is both a tool of research, necessary for testing hypotheses, and a tool of design – necessary for predicting the state, behaviour or properties of objects under development. Philosophers of science, however, significantly disagree about the interpretation of this concept; the equivocal title of the 1989 essay "Four Decades of Scientific Explanation", authored by the American philosopher of science Wesley C. Salmon

1 W. C. Salmon, Four Decades of Scientific Explanation, University of Minnesota Press, Minneapolis 1989, http://mcps.umn.edu/philosophy/13_0Salmon.pdf [2017-02-28], p. 3.
2 ibid., p. 4.


(1925–2001), reflects well the confusion surrounding it³. First of all, scientific explanation should be distinguished from the explication of science-related concepts, considered by logical empiricists to be a principal task of philosophy of science⁴. Next, one must carefully distinguish between offering an explanation for some fact X ("Why did X occur?") and providing grounds for believing it to be the case ("Why should one believe that X occurred?")⁵. One should, finally, take into account that the deductive-nomological schemes of scientific explanation (to be introduced in Section 7.3), despite structural similarity, are not the same as traditional hypothetico-deductive schemes of scientific confirmation (to be covered in Chapter 8)⁶.

Explanation should not be confused with argumentation aimed at showing that something is (or will be) the case, rather than showing why or how it is (or will be). Such a confusion may happen because of the similar wording used for explanation and argumentation, as well as because an explanation may legitimately appear in an argument as one of its premises. Explanation of human behaviour should not, moreover, be confused with moral, legal or pragmatic justification of that behaviour.

According to the Western intellectual tradition, to explain a phenomenon, an event or a process (possibly a class of phenomena, events or processes) means: to remove perplexity, or to change the unknown into the known, or to identify causes of events or phenomena. What is to be explained is called the explanandum, and a set of explaining pieces of knowledge – the explanans. A scientific explanation is an answer to a why-question. According to the Greek philosopher of science Stathis Psillos (*1965), the explanation of a fact (explanandum) is achieved by stating some causal or nomological links between it and the facts that are called upon to do the explaining (explanans). There are two broad approaches to explanation. According to the first, explanations are argumentative: to explain an event is to construct such a line of argumentation that the explanandum follows – logically or with high probability – from certain universal or statistical laws and initial conditions. Most typical species of this genus are deductive-nomological schemes of scientific explanation. According to the second approach, an explanation does not need to refer to any scientific law to be complete; it is enough that it specifies the relevant causal mechanisms. A view consistent with both approaches is that explanation has something to do with understanding and that understanding occurs when the explanandum is fit into an accepted causal-nomological framework.⁷

3 ibid.
4 ibid., p. 5.
5 ibid., p. 6.
6 ibid., p. 7.
7 S. Psillos, Philosophy of Science A–Z, 2007, p. 85.


An issue of importance, associated with the philosophical analysis of the concept of scientific explanation, is its relation to the concept of prediction. By definition, the latter means a statement made about the future⁸. There is a broad class of problems where the differentiation of those two concepts does not raise any problems or doubts.

Example 7.1: The mathematical model of the enzyme-catalysed reaction, developed in Example 5.5, may be used for prediction of the course of the reaction, given initial values of the concentrations [S], [E], [ES] and [P], and specific values of the affinity constants k₁, k₋₁ and k₂. In Figure 7.1, the results of prediction for:
– [S]ₜ₌₀ = 1.0 μM, [E]ₜ₌₀ = 0.1 μM, [ES]ₜ₌₀ = 0 μM and [P]ₜ₌₀ = 0 μM,
– k₁ = 1.0 μM⁻¹s⁻¹, k₋₁ = 0.1 s⁻¹ and k₂ = 0.3 s⁻¹
are shown.⁹
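The prediction of Example 7.1 may be reproduced by numerical integration of the mass-action rate equations of the reaction S + E ⇌ ES → P + E. The sketch below uses the initial concentrations and rate constants listed above; the form of the rate equations is a standard textbook formulation, assumed here rather than quoted from Example 5.5.

```python
import numpy as np
from scipy.integrate import solve_ivp

k1, k_1, k2 = 1.0, 0.1, 0.3          # affinity constants from Example 7.1

def rates(t, c):
    S, E, ES, P = c                  # concentrations in micromoles per litre
    v_bind, v_unbind, v_cat = k1 * S * E, k_1 * ES, k2 * ES
    return [-v_bind + v_unbind,              # d[S]/dt
            -v_bind + v_unbind + v_cat,      # d[E]/dt
             v_bind - v_unbind - v_cat,      # d[ES]/dt
             v_cat]                          # d[P]/dt

sol = solve_ivp(rates, (0.0, 200.0), [1.0, 0.1, 0.0, 0.0], dense_output=True)
S, E, ES, P = sol.sol(200.0)
print(f"after 200 s: [S]={S:.3f}, [E]={E:.3f}, [ES]={ES:.3f}, [P]={P:.3f}")
```

By t = 200 s nearly all of the substrate has been converted into the product, which is the behaviour visible in Figure 7.1.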

Figure 7.1: Course of the enzyme-catalysed reaction described in Example 5.5, S + E ⇌ ES → P + E (rate constants k₁, k₋₁ and k₂): time evolution of the concentrations [S], [E], [ES] and [P].
There is, however, a broad class of research problems where the use of the concept of prediction is controversial. These are, in particular:

8 from Latin præ- = “before” + dicere = “to say”. 9 S. M. Dunn, A. Constantinides, P. V. Moghe, Numerical Methods in Biomedical Engineering, 2006, pp. 229–232.


– problems whose solution consists in retrodiction, i.e., in characterisation of the past behaviour of an object or phenomenon under study;
– problems whose solution consists in identification of a mathematical model not including a variable modelling time;
– problems referring to mathematical models not including such a variable.

Example 7.2: Spectrophotometers are instruments for measuring the absorbance of a liquid substance under study, i.e., the change in the spectrum of light passing through this substance. According to the Lambert–Beer law, the absorbance of a solution of a single compound is proportional to its concentration, and the absorbance spectrum of a solution containing J compounds, x^Ab(λ), is equal to the linear combination of the normalised absorbance spectra of those compounds, x₁^Ab(λ), ..., x_J^Ab(λ), such that:

$$x^{Ab}(\lambda) = c_1 x_1^{Ab}(\lambda) + \ldots + c_J x_J^{Ab}(\lambda)$$

where λ is a scalar real-valued variable modelling wavelength, and c₁, ..., c_J are scalar real-valued variables modelling the concentrations of the corresponding compounds. This model is used for solving two typical problems:
– estimation of x^Ab(λ) on the basis of data representative of c₁, ..., c_J and x₁^Ab(λ), ..., x_J^Ab(λ);
– estimation of c₁, ..., c_J on the basis of data representative of x^Ab(λ) and x₁^Ab(λ), ..., x_J^Ab(λ).
None of them can be classified as prediction in the strict sense of this term, but both may be viewed as instantiations of explanation.
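Both estimation problems reduce to elementary linear algebra. The sketch below solves the second of them – estimation of the concentrations from a measured mixture spectrum – by linear least squares; the Gaussian-shaped normalised spectra of two compounds and all numerical values are assumed for illustration only.

```python
import numpy as np

rng = np.random.default_rng(4)
wavelength = np.linspace(400.0, 700.0, 301)      # nm

def band(center, width):
    return np.exp(-0.5 * ((wavelength - center) / width) ** 2)

# Assumed normalised absorbance spectra of J = 2 compounds.
X = np.column_stack([band(480.0, 30.0), band(620.0, 40.0)])
c_true = np.array([0.7, 1.3])

# First problem: predict the mixture spectrum from known concentrations (Lambert-Beer law).
x_mix = X @ c_true + rng.normal(0.0, 0.002, wavelength.size)   # measured spectrum, with noise

# Second problem: estimate the concentrations from the measured mixture spectrum.
c_hat, *_ = np.linalg.lstsq(X, x_mix, rcond=None)
print("estimated concentrations:", np.round(c_hat, 3))
```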

There is also a broad class of research problems where prediction is possible, but it does not imply any explanation. These are, in particular, the situations where events or phenomena are correlated because they have a common cause, but are not causally related.

Example 7.3: Two symptoms, S1 and S2, of an infectious disease are causally related to the presence and activity of a pathogenic microbial agent, but neither S1 may be explained by S2, nor S2 by S1, even if they appear in a time sequence.

Both explanation- and prediction-type operations play a significant role in the systematic application of technoscientific knowledge, especially:
– in disciplinary, interdisciplinary and transdisciplinary application of scientific knowledge in science;
– in application of scientific knowledge in technology and technological knowledge in science;
– in application of technoscientific knowledge outside technoscience.


Example 7.4: Medicine and the medical sciences are fields where diversified technologies and basic scientific disciplines (such as physics, chemistry and biology) contribute to a common goal: human health and well-being. A typical cycle of medical care includes three non-separable processes: diagnosing, curing and prognosticating. A diagnosis is the result of reasoning about the causes of empirically identified symptoms; so, it is a knowledge-based explanation of those symptoms by their probable causes. A pharmacological, physio-therapeutical or psycho-therapeutical cure is prescribed on the basis of predictions concerning the short-term and long-term consequences of its application in the particular case of the patient under treatment. At the same time, the patient receives an explanation of the prescribed mode of cure application and its possible consequences. At each stage of the therapy, predictions are made concerning the future evolution of the patient’s health condition.

Technical means may be employed at each stage of the cycle of medical care, both for acquisition and interpretation of medical data, and for performing certain medical actions such as surgical interventions. The technical infrastructure of medical care may include such sophisticated instruments as scanners for magnetic resonance imaging (MRI), whose principle of operation makes use of the magnetic properties of a hydrogen nucleus (a single proton) present in water molecules, and therefore in all body tissues. Those nuclei are partially aligned by a strong magnetic field and rotated by radio waves; when returning to equilibrium, they emit a radio signal which is detected by antennas, recorded and processed numerically in such a way as to obtain a detailed image of selected body tissues, including soft tissues such as muscles or brain. Since this signal is sensitive to a broad range of influences (nuclear mobility, molecular structure, flow and diffusion), MRI is a very flexible technique that may deliver measures of many structural and functional parameters of the human body, provided corresponding mathematical models are identified and used for developing dedicated algorithms of data processing10. The most important among them are algorithms for reconstruction of the tissue image from the recorded data, and algorithms for interpretation of this image, aimed at extraction of parameters or features that are informative from the diagnostic point of view. Both categories of those algorithms may be viewed as tools of scientific explanation. If a medical diagnosis relies on information provided by an MRI scanner, then the fate of the patient relies on a chain of scientific explanations, including both medical and technical premises.

Both explanations and predictions are sine qua non conditions of our understanding of nature: “no understanding without explanation”, as categorically stated by the American philosopher of science Michael Strevens (*1965)11. Consequently, explanation has a profound impact on learning; moreover, generating explanations can be a more effective mechanism for learning than receiving explanations12.

10 J. Rydell, Advanced MRI Data Processing, Vol. 1140 (of the series Dissertations), Linköping Studies in Science and Technology, Linköpings Universitet, Department of Biomedical Engineering, Linköping 2007. 11 M. Strevens, “No understanding without explanation”, Studies in History and Philosophy of Science, 2013, Vol. 44, pp. 510–515. 12 T. Lombrozo, “Explanation and Abductive Inference”, [in] The Oxford Handbook of Thinking and Reasoning (Eds. K. J. Holyoak, R. G. Morrison), Oxford University Press, Oxford (UK) 2012, pp. 260–276.


7.3 Nomological explanation

A nomological explanation has the form of a logically correct reasoning leading from the premises contained in the explanans to the explanandum being the conclusion. It makes use of subsumption under a scientific law or a theory, in combination with some initial conditions. Formally, the underlying argument uses the rule of deductive inference called universal instantiation, followed by the rule of deductive inference called modus ponens. The elementary version of the corresponding argument has the following form:

Premise #1:      ∀x: P(x) → Q(x)      [a law or a theory]
Premise #2:      P(c)                 [an initial condition]
Conclusion #1:   P(c) → Q(c)          [universal instantiation]
Conclusion #2:   Q(c)                 [modus ponens]
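The logical skeleton of this argument can be mimicked in a few lines of code. The sketch below (Python; the predicate, the consequent and the example object are invented placeholders) merely encodes a universal conditional, instantiates it for a particular case and applies modus ponens.

    # universal law: for every x, if P(x) then Q(x)
    def P(x):
        return x.get("is_metal", False)        # hypothetical antecedent predicate

    def Q(x):
        return "conducts electricity"          # hypothetical consequent

    def law(x):
        # the conditional rule P(x) -> Q(x)
        return Q(x) if P(x) else None

    c = {"name": "copper wire", "is_metal": True}   # initial condition: P(c) holds
    conclusion = law(c)                              # universal instantiation + modus ponens
    print(conclusion)                                # -> "conducts electricity"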

In the 1948 article “Studies in the Logic of Explanation”14, Carl G. Hempel and Paul Oppenheim (1885–1977) postulated the so-called deductive-nomological scheme of scientific explanation. According to this scheme, the explanans must contain at least one scientific law and some testable empirical facts; moreover, it must be true. The explanation consists in this case in deriving the contents of the explanandum from a scientific law, or from a set of laws, or from a system of laws organised in a valid theory or a (semantic or mathematical) model.

Example 7.5: One may drop a ball, equipped with a velocimeter, from a point located 1 m above the ground, and ask why it is hitting the ground at the velocity of 4.43 m/s. To answer this question, i.e., to explain the result of measurement, one has to introduce a function z(t) modelling the position of the ball changing over time modelled with the variable t, as well as the first derivative of this function z′(t) modelling the velocity of the ball. Two values of z(t) are already known: z(0) = 1 m and z(T) = 0 m, but the time instant T, when the ball is reaching the ground, should be determined. According to the law of universal gravitation, the ball is moving under the influence of the gravitation force which remains practically constant during its displacement between z(0) and z(T). Consequently, according to Newton’s second law of motion, it is moving with constant acceleration whose value is g ≅ 9.81 m/s². Thus, the time dependence of the velocity and position of the ball may be modelled by means of the following equations:

z′(t) = z′(0) + ∫₀ᵗ z″(τ) dτ = 0 − ∫₀ᵗ g dτ = −g t

z(t) = z(0) + ∫₀ᵗ z′(τ) dτ = z(0) − ∫₀ᵗ g τ dτ = z(0) − (1/2) g t²

The velocity at which the ball is hitting the ground is:

z′(T) = −g T, where T = arg_t {z(0) − (1/2) g t² = 0} = √(2 z(0)/g)

Thus: z′(T) = −√(2 z(0) g) ≅ −4.43 m/s.15

13 T. A. F. Kuipers, “Explanation by Specification”, Logique et Analyse, 1986, Vol. 29, No. 116, http://virthost.vub.ac.be/lnaweb/ojs/index.php/LogiqueEtAnalyse/article/view/1119/925 [2017-04-01].
14 C. G. Hempel, P. Oppenheim, “Studies in the Logic of Explanation”, Philosophy of Science, 1948, Vol. 15, No. 2, pp. 135–175.
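The final numbers of Example 7.5 are easy to verify numerically; a few lines of Python reproduce the derivation for z(0) = 1 m and g taken as 9.81 m/s².

    import math

    g = 9.81        # m/s^2
    z0 = 1.0        # m, initial height

    T = math.sqrt(2.0 * z0 / g)            # time at which z(T) = 0
    v_impact = -g * T                      # z'(T) = -g*T = -sqrt(2*z0*g)
    print(round(T, 3), round(v_impact, 2))  # ~0.451 s and ~-4.43 m/s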

The deductive-nomological scheme of scientific explanation seems to “work” in a majority of the technoscientific problems research practitioners have to deal with on an everyday basis. The above example is representative of a broad class of problems which may be formulated as the following question: “Why is such and such a system in such and such a state at the time instant T?” In research practice, this question is answered by solving the differential equations modelling the system’s behaviour over time for given initial or boundary conditions; the result is the scientific explanation of why it is in that state16.

Scientific laws (sometimes called scientific principles) are elementary pieces of knowledge, well established in a scientific discipline or a group of disciplines. As a rule, they have the form of statements, frequently including mathematical equations, which in an abstract way characterise a large collection of empirical facts. The classical (naïve) realists claim that scientific laws are discovered rather than invented because they reflect laws of nature. Moreover, they believe that mathematics may be used for their formulation because nature is “mathematical”. Laws are narrower in scope than scientific theories, which may comprise several laws. The concept of scientific law was very popular in the period of classical science, and started to lose its importance in the second half of the twentieth century. We still teach and refer to such laws of classical science as:
– the law of conservation of energy in all branches of classical physics;
– Kepler’s laws of planetary motion and Newton’s law of gravitation in astronomy;
– Maupertuis’ principle of least action in classical mechanics;
– Boyle’s law and Gay-Lussac’s law in the physics of gases;
– Coulomb’s law, Ohm’s law and Kirchhoff’s laws in the physics of electricity;

15 inspired by: B. Skow, “Scientific Explanation”, [in] The Oxford Handbook of Philosophy of Science (Ed. P. Humphreys), Oxford University Press, Oxford (UK) – New York (USA) 2016, Chapter 25. 16 ibid.


– the law of refraction in physical optics;
– the Beer–Lambert law in the physics of materials.

We are, however, reluctant to speak about Einstein’s laws of special or general relativity. In quantum mechanics, we avoid using the term law: such law-like statements as Heisenberg’s principle of indeterminacy and Dirac’s razor are not called laws because they are considered to be fundamental assumptions rather than pieces of knowledge supported by empirical evidence. The formulation of laws in biology has always been problematic, not only because of the limited possibility of mathematisation of biological phenomena17. Lamarck’s laws of evolution (ousted from the body of biological knowledge by Darwin’s theory of evolution), Cuvier’s law of correlation or Mendel’s laws of inheritance – these are examples of the rare instances when the term law is used in biology.

Two features have been traditionally associated with scientific laws, viz. that they capture an exceptionless regularity about nature, and that they are objective, i.e., independent of us. The practical difficulty related to the first feature consists in differentiating such regularities from accidental law-like regularities. There seems to be a sort of necessity associated with the regularities captured by scientific laws, while that necessity is lacking in accidental regularities. That necessity seems to be well grasped by counterfactual statements, i.e., conditional (“if-then”) sentences in which the antecedent of the conditional (the “if” part) is hypothetical, e.g., “If I had sold my violin, then I would have gotten money for a new computer.” The law-related regularities remain true under a variety of counterfactual conditions, whereas accidental regularities do not18.

Example 7.6: Newton’s second law of motion correctly describes the movement of metallic balls under a wide variety of counterfactual conditions – regardless of the material they are made of, and regardless of the ambient temperature and humidity. In contrast, one can easily imagine various conditions under which the regularity of my everyday behaviour (e.g., coming to the office at 7:00 am) would no longer hold.

Testing laws by means of counterfactual statements is helpful in some situations, and problematic in others, as discussed in the relevant literature19. The second feature traditionally associated with scientific laws – their objectivity – is also problematic due to the fact that counterfactual statements, used for testing, are selected

17 P. K. Dhar, A. Giuliani, “Laws of Biology: Why So Few?”, Systems and Synthetic Biology 2010, No. 4, pp. 7–13. 18 R. DeWitt, “Philosophies of the Sciences: A Guide”, 2010. 19 e.g. ibid., pp. 30–32.


by the researchers; there is no universal criterion for their interest-independent choice.20

Some important scientific principles are called conservation laws (or laws of conservation). They state that certain measurable quantities, characterising physical objects or phenomena, do not change in the course of time within an isolated physical system. The best known among such laws refer to the conservation of energy, mass, momentum and electrical charge:
– Conservation of energy means that energy can be neither created nor destroyed, although it can be changed from one form (mechanical, kinetic, chemical, etc.) into another; in an isolated system the sum of all forms of energy therefore remains constant.
– Conservation of mass means that matter can be neither created nor destroyed. In light of the special theory of relativity, mass is not a conserved quantity, but the conversion of rest mass into other forms of mass-energy is so small that, except in nuclear reactions, rest mass may be thought of as conserved.
– Conservation of linear momentum expresses the fact that a body or system of bodies in motion retains its total momentum (i.e. the product of mass and vector velocity), unless an external force is applied to it.
– Conservation of angular momentum of rotating bodies expresses the fact that a body or system that is rotating continues to rotate at the same rate unless a twisting force (a torque) is applied to it. The angular momentum of a material point is defined as the product of its mass, its distance from the axis of rotation and the component of its velocity perpendicular to the line from the axis.
– Conservation of electrical charge means that the total amount of electric charge in an isolated system does not change with time. Even at the subatomic level, where charged particles can be created, the total amount of charge always remains constant because they are created in pairs with equal positive and negative charge.

The conservation laws are used by researchers – in such fields as physics, chemistry, biology, geology or engineering – for predicting the macroscopic behaviour of systems under study without having to consider the microscopic details of the course of the corresponding physical or chemical processes. Moreover, the conservation laws are used by research practitioners as tools for non-causal explanation following the pattern of explanation introduced by Carl G. Hempel. It should be noted that a similar role in research practice can be attributed to numerous principles of physics or chemistry, which may be derived from or justified by conservation laws; examples are:

20 ibid., p. 32.


– the principle of least time (also called Fermat’s principle), associated with the name of Pierre de Fermat (1607–1665);
– the principle of least action (also called the principle of stationary action), associated with the names of the French mathematician Pierre L. M. de Maupertuis (1698–1759) and the Irish mathematician William R. Hamilton (1805–1865);
– various principles of symmetry in quantum mechanics21.

The deductive-nomological explanation is called deductive because of the deductive nature of the reasoning used, and nomological22 – because of the involvement of laws. It is still referred to in research practice despite the diminishing “popularity” of scientific laws and the growing awareness of the uncertainties omnipresent in the research process. According to Carl G. Hempel, whose name is associated with the formalisation of the deductive-nomological scheme of explanation, to explain a phenomenon or an event is to show that, in view of the circumstances and the corresponding scientific laws, this phenomenon or event could be expected.

Within two decades following the publication of the article “Studies in the Logic of Explanation”23, Carl G. Hempel recognised that not all explanations, applied and recognised by scientists, are of the deductive-nomological nature – some refer to probabilistic or statistical patterns of reasoning. In the 1962 article “Deductive-Nomological vs. Statistical Explanation”24 and in the 1965 book Aspects of Scientific Explanation. . .25, he proposed two types of statistical explanations. The first of them, the inductive-statistical, explains particular occurrences of the explanandum by subsuming them under statistical laws, much as deductive-nomological explanations subsume particular events under universal laws. There is, however, a crucial difference: deductive-nomological explanations subsume the events to be explained deductively, while inductive-statistical explanations subsume them inductively26. In a deductive-nomological explanation, the event to be explained is deductively certain, given the explanans (including the universal laws); in an inductive-statistical explanation, the event to be explained has high inductive probability relative to the explanans (including the statistical laws). The second type of statistical explanation, the deductive-statistical, is to explain general regularities by means of deduction from


more comprehensive statistical laws27. It involves the deduction of a statement in the form of a statistical law, using indispensably at least one law or theoretical principle of a statistical nature. The deduction, referring to the theory of probability, makes it possible to calculate the probabilities characterising the explanandum on the basis of the probabilities characterising the explanans, which have been empirically ascertained. A summary of the above-defined types of scientific explanation is provided in Table 7.1, and their exemplification in Example 7.7.

Table 7.1: Three types of scientific explanation proposed by Carl G. Hempel.

                      Explananda
Laws                  Particular facts          General regularities
Universal laws        deductive-nomological     deductive-nomological
Statistical laws      inductive-statistical     deductive-statistical

Source: W. C. Salmon, Four Decades of Scientific Explanation, 1989, p. 9.

Example 7.7: The following reasoning is an example of a deductive-nomological explanation when applied to a particular fact: – The cells of an infant A have three copies of the chromosome 21. – Any infant whose cells have three copies of chromosome 21 has Down syndrome. – Thus, the infant A has Down syndrome. The following reasoning is an example of an inductive-statistical explanation: – The brain of a person B has been deprived of oxygen for five continuous minutes. – Almost anyone whose brain is deprived of oxygen for five continuous minutes is subject to brain damage. – Thus, most probably, the person B has brain damage.28 The following reasoning is an example of a deductive-statistical explanation: – The regularity of radioactive decay of the carbon isotope C14 may be used for determination of the age of an archaeological finding. – The investigated finding is compared with a similar new object in terms of the content of atoms of that isotope in order to determine the fraction of C14 which has broken down. – Then, using the (statistical) law defining the half-life of C14 (5730 years), the most probable time interval, necessary for the breakdown of that part of the C14 atoms, is estimated.29
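The deductive-statistical reasoning about C14 amounts to inverting the half-life law N(t)/N0 = (1/2)^(t/T_half) for t. A short computation in Python illustrates this; the measured fraction 0.8 is an arbitrary illustrative value.

    import math

    T_half = 5730.0          # years, half-life of C14
    fraction_left = 0.8      # hypothetical measured ratio of C14 in the finding to that in a new object

    # invert N(t)/N0 = (1/2)**(t/T_half) for t
    age = -T_half * math.log(fraction_left) / math.log(2.0)
    print(round(age))        # ~1845 years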

27 W. C. Salmon, Four Decades of Scientific Explanation, 1989, p. 9. 28 based on: G. R. Mayes, “Theories of Explanation”, Internet Encyclopedia of Philosophy (Eds. J. Fieser, B. Dowden), http://www.iep.utm.edu/explanat/ [2017-03-03]. 29 A. Grobler, Metodologia nauk, Wyd. “Aureus”, Kraków 2006, p. 104.


Various versions of the nomological scheme of scientific explanation account for a large part of explanation in the physical sciences. They have turned out, however, to be inadequate for some functional explanations in the life sciences. In response to those difficulties, the Dutch philosopher of science Theodore A. F. Kuipers (*1947) developed in 1986 the scheme of explanation by specification30 which, however, has not received much appreciation in other domains of science, and therefore will not be presented here. A summary of the criticism concerning the inductive-statistical pattern of explanation, illustrated with counterexamples, may be found in the 1992 article by Wesley C. Salmon31.

The nomological schemes of scientific explanation were notoriously criticised for the lack of an explicit reference to causal relationships. This criticism was probably the main premise of the proposition made by the Australian polymath and philosopher Michael J. Scriven (*1928) that explanation should consist in identification of the causes of the phenomenon or event under study, rather than in reasoning32. There are many situations in which evidence may be explained without any reference to scientific laws (e.g. an ink stain on the carpet may be explained by an accidental overturn of an inkwell), and in which those laws may be used – if necessary – for justifying the explanation. This conclusion applies also to random causes which provoke certain phenomena or events not without exception, but with a statistical regularity.

7.4 Causal explanation

A causal explanation is an explanation of why something happened by indicating its causes. A causal relation between two events C and E exists if the occurrence of C causes E. The event C is called the cause and the event E is called the effect. A correlation between two events does not imply causation. On the other hand, if there is a causal relationship between two events, they must be correlated. An important question concerning this type of explanation is whether there can be a singular causal explanation which does not make reference to any scientific laws33.

The causal explanation is of particular importance for the empirical sciences, although the concept of cause is problematic from the philosophical point of view, as already discussed by David Hume in his books A Treatise of Human Nature (1738–1740) and An Enquiry Concerning Human Understanding (1748). David Hume argued

30 T. A. F. Kuipers, “Explanation by Specification”, 1986. 31 W. C. Salmon, “Scientific Explanation”, [in] Introduction to the Philosophy of Science: A Text by Members of the Department of the History and Philosophy of Science of the University of Pittsburgh (Eds. W.H. Salmon, et al.), Prentice Hall, Englewood Cliffs (USA) 1992, pp. 7–41. 32 M. Scriven, “Causation as Explanation”, Noûs, 1975, Vol. 9, No. 1, pp. 3–16. 33 S. Psillos, Philosophy of Science A–Z, 2007, p. 86.


that we do not observe any causation; what we literally observe are events followed reliably by other events. So, if causation is not something we observe in nature, then the concept of causation must come from us, i.e., it must be generated by our minds34. The concept of the cause-effect relationship is, moreover, problematic in quantum physics35. By no means, however, do those objections diminish the importance of causal explanations in medical diagnostics or in the methodology of engineering design. They have, however, inspired important philosophical considerations aimed at working out formal criteria, not appealing to causation, which any valid scientific explanation should meet. One such criterion was put forward by Carl G. Hempel within his deductive-nomological scheme of scientific explanation, viz. the possibility of subsuming investigated phenomena or events under a general law by means of deductive argumentation, in particular – by a causal law.

The following ordinary-language justification was provided two decades ago by the American philosopher of science Stuart S. Glennan: “When I claim that some event causes another event, say that my turning the key causes my car to start, I do not believe this simply because I have routinely observed that turning the key is followed by the engine starting. I believe this because I believe that there is a mechanism that connects key-turning to engine-starting”36. In the same article he claimed that laws are either mechanically explicable or fundamental; the latter are those which represent facts about which no further explanation is possible, e.g., the law of universal gravitation, Maxwell’s equations for electromagnetic fields and the Schrödinger equations for quantum-mechanical systems. According to the Canadian philosopher of science Richard Johns, an explanation is valid if the observed effect can be inferred from the purported cause; it is “a story” about what caused an object to exist or an event to occur37. Causation is a physical process whereas inference is a mental process. There is, consequently, a fundamental difference between the contents of the statement “The physical event C produced the physical event E” and of the statement “From knowing that C occurred, one can infer that E also occurred.” If an effect can be inferred, with certainty, from a sufficiently detailed description of its cause, then the cause C is said to be deterministic.

According to the classical understanding of causality, the effect cannot precede the cause in time. So, in the limit case, the effect appears simultaneously with the

34 R. DeWitt, “Philosophies of the Sciences: A Guide”, 2010, p. 27. 35 J. Pienaar, “Viewpoint: Causality in the Quantum World”, Physics, 2017, Vol. 10, No. 86, https:// physics.aps.org/articles/v10/86 [2018-07-22]. 36 S. S. Glennan, “Mechanisms and the Nature of Causation”, Erkenntnis, 1996, Vol. 44, pp. 49–71. 37 R. Johns, “Inference to the Best Explanation”, Manuscript published on the internet, 2008, http://faculty.arts.ubc.ca/rjohns/ibe.pdf [2017-03-21].


cause, never before. Retrocausality (also called backward causation) is a hypothetical phenomenon that would allow an effect to occur before its cause. It is mainly a subject of science-fiction literature and of thought experiments in philosophy, addressing the question whether the future can affect the present and whether the present can affect the past. This phenomenon is also debated in the context of quantum physics, but it will be excluded from consideration here.

Example 7.8: Let’s return to the low-pass electrical filter considered in Example 5.8. When asking for explanations related to this device, one may mean their various kinds: both why-type and how-type explanations, both of them referring to the semantic and mathematical models of the filter. The why-type explanation referring to the mathematical model developed in Example 5.8,

T u2′(t) = −u2(t) + u1(t), with T ≡ RC,

may, e.g., answer the question why the measurement data, representative of the output voltage u2(t), follow the pattern shown in Figure 7.2 with red lines. One may use for this purpose the equation resulting from inversion of the above model with respect to the input voltage u1(t):

u1(t) = T u2′(t) + u2(t) for t > 0

This equation may be used in various ways for obtaining a numerical representation of u1(t) on the basis of measurement data {ũ2,1, ..., ũ2,N} representative of u2(t), corrupted with zero-mean random errors whose standard deviation is σ. The estimates of u1(t), shown in Figure 7.2, have been obtained using the central-difference method for numerical differentiation of u2(t). The results obtained for σ = 0 (both for N = 100 and N = 200) enable one to guess that the plausible explanation for the measurement data is an abrupt change of the voltage u1(t) from 0 to 1 V.

The above explanation would be much more problematic if the estimate of u1(t) were obtained on the basis of measurement data {ũ2,1, ..., ũ2,N} corrupted with errors whose standard deviation is σ = 0.01, especially for N = 200. When looking at the right column of Figure 7.2, one may hypothesise that the input voltage is not constant for t > 0, but rather oscillating or random. Many signal shapes might fit the results of estimation, and the choice of a single one would depend on the available a priori knowledge about the class of signals the course of the voltage u1(t) belongs to, and about the properties of the random errors the measurement data are corrupted with. Thus, the explanation in this case would be based on abductive reasoning.
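A minimal numerical sketch of the inverse inference described in Example 7.8 – recovering u1(t) from noisy samples of u2(t) by numerical differentiation – may look as follows (Python/NumPy); the sampling parameters are illustrative and do not reproduce Figure 7.2 exactly.

    import numpy as np

    T = 1.0                        # time constant RC (time axis normalised to T)
    N = 200
    t = np.linspace(0.0, 5.0, N)
    u1_true = np.ones_like(t)      # step excitation: u1 jumps from 0 to 1 V at t = 0
    u2 = 1.0 - np.exp(-t / T)      # exact response of the low-pass filter
    sigma = 0.01
    u2_noisy = u2 + sigma * np.random.randn(N)   # measurement data corrupted with random errors

    # inverse model: u1(t) = T*u2'(t) + u2(t), with u2'(t) from finite differences
    du2 = np.gradient(u2_noisy, t)               # central differences inside, one-sided at the ends
    u1_est = T * du2 + u2_noisy

    print(np.mean(np.abs(u1_est[10:] - u1_true[10:])))   # error amplified by differentiation of noise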

Figure 7.2: Explanation of measurement data {ũ2,1, ..., ũ2,N} representative of u2(t) (in red) by means of the estimates of u1(t), obtained on the basis of those data (in blue), and compared with the exact values of u1(t) (in black). [Four panels (a)–(d) corresponding to the combinations of σ ∈ {0, 0.01} (columns) and N ∈ {100, 200} (rows); axes: voltage [V] versus time [T].]

Within the most elementary pattern of causality, called linear causality, a single effect is directly linked to a single cause which precedes the effect in time. More complex patterns of causality – such as domino causality, cyclic causality, relational causality and mutual causality – are superpositions of this elementary one38. The first of those patterns, domino causality, consists in sequential unfolding of effects over time. It can be branching where there is more than one effect of a consecutive cause (and these may go on to have multiple effects). Within the pattern of cyclic causality, a single cause is producing a sequence of effects the last of which is impacting the initial cause. Thus, this is a repeating pattern including a feedback loop. The pattern of relational causality is recognised when the confrontation of two causes – linked by a relation of difference, balance, equivalence or similarity – is producing an effect; if this relation is preserved, then the effect does not change. A cause and an effect follow the pattern of mutual causality if they impact each other.

38 “Six Causal Patterns”, [in] Causal Patterns in Science, Harvard Graduate School of Education, 2008, https://www.cfa.harvard.edu/smg/Website/UCP/pdfs/SixCausalPatterns.pdf [2017-07-08].

In light of the assumption that the effect cannot precede the cause in time, the most intriguing patterns of causality are those which include loops of serial causal links, such as the elementary cyclic structure shown in Figure 7.3. They generate behaviours of physical systems which cannot be directly derived from behaviours of such systems in the absence of those loops. As the authors of the 2008 book Feedback Systems have rightly pointed out:

Simple causal reasoning about a feedback system is difficult because the first system influences the second and the second system influences the first, leading to a circular


argument. This makes reasoning based on cause and effect tricky, and it is necessary to analyse the system as a whole. A consequence of this is that the behavior of feedback systems is often counterintuitive, and it is therefore necessary to resort to formal methods to understand them.39

Figure 7.3: Feedback loop. [Block diagram: the input x and the output of causal link #2 are added and fed into causal link #1; the output y of causal link #1 is fed back through causal link #2.]

Example 7.9: In the simplest case, where both causal links in Figure 7.3 may be adequately modelled using linear functions with the slopes k1 and k2, the dependence of the quantity modelled with the variable y on the quantity modelled with the variable x may be adequately modelled with the following linear equation:

y = k · x, where k ≡ k1 / (1 − k1 k2)

The amplification of the fed-back system, |k|, is greater than |k1| if |k2| ∈ (0, 2/|k1|). This result is counter-intuitive since one could expect that stronger positive feedback should always yield greater amplification.
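The counter-intuitive character of this result is easy to inspect numerically; the short Python sketch below prints the closed-loop gain k for a fixed k1 = 0.8 and increasing feedback strength k2, showing that |k| exceeds |k1| only for k2 below 2/|k1| = 2.5.

    k1 = 0.8
    for k2 in (0.0, 0.5, 1.0, 1.2, 1.3, 2.0, 2.4, 2.6, 3.0):
        k = k1 / (1.0 - k1 * k2)          # closed-loop gain of the feedback structure
        print(f"k2 = {k2:>4}:  k = {k:8.3f},  |k| > |k1|: {abs(k) > abs(k1)}")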

The concept of feedback plays a key role in the vocabulary of cybernetics, whose most concise and informative definition was provided in the title of the 1948 book Cybernetics or Control and Communication in the Animal and the Machine, authored by the American mathematician and philosopher Norbert Wiener (1894–1964) – the “father of cybernetics”. This title indicates, in an implicit way, the omnipresence of feedback in nature and technology. Already in the eighteenth century, the Scottish engineer James Watt (1736–1819) invented the centrifugal governor to regulate the speed of steam engines – an early implementation of negative feedback. A mathematical model explaining its principle of operation was provided a century later by James Clerk Maxwell. Since then, negative feedback has been used in the control of mechanical, electrical, thermal, chemical and other systems – control aimed at increasing the stability or accuracy of those systems by correcting or reducing the influence of unwanted changes.

39 K. J. Aström, R. M. Murray, Feedback Systems: An Introduction for Scientists and Engineers, University Press, Princeton (USA) 2008, p. 1.


Example 7.10: An example illustrating the effectiveness of negative feedback in control engineering is shown in Figure 7.4a. Positive feedback is used in electronics for designing generators of periodic signals: it reinforces the input signal to the point where the output signal of a device oscillates between its maximum and minimum possible states. The detection of positive feedback is a way to explain typical dysfunctions of acoustic amplifiers, such as whistling and howling.


Figure 7.4: Application of the negative feedback: (a) in a control system (desirable response of the controlled system – in black, its response without feedback – in blue, its response with feedback – in red); (b) in an intelligent sensor (static characteristic without feedback – in blue, static characteristic with feedback – in red).

In measurement engineering, negative feedback is used for linearisation of the static input-output characteristics of sensors. An example is shown in Figure 7.4b, where a non-monotonic input-output characteristic is transformed into an almost linear dependence between a non-electrical input quantity x and the output voltage y.


In psychology, violent blushing attacks are explained by referring to the positive feedback called shame loop. This is an affliction of people who blush easily: when they realise that they are blushing, they become even more embarrassed, which leads to further blushing, etc.40

According to the so-called dependence approach, causation is a kind of robust dependence of a discrete event E on another discrete event C. Three types of dependence are of particular importance in this respect:
– nomological dependence (C and E fall under a deterministic law of nature),
– counterfactual dependence (E would not have happened if C had not happened),
– probabilistic dependence (C raises the probability of E).

Causation defined in terms of nomological dependence means that:
– C is spatiotemporally contiguous to E,
– E succeeds C in time,
– all events of the same type as C are regularly followed by events of the same type as E.

The latter condition excludes singular causation, which is defended by some philosophers41. In the simplest – but very restrictive and therefore impractical – case, the regularity requires the cause C to be a necessary and sufficient condition of the effect E. A more inclusive understanding of the regularity requires C to be an insufficient but necessary part of an unnecessary but sufficient condition for E (the so-called INUS condition). Causation is defined in terms of the counterfactual dependence of the effect on the cause if the cause C is counterfactually necessary for the effect E, which means that E cannot appear without the appearance of C. The probabilistic dependence of E on C means that:
– the probability of E given C is greater than the probability of E given ¬C,
– there is no other factor C′ such that the probability of E given C and C′ is equal to the probability of E given ¬C and C′.

40 J. Logan, “The Shame of Psychology”, The UCSB Current, June 23, 2015, http://www.news.ucsb. edu/2015/015523/shame-psychology [2018-07-22]. 41 cf. M. J. Garcia-Encinas, “On Singular Causality: A Definition and Defence”, Manuscript published on the internet, 2009, http://philsci-archive.pitt.edu/5246/ [2017-07-11]. 42 S. Psillos, Philosophy of Science A–Z, 2007, p. 35.


The occurrence of E cannot be inferred with certainty from any description of the cause C if the latter is random. There are three conditions for a line of reasoning LoR to be a valid explanation of a piece of evidence E:
– The LoR makes a claim about something that caused E; it may describe the nature of a known cause, or it may posit the existence of a previously unknown cause (the causation condition).
– E can be inferred from the LoR to a high degree of likelihood, taking into account background knowledge which may include the results of other experiments that have been used for testing the LoR (the inference condition).
– The LoR is relatively likely to be true, compared to competing lines of reasoning, given background knowledge (the plausibility condition).

According to the so-called production approach, the statement “C causes E” means that something in the cause C produces the effect E, i.e., there is “a mechanism” that links the cause C and the effect E in the sense that something (a property, a physical quantity, etc.) is transferred from the cause to the effect. According to mechanistic theories of causation, C causes an event E if and only if there is a causal process that connects C and E43. The term mechanism44 emerged already in the seventeenth century, and it was closely related to mechanics, which was at that time the principal branch of physics. Therefore, the mechanistic approach to scientific modelling and explanation, which became prominent around the turn of the twenty-first century, started to be called new mechanical philosophy (or, for brevity, new mechanism). It emerged as a framework for thinking about the philosophical aspects of many areas of science, but especially of biology, neuroscience and psychology.

There is no unique, generally agreed definition of the concept of mechanism45. The following one seems to be quite representative for the whole spectrum since it refers to the key elements which appear in others: “A mechanism is a structure performing a function in virtue of its component parts, component operations, and their organization. The orchestrated functioning of the mechanism is responsible for one or more phenomena”46. The phenomenon is understood here as the behaviour of the mechanism as a whole: the mechanism is producing, underlying, or maintaining the phenomenon. Realists assume that mechanisms are entities that

43 ibid. 44 from Greek mekhane = “machine” or “instrument” or “device”. 45 C. Craver, J. Tabery, “Mechanisms in Science”, [in] The Stanford Encyclopedia of Philosophy (Ed. E. N. Zalta), Spring 2017 Edition, https://plato.stanford.edu/archives/spr2017/entries/science-mech anisms [2017-07-06]. 46 W. Bechtel, A. Abrahamsen, “Explanation: A Mechanistic Alternative”, 2005.


exist, i.e., are part of reality; for instrumentalists, mechanisms are models of a particular type47. Let’s stick to the latter interpretation.

Example 7.11: In the physiology of neural cells, an action potential occurs when the membrane potential of a specific axon location rapidly rises and falls; an abridged description of the ionic mechanism behind this phenomenon is the following:
– The generation and propagation of an action potential requires sodium influx via voltage-dependent sodium channels that drives the upstroke of the action potential.
– This positive feedback cycle is terminated by sodium channel inactivation that shuts down the channel at depolarised membrane potentials.
– The reduced sodium influx along with increased potassium efflux permits rapid action potential repolarisation.
– The enhanced potassium efflux is mediated by the activity of both voltage-dependent and voltage-independent potassium channels.
– The recovery of sodium channels from inactivation and the slow closing of potassium channels following the action potential determine the refractory period – a period of increased action potential threshold.
– Thus, the kinetics of sodium and potassium channel gating determine not only the action potential shape and duration but also the threshold for action potential generation.48

Since any mechanism is a semantic, mathematical or mixed model of a piece of reality, a critical element of the procedure for its identification is the choice of its structure (cf. the procedure of mathematical modelling described in Section 5.2). This is a heuristic rather than algorithmic operation which must be performed by a researcher taking into account the scope of phenomena to be explained by the mechanism. Various categories of entities may be part of mechanisms, including lower-order mechanisms. Example 7.12: Let’s consider two of the many subsystems of the human body, viz. the cardiovascular and respiratory systems. There are various mechanisms responsible for their functioning: for pumping blood, for inhaling oxygen and exhaling carbon-dioxide. These two subsystems interact in such a way that they must be considered as a composite. From anatomical point of view, the cardiovascular subsystem may be decomposed into the heart, veins, arteries, capillaries etc., and the respiratory system – into the lungs, diaphragm, windpipe, mouth etc. These two subsystems interact due to some integrating parts, such as veins and arteries running through the various parts of the respiratory subsystem.49

There are, however, certain entities which, at least according to Stuart S. Glennan, should not be allowed to be parts of mechanisms: these are entities lacking

47 H. Ruonavaara, “Deconstructing Explanation by Mechanism”, Sociological Research Online, 2012, Vol. 17, No. 2, pp. 7.1–7.16. 48 K. S. Elmslie, “Action Potential: Ionic Mechanisms”, eLS, February 15, 2010, https://onlineli brary.wiley.com/doi/pdf/10.1002/9780470015902.a0000002.pub2 [2018-07-22]. 49 S. S. Glennan, “Mechanisms and the Nature of Causation”, 1996.


“robustness and reality apart from their place within that mechanism”50. If such a restriction is imposed on the valid parts of a mechanism, then mechanical explanation of the behaviour of the electromagnetic field is impossible; at the same time, the electromagnetic field is an important part of many mechanisms, from the mechanism of particle acceleration to the mechanism which produces the Aurora Borealis51. The protagonists of the new mechanical philosophy disagree about the understanding of the cause in the causal mechanism, as well as about its definition52, interpretation and description53. The diversity of views in this respect has been growing rather than diminishing over recent decades, when this paradigm of modelling and explanation has been extended to such scientific disciplines as cell biology, cognitive science, neuroeconomics, organic chemistry, astrophysics, behaviour genetics, phylogenetics and experimental sociology. There is, however, a “common denominator” of various options: the focus on characterising or describing the ontic structures to which explanatory schemes (including arguments) must refer if they are to count as genuinely explanatory.

Example 7.13: According to the mechanical paradigm of explanation, a rainbow is explained by situating that phenomenon in the causal structure of the world, i.e., by an account of how it is produced by such entities as rain drops and eyeballs, the entities having such properties as shapes and refractive indices which causally interact with light propagating from the Sun54.

It should be stressed that not all models are explanatory; some of them are designed only for making accurate predictions, for summarising experimental data or for designing experiments. Example 7.14: One can predict the sunrise using a model of the circadian rhythms of a rooster, but the behaviour of roosters does not explain the sunrise; one can predict sunny weather on the basis of a barometer reading, but the behaviour of the barometer does not explain the sunny weather55. More such examples may be found in the 1992 article authored by Wesley C. Salmon56.

The explanatory power is a constitutive quality of the models which describe the mechanism responsible for a phenomenon under study, i.e., the mechanism that

50 ibid. 51 ibid. 52 more than 20 definitions are mentioned in: H. Ruonavaara, “Deconstructing Explanation by Mechanism”, 2012. 53 C. Craver, J. Tabery, “Mechanisms in Science”, 2017. 54 ibid. 55 C. F. Craver, “When Mechanistic Models Explain”, Synthese, 2006, Vol. 153, pp. 355–376. 56 W. C. Salmon, “Scientific Explanation”, 1992.


explains its diverse features57. The commitment to mechanistic explanation is not related to any particular form of models, but rather to what such models must represent – causal and mechanistic structures. In particular:
– dynamical models are used for characterising temporal aspects of a mechanism;
– network-type models are used for characterising patterns of connectivity, regardless of what units are connected and regardless of what kinds of connections are of special interest.

Those models are considered to be explanatory if they represent the causal structures that produce, underlie or maintain the phenomenon under study58. None of the methodologies of causal explanation presented in this section can be treated as the only valid option, generally approved by philosophers of science and research practitioners. None of them can always be recommended as an alternative for nomological methodologies of explanation: there is a considerable number of problems, even in classical physics, whose causal explanation (or explanation in the language of causality) is impossible, unnecessary or artificial. Among them, the problems whose scientific explanations refer to the laws of conservation are of particular importance for teaching, research and everyday understanding of nature; more examples may be found in the 2013 Ph.D. thesis authored by Mark Pexton59.

7.5 Computer-aided explanation

Computer-aided explanation is not an additional paradigm of explanation, alternative with respect to the nomological and causal paradigms of explanation, but their technical implementation, which started to develop with the advent of computers – universal programmable devices for storing and processing both numerical and symbolic data, the latter including intervals, sets, categories, etc. Initially, the role of computers was limited to numerical preprocessing of empirical data, necessary for explanation. Step by step, this role has been extended to some elements of reasoning, causal or nomological, being a core part of scientific explanation. In the 1980s, when the field of artificial intelligence matured, the philosophers of science started to speak about computational explanation, having in mind a fully automatic explanation performed by computers programmed with appropriate “intelligent” algorithms. The automation of deductive reasoning, i.e., programming of logical rules

57 D. M. Kaplan, “Explanation and Description in Computational Neuroscience”, Synthese, 2011, Vol. 183, pp. 339–373. 58 C. Craver, J. Tabery, “Mechanisms in Science”, 2017. 59 M. Pexton, Non-Causal Explanation in Science Models and Modalities: A Manipulationist Account, Ph.D. Thesis, School of Philosophy, Religion & the History of Science, University of Leeds, Leeds 2013.


of inference, is the simplest part of this enterprise; much more sophisticated is the implementation of inductive and abductive reasoning. Learning by examples is the everyday prototype of inductive reasoning; that’s why machine learning (cf. Section 5.4), inspired by the outcomes of cognitive science, is the driving force of the advancement of the automation of inductive reasoning. Abductive reasoning is distinguished by its methodological uncertainty, related to the need to select the best conclusion (among a number of hypothetical conclusions) taking into account available a priori information about the problem under study and a priori agreed, application-specific criteria of evaluation. The machine learning tools turn out to be useful also in this case since they may help in reducing that uncertainty.

For solving a problem by means of a computer, an algorithm is needed, i.e., a sequence of instructions that should be executed to transform problem-related data into a desirable result. For some problems – e.g., for separating spam emails from regular emails – no algorithm is available because there is no closed-form definition of spam emails. There is available, however, their ostensive definition, e.g., a considerable amount of emails classified as spam. They may be used, together with regular emails, for training an adaptable computer program to differentiate between those two categories of emails. The machine-learning tools are applicable in the situation when one can believe that there is a process that explains the recorded data, although the details of that process remain unknown. In such a situation, a model of the process may be identified, which does not explain everything, but is trained to disclose some patterns or regularities in the data, useful for making predictions and providing explanations, i.e., for answering why- and how-questions. The applications of such models are abundant:
– In banking, the financial data from past operations are used for building models to be applied for the analysis of credit applications and fraud detection.
– In manufacturing, learning models are used for optimisation, control, and troubleshooting.
– In medicine, learning programs support diagnostic procedures.
– In telecommunications, call patterns are analysed for network optimisation and service improvement.
– In physics, astronomy or biology, large amounts of data are analysed by learning supercomputers.
– In web search engines, learning algorithms are applied for web crawling and indexing.60

Many researchers and designers now recognise that, even in applications where the white-box models are identifiable, it can be far easier to develop a black-box model

60 E. Alpaydin, Introduction to Machine Learning, The MIT Press, Cambridge (USA) – London (UK) 2010, pp. 1–4.


by training it with examples of desired input-output behaviour than to program it manually by anticipating the desired response for all possible inputs61.

During the last 50 years, numerous canonical structures of learning models have been put forward, studied and tested in diverse applications. Artificial neural networks (ANNs) are probably the most popular among them. They imitate, in some way, the human-brain processing of information. An ANN is a set of interconnected artificial neurons organised in layers; each neuron has weighted inputs, a transfer function and a single output. The behaviour of an ANN is determined by its architecture, by the class of transfer functions of the neurons it is composed of, and by the learning rules. Learning of an ANN consists in adjusting the weights of the neurons’ inputs, i.e., in their optimisation until the error of ANN-based predictions reaches the specified level of accuracy. Once the network is trained and tested, it can be given new input information to predict the output. ANNs are universal approximators of the non-linear relationships which are frequently encountered in engineering and research practice. They do not require knowledge of the data sources, but they do require large training sets of data to adjust their numerous weights.
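A toy illustration of this weight-adjustment process – a single hidden layer trained by gradient descent on an invented one-dimensional relationship – may be sketched in a dozen lines of Python (NumPy); it is meant only to make the above description concrete, not to be a practical ANN implementation.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
    y = np.sin(3.0 * x)                        # non-linear relationship to be approximated

    W1 = rng.normal(size=(1, 10)); b1 = np.zeros(10)   # input -> hidden weights
    W2 = rng.normal(size=(10, 1)); b2 = np.zeros(1)    # hidden -> output weights
    lr = 0.05

    for epoch in range(2000):
        h = np.tanh(x @ W1 + b1)               # hidden-layer outputs (transfer function: tanh)
        y_hat = h @ W2 + b2                    # network prediction
        err = y_hat - y
        # backpropagation of the mean-squared error
        gW2 = h.T @ err / len(x); gb2 = err.mean(axis=0)
        dh = (err @ W2.T) * (1.0 - h ** 2)
        gW1 = x.T @ dh / len(x); gb1 = dh.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

    print(float(np.mean(err ** 2)))            # training error after weight adjustment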

Example 7.15: In Example 7.8, a white-box mathematical model of a low-pass electrical filter and the data representative of its output were used for making inference about its input voltage. As shown there, these inverse inferences from the data to their source may be quite simple and unambiguous provided the random errors corrupting the data are negligible. Otherwise, the black-box approach, referring to machine-learning tools, might be a better option. It consists in identification of the input-output mathematical model of that circuit on the basis of discrete samples of the voltages u1(t) and u2(t), without any assumptions concerning the internal structure and complexity of the modelled circuit. This approach may be easily generalised to any electrical network composed of capacitors and resistors (RC circuit). If the number of capacitors is K, then the white-box model of the circuit contains K first-order ordinary differential equations; so, its complexity grows in proportion to K. The complexity of a corresponding ML black-box model may be much less dependent on K. In Figure 7.5, the results of inverse inference, computed for three RC circuits (K = 1, 10, 100), are shown; they have been obtained by means of a simple Time Series NARX Feedback Neural Network62 whose structural parameters are as follows: LN – the number of neurons in the hidden layer, LDX – the number of delayed samples of u2(t), and LDY – the number of delayed samples of u1(t). The data for this numerical experiment were generated in the following way:
– The time axis has been normalised to the largest time constant of the modelled circuits, T, and the normalised sampling interval has been set to 0.005.
– The equidistant samples of u2(t), organised in a sequence {ũ2,1, ..., ũ2,N}, have been corrupted using zero-mean normally distributed errors with the standard deviation set to 0.01.
– The corresponding samples of u1(t) have remained error free.

61 M. I. Jordan, T. M. Mitchell, “Machine Learning: Trends, Perspectives, and Prospects”, 2015. 62 from MATLAB Neural Network Toolbox whose documentation is available at https://www.mathworks.com/help/nnet/?requestedDomain=www.mathworks.com [2017-07-15].

Figure 7.5: Explanation of measurement data {ũ2,1, ..., ũ2,N} representative of u2(t) (in red) by means of the estimates of u1(t), obtained on the basis of those data (in blue), and compared with the exact values of u1(t) (in black). [Four panels (a)–(d); the panel titles indicate the settings K = 1, LN = 12, LDX = 6, LDY = 0; K = 10, LN = 11, LDX = 6, LDY = 0; and K = 100, LN = 9, LDX = 6, LDY = 0; axes: voltage [V] versus time [T].]

The data used for training the neural network are shown in Figure 7.5a, while the other parts of Figure 7.5 display the results of estimation of the step-like u1(t) on the basis of the data {ũ2,1, ..., ũ2,N}. The results of testing shown in Figure 7.5b are significantly better than the corresponding results in Figure 7.2b. This is due to a certain affinity between the data used for training the neural network and the data used for its testing: both sets of data represent voltage courses having similar morphological features (rapid changes, flat parts), and the random errors in both sets of data follow the same distributions.
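The NARX network used above comes from MATLAB’s Neural Network Toolbox; a rough functional analogue of the black-box approach can be sketched with generic tools, e.g., a feedforward regressor fed with delayed samples of u2(t). The following Python sketch (NumPy and scikit-learn) only conveys the idea and does not reproduce the network or the data of Figure 7.5.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def delayed_features(u2, n_delays=6):
        # each row: [u2(n), u2(n-1), ..., u2(n-n_delays+1)]
        return np.column_stack([np.roll(u2, d) for d in range(n_delays)])[n_delays:]

    # training data: response of a first-order circuit (T = 1) to a known step-like excitation
    t = np.linspace(0.0, 5.0, 1000)
    u1_train = (t > 1.0).astype(float)
    u2_train = np.zeros_like(t)
    for n in range(1, t.size):                 # simple Euler simulation of u2' = -u2 + u1
        u2_train[n] = u2_train[n-1] + (t[1] - t[0]) * (-u2_train[n-1] + u1_train[n-1])
    u2_train += 0.01 * np.random.randn(t.size)  # measurement noise

    model = MLPRegressor(hidden_layer_sizes=(12,), max_iter=5000, random_state=0)
    model.fit(delayed_features(u2_train), u1_train[6:])   # learn the inverse mapping u2 -> u1

    u1_est = model.predict(delayed_features(u2_train))    # black-box estimate of the excitation
    print(np.mean((u1_est - u1_train[6:]) ** 2))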

In the case of a large-scale integrated circuit, composed of millions of unknown components and millions of their interconnections, when the internal nodes of that circuit are not accessible, the use of white-box models for explanation would be problematic. Then the use of black-box learning models is a solution. ANNs are the most studied and applied canonical structures among them. No less important from a mathematical point of view, although less


widespread in research and engineering practice, are Bayesian networks, Markov network brains, support vector machines, multivariate adaptive regression splines and wavelet networks63. The full diversity of the learning structures is shown on the map of machine learning algorithms which may be found on the internet64. The learning machines are transdisciplinary tools of modelling, prediction and explanation. They are applicable under minimum requirements concerning a priori information on the object under study. The price for this convenience is, however, considerable: very high requirements concerning the quantity and diversity of training data. In many domains of technical and medical sciences, a trade-off solution is possible, such as the use of black-box models subject to significant constraints derived from a priori information about the object under modelling.

Example 7.16: A broad class of physical objects and phenomena may be modelled using a convolution-type integral equation of the form:
\[
y(t) = g(t) * x(t) \equiv \int_0^t g(t - \tau)\, x(\tau)\, \mathrm{d}\tau \quad \text{for } t > 0
\]
where $x(t)$, $y(t)$ and $g(t)$ are real-valued functions modelling – respectively – the input signal, the output signal and the impulse response of the object under study; $*$ is the operator of convolution. Three types of problems are associated with this model:
– having $g(t)$, one may predict the response $y(t)$ corresponding to any excitation $x(t)$;
– having $g(t)$, one may explain any observed response $y(t)$ by finding the corresponding excitation $x(t)$;
– having a sufficiently rich excitation $x(t)$ and the response $y(t)$ to it, one may explain the dynamical properties of the object under study by finding $g(t)$.
In research or design practice, all those problems are solved using uncertain (and sometimes incomplete) data; their solutions are, therefore, uncertain, and sometimes ambiguous. Discrete representations of the solutions to the problems of the second and third category – obtained by means of procedures of deconvolution – are not unique; the selection of the most appropriate option requires additional a priori information about the class of admissible solutions and about the uncertainty of the data.
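A minimal numerical sketch of the first two problem types is given below; the impulse response, the excitation, the noise level and the Tikhonov-type regularisation constant are illustrative assumptions standing in for the more elaborate deconvolution procedures mentioned above.

```python
# Minimal sketch (illustrative assumptions): discrete convolution y = g * x,
# and a regularised deconvolution recovering x from a noisy observation of y.
import numpy as np

rng = np.random.default_rng(2)
dt = 0.01
t = np.arange(0.0, 2.0, dt)
g = np.exp(-t / 0.1) * dt                     # assumed impulse response g(t)
x = ((t > 0.3) & (t < 0.8)).astype(float)     # assumed excitation x(t)

# problem 1: prediction of the response y(t) from g(t) and x(t)
N = len(t)
G = np.zeros((N, N))
for i in range(N):
    G[i, :i + 1] = g[i::-1]                   # lower-triangular convolution matrix
y = G @ x
y_noisy = y + rng.normal(0.0, 0.002, N)

# problem 2: explanation of the observed response by finding x(t); the naive
# inverse amplifies noise, hence the Tikhonov regularisation term lam * I
lam = 1e-3
x_hat = np.linalg.solve(G.T @ G + lam * np.eye(N), G.T @ y_noisy)
print("RMS error of the recovered excitation:",
      float(np.sqrt(np.mean((x_hat - x)**2))))
```

The regularisation constant encodes exactly the kind of a priori information mentioned above: different choices of it yield different admissible solutions.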

63 cf. the following internet documents for definitions: http://cig.fi.upm.es/articles/2014/Bielza2014-FrontCompNeur.pdf; http://www.speciesgame.com/forum/viewtopic.php?t=1931; https://en. wikipedia.org/wiki/Kernel_method; http://www.statsoft.com/Textbook/Multivariate-Adaptive-Re gression-Splines; https://pdfs.semanticscholar.org/86e1/9e350a5e2ee50b9d4a317928f80210780451. pdf; [2017-05-16]. 64 at the address https://machinelearningmastery.leadpages.co/leadbox/147764973f72a2% 3A164f8be4f346dc/5752754626625536/ [2017-05-16].


The inverse inferences from the data to their source – to the causes and unobservable entities65 – may be relatively simple and reliable if the mathematical models of the data are deterministic and invertible, and the random errors corrupting those data are negligible. The complexity and uncertainty of explanation based on inverse inference grow, however, if some of those conditions are not satisfied. This issue is of theoretical significance, as a special case of abductive reasoning; it is also of experimental and practical importance. In many decision-support systems, for example, the cardinality of the domain of variables modelling the effects is much lower than that of the domain of variables modelling the causes, and randomness is omnipresent.

Example 7.17: One of the important functions implemented in systems for monitoring of elderly persons is fall detection. Such a system is expected to alert the healthcare personnel when a fall is identified on the basis of some measured features characterising the behaviour of the monitored person, e.g., the three-dimensional acceleration of selected points of his body, organised in a real-valued vector f. Given this vector, a machine-learning algorithm of classification generates the value of a Boolean scalar variable y: T if launching of the alert procedure is advisable, F otherwise. The causal chain in this case is as follows:
\[
x \in X \rightarrow f \in F \rightarrow y \in \{T, F\}
\]
where X is the set of all possible elementary behaviours of a monitored person that may be classified as falls or non-falls, and F is the set of all possible vectors of features f which may be attributed to the elements of X. A monitoring system which has passed the exploitation tests is characterised by four indicators:
– the probability of detecting a fall when it has happened,
– the probability of detecting a fall when it has not happened,
– the probability of not detecting a fall when it has not happened,
– the probability of not detecting a fall when it has happened.
The explanation of a single outcome, T or F, consists in the identification of a possibly narrow subset of X which might contain the specification of the behaviour being the cause of that outcome. The above-listed indicators and the model of the causal chain x → f → y are insufficient for this purpose: a lot of additional information on the monitored person, his environment and the history of the system functioning must be included in the explanation. Another example of the methodology for computational explanation, with applications to detection of credit fraud and face recognition, may be found in the 2016 article "A Model Explanation System"66.
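A toy version of such a classifier is sketched below; the synthetic acceleration-derived features, their distributions and the logistic-regression classifier are illustrative assumptions, not the design of any real monitoring system.

```python
# Minimal sketch (illustrative assumptions): a binary fall/non-fall classifier
# mapping a feature vector f (derived from 3-D acceleration) to y in {T, F}.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

def synth(n, fall):
    # assumed features: peak acceleration magnitude and post-event stillness
    peak = rng.normal(3.0 if fall else 1.2, 0.4, n)
    still = rng.normal(0.8 if fall else 0.2, 0.1, n)
    return np.column_stack([peak, still])

F_train = np.vstack([synth(500, True), synth(500, False)])
y_train = np.array([True] * 500 + [False] * 500)
clf = LogisticRegression().fit(F_train, y_train)

# two of the four indicators listed above, estimated on fresh synthetic data
F_test = np.vstack([synth(1000, True), synth(1000, False)])
y_test = np.array([True] * 1000 + [False] * 1000)
y_pred = clf.predict(F_test)
p_alert_fall = np.mean(y_pred[y_test])        # P(alert | fall happened)
p_alert_nofall = np.mean(y_pred[~y_test])     # P(alert | no fall)
print(f"P(alert|fall) = {p_alert_fall:.3f},  P(alert|no fall) = {p_alert_nofall:.3f}")
```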

65 P. Humphreys, “Models of Data and Inverse Methods”, 2014. 66 R. Turner, “A Model Explanation System”, Proc. 2016 IEEE Int. Workshop on Machine Learning for Signal Processing (Salerno, Italy, September 13–16, 2016), pp. 1–6.


The goal of any causal explanation is to disclose causal patterns67, such as the deterministic patterns listed and discussed in Section 7.4, in the context of research objectives relative to a particular research project68. Computational modelling enables one to analyse networked causal relationships or aggregates of causal mechanisms including both deterministic patterns and their statistical generalisations. Directed graphs are used in machine learning to encode qualitative statements about causal relationships. Such graphs are composed of nodes connected by directed edges: an edge linking a node $N_i$ with a node $N_j$ carries the information that $N_i$ is a direct cause of $N_j$, i.e., that various interventions on $N_i$ will impact the probability distributions characterising $N_j$69. The methods for processing acyclic directed graphs – which, by definition, are not adequate for representing causal networks with loops – are much more developed than the methods for analysis of directed graphs containing loops. Thus, computational modelling is a partial remedy for the problems implied by the growing complexity of causal explanation. The modelling of real-world events and phenomena, being effects of multiple entangled causes, may be simplified if not all the causes are important for prediction or explanation. A stratagem applied for this purpose consists in the collective modelling of secondary causes by means of random variables.

Example 7.18: The voltage response of any sensor to a non-electrical quantity is the combined effect of many internal and external factors. If the sensor is to be applied for converting a time-invariant non-electrical quantity (x) into a time-invariant voltage (y), then its principal model may have the form of a scalar function $y = f(x)$. Regardless of the accuracy in the determination of this function during sensor calibration, the measured values of y will differ from those predicted because of thermal noise in the sensor – on the one hand – and variation of such environment parameters as temperature, humidity and electromagnetic disturbances – on the other. All those factors may be taken into account in the extended model of the sensor by adding a random variable $\eta$ to $f(x)$: $\tilde{y} = f(x) + \eta$. The probabilistic characteristics of this variable should be chosen in such a way as to make possible a realistic evaluation of the uncertainty of $\tilde{y}$ due to the neglected secondary causes underlying sensor operation.
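The extended sensor model of Example 7.18 can be sketched numerically as follows; the calibration function, the noise level and the environmental-drift term are illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions): extended sensor model y~ = f(x) + eta,
# with eta collectively representing thermal noise and environmental drift.
import numpy as np

rng = np.random.default_rng(4)

def f(x):                        # assumed principal (calibration) model of the sensor
    return 0.05 * x + 0.002 * x**2

def eta(size):                   # secondary causes lumped into one random variable
    thermal = rng.normal(0.0, 1e-3, size)
    drift = rng.uniform(-2e-3, 2e-3, size)   # e.g. temperature/humidity effects
    return thermal + drift

x_true = 20.0                                 # time-invariant measured quantity
y_samples = f(x_true) + eta(10000)            # Monte Carlo simulation of readings
print("mean reading:", y_samples.mean())
print("standard uncertainty of a single reading:", y_samples.std(ddof=1))
```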

The above-exemplified stratagem may be a useful tool for dealing not only with secondary causes but also with some factors conditioning the importance of primary causes. A virus may cause a disease provided the immune system of a patient is weak enough. The strength of this system depends on many

67 A. Potochnik, “Causal Patterns and Adequate Explanations”, Philosophical Studies, 2015, Vol. 172, No. 5, pp. 1163–1182. 68 ibid. 69 R. Silva, “Causality”, [in] Encyclopedia of Machine Learning and Data Mining (Eds. C. Sammut, G. I. Webb), Springer, New York 2017.


factors, and the simplest way to characterise it is by means of random variables. Computational modelling may be viewed as qualitatively different from traditional mathematical modelling, referring to variables and equations, because it enables researchers to incorporate various semantic models which, for practical reasons (the limited "computing power" of human brains), were not considered as candidates for mathematisation.

Example 7.19: For centuries, maps have been used as graphical models of continents, regions, countries, cities, etc. Today they appear in digitised versions, which means that they may be processed by computers, cell phones or other specialised devices. This is feasible because any graphical model, being a type of semantic model, can be easily transformed into a mathematical model being a matrix of pixels, each pixel having the form of a vector of attributes, such as x-y coordinates and intensities of three basic colours, e.g., red, green and blue. Such a mathematical structure may undergo various mathematical operations, such as determination of the distance between two selected points or recognition of rectangular shapes. When implemented in a computer, the mathematical model of a city – for example, of Warsaw – may be used for answering various how- and why-questions, e.g.:
– "How to get from the Central railway station to the main building of Warsaw University of Technology?"
– "Why is the trajectory along the Emilia-Plater street a better option than the trajectory along the Chałubiński street?"
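As a toy illustration of this transformation – with a made-up raster and hypothetical pixel coordinates, not real map data – a digitised map may be held as an array of pixel attributes on which simple geometric operations are defined.

```python
# Minimal sketch (made-up data): a map as a matrix of pixels, each pixel carrying
# RGB intensities; distance between two selected points expressed in metres.
import numpy as np

H, W = 400, 600                             # assumed raster size, 1 pixel = 10 m
rgb = np.zeros((H, W, 3), dtype=np.uint8)   # colour attributes of every pixel
rgb[:, :, 1] = 120                          # e.g. a uniformly green background

# hypothetical pixel coordinates of two landmarks (illustrative, not real data)
station = np.array([120, 300])
university = np.array([310, 450])

pixel_pitch_m = 10.0
distance_m = pixel_pitch_m * np.linalg.norm(university - station)
print("colour at the station pixel:", rgb[tuple(station)])
print(f"straight-line distance: {distance_m:.0f} m")
```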

The authors of a 2013 conference paper70 evaluated four computational models of explanation in Bayesian networks by comparing model-based predictions with human judgments. They concluded that, since humans still outperform artificial systems in inductive judgments, the models may benefit from a closer match to human judgments. According to the American philosopher of science Paul Humphreys:

Computational science introduces new issues into philosophy of science because it uses methods that push humans away from the centre of the epistemological enterprise. [. . .] For an increasing number of fields in science, an exclusively anthropocentric epistemology is no longer appropriate because there now exist superior, non-human, epistemic authorities.71

The widespread use and rapid development of the computational tools for modelling and explanation in science seem to confirm this conclusion. Here is a recent example.

70 M. Pacer, J. Williams, Xi Chen, T. Lombrozo, T. L. Griffiths, “Evaluating Computational Models of Explanation using Human Judgments”, Proc. Conference on Uncertainty in Artificial Intelligence (Bellevue, Washington, USA, July 11–15, 2013). 71 P. Humphreys, “Computational Science and Its Effects”, [in] Science in the Context of Application (Eds. M. Carrier, A. Nordmann), Springer, Dordrecht 2011, pp. 131–142.


Example 7.20: The authors of the 2017 article "Reconciling Solar and Stellar Magnetic Cycles with Nonlinear Dynamo Simulations"72 accomplished an extensive programme of computational experiments which exhibited cycles of magnetic activity of the Sun and other solar-type stars, varying systematically with stellar rotation and luminosity. In this way, they explained a fundamental mechanism which determines the length of these cycles, viz. they found that it is inversely proportional to the Rossby number73, which quantifies the influence of rotation on turbulent convection. The Sun's magnetic poles, for example, flip every 11 years; this is a half-period of the changes in the number of sunspots or levels of radiation, which have been observed for a long time.

7.6 Explanatory pluralism

The Greek economist and philosopher Chrysostomos Mantzavinos (*1968), in his 2016 book Explanatory Pluralism74, argues that the search for a monistic scheme of scientific explanation is a vain enterprise. He shows that not only do the schemes elaborated by Carl G. Hempel, which were supposed to account for "all-and-every" kind of explanation in science, fail to fit the purpose in many important research contexts, but so do the main alternative solutions developed on the wave of criticism of those schemes. The latter conclusion applies, in particular, to:
– Wesley C. Salmon's causal mechanistic scheme, which claims that an explanation consists in the identification of mechanisms understood as entities and activities organised in such a way that they are productive of regular changes from start to termination conditions;
– Philip S. Kitcher's unification scheme, which claims that explanations are deductive arguments that provide understanding by fitting the particular facts and events within a general theoretical framework;
– Bastiaan C. van Fraassen's pragmatic account of explanation, which claims that explanation is not a relationship like that of description, i.e., a relationship between theory and fact, but rather a three-term relationship between theory, fact and context;
– James Woodward's manipulationist account of explanation, which claims, relying on invariant generalisations rather than covering laws, that an

72 A. Strugarek, P. Beaudoin, P. Charbonneau, A. S. Brun, J.-D. do Nascimento, “Reconciling Solar and Stellar Magnetic Cycles with Nonlinear Dynamo Simulations”, Science, 2017, Vol. 357, No. 6347, pp. 185–187. 73 The Rossby number is a dimensionless constant used in describing fluid flow; it is named after the Swedish-American meteorologist Carl-Gustav A. Rossby (1898–1957). 74 C. Mantzavinos, Explanatory Pluralism, Cambridge University Press, Cambridge (UK) 2016.


explanation primarily answers a "what-if-things-had-been-different question", i.e., that an explanation primarily enables us to see what sort of difference it would have made for the explanandum if the factors cited in the explanans had been different in various possible ways;
– Michael Strevens' kairetic scheme, which claims that explanation is a matter of identifying those causal influences on a phenomenon that are relevant to its occurrence, demanding more specifically that the explanation not be missing parts and that every aspect of the causal story represented by the explanatory model make a difference to the causal production of the explanandum.
According to the pragmatic account, ". . . scientific explanation is not (pure) science but an application of science. It is a use of science to satisfy certain of our desires; and these desires are quite specific in a specific context, but they are always desires for descriptive information"75.

Example 7.21: The adequate explanation of the cause of a road accident will depend on the person asking:
– the answer "because the driver was drunk" will be good for a policeman;
– the answer "because the road was not properly profiled" will be good for a civil engineer;
– the answer "because the driver was depressed after a family quarrel" will be good for a psychologist.76
Moreover, the adequate explanation provided in response to the question "Why did Ms. X kill Mr. Y?" will depend on whether the logical accent is placed on "X", on "kill" or on "Y"77.

According to Chrysostomos Mantzavinos, all those explanatory offerings are insufficient since they all share an explicit or implicit appeal to causality. This is an important constraint because not all instances of explanation are causal, and because appeals to causality are often based on various philosophical assumptions which are not generally accepted by research communities. Hence the need for an account of explanation that does not rely on any particular ontological commitment, an account rooted in the observation that:
– scientific explanation is a social project, undertaken by humans throughout history, dedicated to providing answers to why-questions;
– answers to why-questions only make sense relative to particular contexts, but they are everywhere governed by rules.

75 B. C. van Fraassen, The Scientific Image, 1980, p.153. 76 inspired by: A. Grobler, Metodologia nauk, 2006, p. 113. 77 A. Rosenberg, Philosophy of Science – A contemporary introduction, Routledge, New York – Abingdon (UK) 2005, p. 42.


Chrysostomos Mantzavinos introduces the notion of explanatory games with rules and concludes: “Different people play different explanatory games which give rise to different outcomes”78. He provides the following classes of rules: – constitutive rules which enable one to determine what counts as an explanandum, what must be taken as given and what metaphysical presuppositions are allowed; – rules of representation which enable one to determine whether the explanation should appeal to linguistic representations, including mathematical expression, or visual representations such as diagrams or pictures; – rules of inference; – rules of scope, which comprise the instructions about where to apply the explanatory practices of the game and how to apply the game to new phenomena.79 Example 7.22: Chrysostomos Mantzavinos is illustrating his approach with the evolution of medical understanding of the cardio-vascular system from the times of Aelius Galenus (second century) to the times of William Harvey (seventeenth century)80. In the second century: – The constitutive rules included a clear outline of the explanandum (the role of the heart in blood circulation) and a series of metaphysical suppositions concerning the teleological nature of all life, and the notion of pneuma (life-giving spirit). – The rules of representation largely specified direct sensory detection (sight, taste, touch, smell, sound) as these became available during dissection and vivisection. – The rules of inference included those of logic (e.g. consistency) and analogical reasoning, from non-human animals available for dissection and vivisection to humans not usually so available. – The rules of scope limited circulation to a purely mechanical process of ebb and flow, directionless and slow. In the seventeenth century: – The constitutive rules had changed regarding metaphysical suppositions – no longer was it necessary to postulate life-giving spirits. – The rules of representation began to include rich and relatively accurate diagrams based on more readily available human cadavers. These diagrams made clear the presence of valves in the arteries and veins that allowed the blood to flow in one direction only. The rules of representation also accommodated the introduction of diagrams based on the new technology of microscopy. – The basic rules of inference were enhanced with the analogy of the heart as a pump and methods of mathematical inference: William Harvey approximated the amount of blood in a chamber of the heart, imagined some smaller amount of that blood being pumped throughout the body with each beat, and multiplied that number by the number of beats in a minute, an hour, a day. By simple arithmetic, he concluded that the entire system

78 C. Mantzavinos, Explanatory Pluralism, 2016, p. 35. 79 ibid., Section 6.1. 80 ibid., Section 6.3.





must be connected, and a given amount of blood circulated rather than renewed with each beat (a rough arithmetic check is sketched below).
– The rules of scope directed explanatory attention not only to the cardio-vascular system but also to nutrition-related processes.
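Harvey's arithmetic can be reproduced with rough modern figures; the numbers below are illustrative and not those Harvey actually used.

```python
# Rough arithmetic check (illustrative modern values, not Harvey's own figures):
# the blood ejected per day vastly exceeds the total blood volume, so it must circulate.
stroke_volume_l = 0.06        # blood expelled per beat [litres], assumed
beats_per_minute = 70         # assumed resting heart rate
total_blood_l = 5.0           # assumed total blood volume of an adult

ejected_per_day_l = stroke_volume_l * beats_per_minute * 60 * 24
print(f"ejected per day: {ejected_per_day_l:.0f} l "
      f"(about {ejected_per_day_l / total_blood_l:.0f} times the total blood volume)")
```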

Explanatory pluralism seems to be particularly attractive for new interdisciplinary fields of research, especially for cognitive science, which combines the methodological traditions of such different fields as linguistics, psychology, artificial intelligence, philosophy, neuroscience and anthropology. Maria Serban, in her Ph.D. thesis Towards Explanatory Pluralism in Cognitive Science81, has sought to disclose the intricate relationships linking various explanatory frameworks currently used within cognitive science. She has attempted to clarify whether the sorts of scientific explanations proposed for various cognitive phenomena at different levels of analysis or abstraction differ in significant ways from the explanations offered in other areas of scientific inquiry, such as biology, chemistry or physics, i.e., to answer the question whether there is a distinctive feature that characterises cognitive explanations and distinguishes them from the explanatory schemes utilised in other scientific domains82. She has concluded that the class of explanatory frameworks used to investigate cognitive phenomena is very heterogeneous. The multiplicity of explanatory schemes used by practicing cognitive scientists mirrors both the interdisciplinarity of the domain and the variety of cognitive phenomena currently investigated in different branches of cognitive science83. Consequently, the explanatory pluralism encountered within the daily practice of cognitive scientists has an essential normative dimension: it seems to be an appropriate standard for the research methodology of cognitive science. The situation is quite similar in neuroscience, which is also an interdisciplinary field of study, combining the methodological traditions of psychology, biology and computer science84. The "common denominator" of all the paradigms and methodologies of scientific explanation is the use of semantic, mathematical or mixed models of the objects, phenomena and events under study. As Nancy J. Cartwright (*1957) pointed out already more than 30 years ago: "To explain a phenomenon is to find a model that fits into the basic framework of the theory and that thus allows us to derive analogues for the messy and complicated phenomenological laws which are true of it"85. The

81 M. Serban, Towards Explanatory Pluralism in Cognitive Science, Ph.D. Thesis, School of Philosophy, University of East Anglia, Norwich 2014. 82 ibid., p. 6. 83 ibid., p. 40. 84 C. E. Stinson, Cognitive Mechanisms and Computational Models: Explanation in Cognitive Neuroscience, Ph.D. Thesis, 2013. 85 N. Cartwright, How the Laws of Physics Lie, 1983, p. 161.


concept of modelling in this context should be given a broad, inclusive interpretation: as has been demonstrated in Chapters 5–6, any piece of knowledge may be viewed as a model of reality, regardless of whether it refers to a singular object or event or to a broad class of objects or events (scientific laws and theories). That is why a deep and multiaspectual understanding of modelling, especially of mathematical modelling, is a key to the art of scientific explanation.

8 Context of discovery

The title of this chapter and the title of the next chapter refer directly to a fundamental distinction, attributed to Hans Reichenbach1, emphasising the qualitative difference between two stages of scientific work, viz. the generation of hypotheses (in the context of discovery) and their transformation into theories or other pieces of scientific knowledge (in the context of justification). According to some interpretations, the context of discovery, unlike the context of justification, should be a subject of study belonging to psychology and sociology rather than to philosophy of science, because the act of conceiving a new idea is not necessarily a rational process and therefore cannot be regulated2. Alternative interpretations, however, stress the presence of logical components in the process of discovery, which are of interest for philosophers of science. Thus, the context of discovery is an interdisciplinary topic, a subject of reflection whose conclusions may have (and actually do have) a significant impact on technoscientific research.

8.1 Discovery versus invention

The term discovery3 means the act of finding or detecting something unknown, something forgotten or something discarded in the past as meaningless. In science, this may be the detection of a new object (e.g. a star), a new phenomenon (e.g. radioactivity) or a new event (e.g. the appearance of a sunspot) – followed by reasoning aimed at linking this fact with previously acquired knowledge. A new discovery may sometimes be based on earlier discoveries or ideas. Questioning, being a major form of human thought and interpersonal communication, plays a key role in discovery. The questions may come both from within science and from its social environment. That is why some discoveries may have a direct influence both on the understanding of already accumulated knowledge and on the development of new objects, processes or techniques needed by society. Originally, the term discovery referred to the material world, but it started to be applied to abstract entities4 already at the beginning of the twentieth century. It should be noted that the contents of scientific hypotheses, especially in technoscience, may be invented rather than discovered. In general, researchers discover

1 In fact, Reichenbach's view of this issue was not free of ambiguity, cf.: C. Glymour, F. Eberhardt, "Hans Reichenbach", [in] The Stanford Encyclopedia of Philosophy (Ed. E. N. Zalta), Winter 2016 Edition, https://plato.stanford.edu/archives/win2016/entries/reichenbach/ [2017-09-30]. 2 S. Psillos, Philosophy of Science A–Z, 2007, p. 49. 3 from Latin dis- = "opposite of" + cooperire = "to cover up". 4 cf., for example: I. Bratko, "Discovery of Abstract Concepts by a Robot", Proc. 13th International Conference on Discovery Science (Canberra, Australia, October 6–8, 2010), pp. 372–379. https://doi.org/10.1515/9783110584066-008


something existing or invent something non-existing, but the distinction between the two situations is less evident than it may seem at first glance. Some discoveries represent a radical breakthrough in the accumulation of scientific knowledge; today, however, the majority of them appear as the result of conscious or unconscious transformation or adaptation of existing concepts or methods; here the boundary between discovery and invention starts to be blurred5. It should be noted, moreover, that those who believe that scientific laws reflect laws of nature will be inclined to say that they discover them when generating the corresponding hypothesis; those who are agnostic about the relationship between ideas and reality (e.g. instrumentalists) will prefer to speak about invention. In general, the term invention6 means something that has never been made before, or the process of creating something that has never been made before – a unique or novel idea concerning something material (e.g. a device, an instrument, a system, etc.) or something abstract (e.g. a concept, a method or a hypothesis). Therefore, inventions are of particular importance for overall engineering practice, especially inventions in the restricted sense defined by legal acts concerning patents, i.e., objects that are novel, not obvious to experts skilled in the relevant field, and useful or industrially applicable (cf. Section 18.3). In the age of technoscience, because of the convergence of engineering and research methodologies, inventions have become as important for scientific research as they used to be for engineering. Today's scientists invent not only hypotheses but also structures of mathematical models, experimental setups and testing methodologies, mathematical tools for planning and conducting experiments, languages of description, arguments and methods of reasoning, as well as syntheses of technoscientific knowledge. The generation of hypotheses, regardless of whether it is aimed at discovery or invention, is the most creative activity among those specific to the scientific method. It seems appropriate, therefore – from a methodological point of view – to zoom in on the factors conditioning scientific creativity in general.

8.2 Theoretical aspects of technoscientific creativity

Science is creative in much the same way as the arts are. The factors influencing the scientific creativity of an individual researcher may be roughly classified into three categories: (1) his personal traits and predispositions, (2) institutional and organisational arrangements, (3) elements of the social, economic and cultural environment. It should be stressed that generation of hypotheses is not the only creative activity in

5 cf. the Wikipedia article “Discovery (observation)” available at https://en.wikipedia.org/wiki/Dis covery_(observation) [2017-01-02]. 6 from Latin in = “in” + venire = “to come” or “to come upon”.


the research process; both formulating research-related questions and answering them require some creativity: starting from the question "What to study?" and ending up with the question "What and how to publish?" Therefore, what is said in this and in the following sections about creativity applies not only to the generation of hypotheses but also to scientific creativity in general. Human creativity has been a subject of multidisciplinary studies for centuries. It has been approached from various perspectives, and causally explained by numerous theories which have implied diverse models of creative processes. The 2013 comparative review of major theories of creativity7 is devoted to ten approaches, each generating one or more theories. As far as the explanatory and predictive power of scientific knowledge is concerned, four of them – the developmental, economic, cognitive and evolutionary approaches – seem to be the most attractive. It is, however, very probable that pluralistic approaches may turn out to be the most promising.

Example 8.1: Robert J. Sternberg proposed the investment theory8, according to which creativity requires a confluence of six interrelated resources: intellectual skills, knowledge, styles of thinking, personality, motivation and environment.

Human creativity is sometimes viewed as a by-product or correlate of psychic disorders of various intensity, e.g., as a reaction to difficult circumstances or repressed emotions, as a mild symptom of a mental illness, as a psychotic rejection of social or cultural norms and conventions or as a by-effect of addiction to alcohol or illicit drugs9. Although the co-appearance of extraordinary creativity and psychic disorders has been observed and documented quite frequently, there is no evidence that any psychic disorder is a necessary or sufficient condition of human creativity. The most plausible seems to be a hypothesis that both creativity potential and predispositions to psychic disorders are correlated because they have common genetic or environmental substrate10. If the structure of a creative process is concerned, a model proposed by the English social psychologist and educationalist Graham Wallas (1858–1932), in his 1926 book The Art of Thought, still remains very popular11. It comprises four phases: preparation (defining a problem), incubation (setting the problem aside 7 A. Kozbelt, R. A. Beghetto, M. A. Runco, “Theories of Creativity”, [in] The Cambridge Handbook of Creativity (Eds. J. C. Kaufman, R. J. Sternberg), Cambridge University Press, New York 2010, pp. 20–47. 8 R. J. Sternberg, “The Nature of Creativity”, Creativity Research Journal, 2006, Vol. 18, No. 1, pp. 87–98. 9 cf. the Wikipedia article “Creativity and Mental Illness” available at https://en.wikipedia.org/ wiki/Creativity_and_mental_illness [2017-10-17] 10 P. J. Silvia, J. C. Kaufman, “Creativity and Mental Illness”, [in] The Cambridge Handbook of Creativity (Eds. J. C. Kaufman, R. J. Sternberg), Cambridge University Press, New York 2010, pp. 381–394. 11 H. Kanematsu, D. M. Barry, “Theory of Creativity”, [in] STEM and ICT Education in Intelligent Environments, Springer, Cham (Switzerland) 2016, pp. 9–13.


for a period of time), illumination (the time when the new idea appears) and verification (checking out that idea), where the phases of incubation and illumination include subconscious components of a creative process. Some newer works suggest, however, a more complex structure of the creative process than a linear sequence of four phases, e.g., a dynamic combination of several mutually reinforcing sub-processes. One of its versions is the cognitive-spiral model proposed by Edward S. Ebert (*1953) in 199412. This model refers to five modes of thought: perceptual thought, creative thought, inventive thought, metacognitive thought and performance thought. Any creative process is a chain of mental actions, each referring to one of those modes. It is visualised with a spiral rather than with a straight line because, even if the same mode of thought is used several times, the process does not return to its initial state. Psychological studies on the behavioural dispositions of creative scientists suggest that they share such personality traits as confidence, openness, dominance, independence and introversion, as well as arrogance and hostility; as a rule, they have outsider status: they are socially deviant and diverge from the mainstream13. By no means are those dispositions sufficient conditions of scientific productivity; they are only contributing factors which may be amplified or attenuated by numerous life circumstances.

Example 8.2: The author of the article "The Dimensions and Dialectics of Creativity"14 interviewed, in the period 1983–2008, 40 prominent researchers active in the field of social anthropology and 34 major representatives of other academic disciplines, viz. biology, zoology and ethology, physiology and medicine, chemistry and biochemistry, astronomy and cosmology, physics and geophysics, mathematics, computing and technology, history and philosophy of science. Since his purpose was to identify the key life factors influencing creativity, he asked them about the following issues:
– the occupation, temperament and possible effects of grandparents and parents;
– the first hobbies;
– the primary and secondary schools attended, important teachers and classmates, sport and game activities, favourite books and music;
– the profile of higher education completed, important professors, supervisors and mentors;
– the early research experience, collaborators and friends;
– the family life, partners and children;
– the teaching and administrative activity;
– the material infrastructure of work;
– the philosophical, religious and political views and activities.

12 E. S. Ebert, “The Cognitive Spiral: Creative Thinking and Cognitive Processing”, Journal of Creative Behavior, 1994, Vol. 28, No. 4, pp. 275–290. 13 J. Schickore, “Scientific Discovery”, [in] The Stanford Encyclopedia of Philosophy (Ed. E. N. Zalta), Spring 2014 Edition, https://plato.stanford.edu/archives/spr2014/entries/scientific-discovery/ [2017-09-30]. 14 A. MacFarlane, “The Dimensions and Dialectics of Creativity”, Euresis Journal, 2012, Vol. 2, pp. 53–76, www.euresisjournal.org [2017-10-08].


The analysis of the accumulated material has demonstrated that all the above-mentioned factors might have had significant impact on the creativity of the interviewed scientists, although their relative contributions to the creativity potential certainly varied from one person to another.

The exposure to and internalisation of multiple cultures seem to be a factor strongly boosting personal creativity.

Example 8.3: As an illustration of that hypothesis, the overrepresentation of Ashkenazi Jews in the inventories of scientific achievements of the last 150 years is frequently recalled, with special emphasis on the persons having their family roots in Central and Eastern Europe. According to Charles Murray's estimates:
– The ratio of the numbers of actual to expected significant figures of Jewish origin in the combined 1870–1950 scientific inventories is 6:1, with the maximum for mathematics – 12:1; the corresponding ratio for philosophy is even higher – 14:115.
– The corresponding ratio of actual to expected Nobel Prize winners of Jewish origin is 14:1 for medicine and physics16.
The Kaiser Wilhelm Society for the Advancement of Science17 was an umbrella organisation for a number of German research institutions in the period 1911–1946. According to a historical analysis made by Rogers Hollingsworth18, 14 prominent figures of science, including 11 Nobel Prize winners, were associated with those institutions; among them – 11 scientists of Jewish origin. When trying to explain the extraordinary level of accomplishments among Ashkenazi Jews, Charles Murray has indicated both genetic and environmental factors, but he has recognised the extremely high value attached to learning by traditional Jewish families as the only "undisputable" causal factor of this phenomenon19.

When analysing the history of modern science, Rogers Hollingsworth noted that:
– From around 1735 to 1840, France was the world's centre of scientific creativity, especially in the areas of physiology, chemistry, physics and mathematics.
– In the middle of the nineteenth century, the nexus of scientific creativity started to shift to Germany; in the first 11 years of awarding Nobel Prizes, 13 German scientists (many more than from any other country) received awards in chemistry, medicine and physics.
– At the beginning of the twentieth century, the hub began to shift to Great Britain; over the next half century, this country boasted numerous Nobel Prize winners in physics, biology and chemistry.

15 C. Murray, Human Accomplishment, 2003, p. 283. 16 ibid., p. 279. 17 the original German name: Kaiser-Wilhelm-Gesellschaft zur Förderung der Wissenschaften. 18 R. Hollingsworth, “Factors Associated with Scientific Creativity”, Euresis journal, 2012, Vol. 2, pp. 77–112, www.euresisjournal.org [2017-10-08]. 19 C. Murray, Human Accomplishment, 2003, pp. 291–293.


– During the Second World War, the USA took over the leadership; since then, American scientists have received more than half of the most prestigious awards in the biomedical sciences, such as the Nobel, Lasker, Horwitz and Crafoord Prizes; American researchers have dominated scientific journals, accounting for more than 50% of the top 1% of the cited articles.20
When trying to explain the above-outlined tendency, Rogers Hollingsworth indicated a number of organisational factors which facilitate the making of major discoveries, viz.:
– moderately high scientific diversity;
– capacity to recruit scientists who internalise scientific diversity;
– fostered communication and social integration of scientists from different fields;
– capacity to select and support leaders who integrate scientific diversity, are able to understand the direction in which scientific research is moving, provide rigorous criticism in a nurturing environment, have a strategic vision for integrating diverse areas and are able to secure funding to achieve organisational goals;
– flexibility and autonomy associated with loose coupling with the institutional environment.21
The historical overview of science development also enabled Rogers Hollingsworth to identify organisational factors which constrain the making of major discoveries, viz.:
– sharp and rigid boundaries among units of the institution;
– hierarchical and centralised decision-making about research programmes and the number of personnel;
– bureaucratic control over the budget and work conditions;
– bureaucratic management based on advanced standardisation and the use of rules and procedures;
– hyperdiversity making effective communication among actors in different fields of science, or even in similar fields, impossible.22
In a broad social perspective, scientific creativity is driven by human needs. In his famous 1943 article "A Theory of Human Motivation", the American psychologist Abraham H. Maslow (1908–1970) proposed a hierarchy of human needs23. Its basic

20 R. Hollingsworth, “Factors Associated with Scientific Creativity”, 2012. 21 ibid. 22 ibid. 23 A. H. Maslow, “A Theory of Human Motivation”, Psychological Review, 1943, Vol. 50, No. 4, pp. 370–396.


four layers comprised: the need for esteem, the need for friendship and love, the need for security and the physical needs, and – in a later version – also the need of innate curiosity. According to this theory, the lower-level needs must be met before a person will be interested in satisfying the higher-level needs. Creativity seems to be a positive premise for satisfying all the needs; so, creativity may be motivated by all human needs. The major motive in the search for increasingly reliable knowledge has, however, been curiosity, which arises from the unexpected contrasts between what is expected and what is found. In a broad historical perspective, human creativity has also been driven by a desire to find a moral meaning and purpose of life, moral norms governing the world and the rules of ethical behaviour. Of course, there have always been scientists driven by a desire for esteem or a greed for power or money. Moreover, new knowledge has also been needed as a means of education. When summarising the foundations of human creative accomplishments, Charles Murray wrote: "The nature of accomplishment in a given time and place can be predicted with reasonable accuracy given information about that culture's status with regard to the four dimensions of purpose, autonomy, organizing structure, and transcendental goods"24. This summary applies, in particular, to scientific creativity.

8.3 Practical aspects of technoscientific creativity

A famous saying, "there is nothing more practical than a good theory" – attributed to James Clerk Maxwell, Ludwig E. Boltzmann (1844–1906), Albert Einstein and Kurt Lewin (1890–1947) – applies, in a very special way, to scientific creativity. All the pieces of already accumulated knowledge have a potential to provoke new questions, to open new research perspectives and to suggest new solutions – thus the potential to stimulate and support creativity. Scientific laws and theories, as well as mathematical models of material objects and phenomena, may be instrumental at all stages of the research process, but especially in the formulation of research problems and the generation of hypotheses about their possible solutions.

Example 8.4: After sufficiently powerful telescopes were developed in the nineteenth century, astronomers noted that Uranus's orbit departed slightly from the trajectory predicted on the basis of Newtonian mechanics. A hypothesis was put forward that Uranus's motion was being affected by the gravitational field of another, unknown planet. The orbit of this unknown planet was calculated using Newtonian mechanics, and in this way the planet Neptune was discovered, just where it was predicted to be.25

24 C. Murray, Human Accomplishment, 2003, p. 451. 25 R. Johns, “Inference to the Best Explanation”, 2008.


Philosophy of science, as a meta-theory or methodology of developing scientific theories, may also enhance creativity in research practice. It reconstructs successful research practices and finds invariants or regularities, awareness of which may help researchers by giving them a kind of heuristic toolkit. Some authors go even further, saying that the lack of a philosophical background may be a serious disadvantage, especially for those scientists working on innovations that need a change in their way of thinking about the world. The authors of the 2016 article "How philosophy can help in creative thinking"26 argue that philosophy "can help" because of the hermeneutical character of its discourse. They try to demonstrate that even a person who has a good command of a relatively limited set of philosophical categories and rules of reasoning can generate unexpected new ideas when confronted with a new and demanding situation: the "plasticity" of mind, resulting from philosophical education, can become an engine of creativity. Philosophical methods of thinking – they claim – enable that person to successfully confront problem situations in technoscience (or in business) by looking at those problems in a different light or from new angles. Moreover, philosophy seems to be particularly well-equipped to help that person to focus more on hypothetical or "as-if" modes of thinking. It offers a complex apparatus of general concepts and methods of thinking, which can be a starting point for a multitude of novel solutions when combined with problem-specific knowledge. By no means, however, can philosophical approaches to the creative pursuit of knowledge eliminate the unpredictable factors of scientific creativity. The ways leading to discovery may be very diversified and may depend on the disciplines and types of problems under study. On the one hand, a discovery may be the result of a long systematic search focused on a well-defined hypothesis; on the other, it may happen suddenly, as a consequence of a coincidence of favourable circumstances or of an accidental error; Christopher Columbus (1451–1506) was not the only explorer in the history of human civilisation who discovered something else than he had intended to discover. There is even a special term for the phenomenon of unexpected discoveries, i.e., serendipity, which means a "fortunate happenstance" or "pleasant surprise". Recognising that an accidental event or observation can open a new avenue for exploration, researchers may systematically try to improve something by making it more effective or efficient, easier to use or able to serve more purposes, longer lasting or cheaper, more ecologically friendly or more ergonomic, etc. The method of trial and error, the method of analogy, the method of modelling and simulation – these are the most popular techniques supporting inventiveness in

26 G. Hołub, P. Duchliński, “How Philosophy Can Help in Creative Thinking”, Creativity Studies, 2016, Vol. 9, No. 2, pp. 104–115.


applied sciences. Sometimes an inventor is successful because of his disregard for the boundaries between distinct specialties or fields27. Brainstorming28 and synectics29 are best known problem-solving methodologies making use of the psychological determinants of creativity. It is well known that playing may arouse imagination, and therefore stimulate inventiveness. The need to play with things of interest is an internal drive which brings researchers to novel creations. Sometimes they make new inventions or put forward new ideas spontaneously while they are daydreaming, especially when their minds are free from everyday concerns. New ideas can also arise when the conscious mind turns away from the research problem and focuses on something else. It is an empirically confirmed observation that walking, especially mindful walking, is an effective stimulus of creativity30. Similar observations have been made with respect to meditation: neuroimaging studies have confirmed the impact of meditation on brain structure and function, in particular – its influence on activation of brain areas involved in focused problem-solving31. Best confirmed are, however, positive follow-ups of the exposure to music, especially – of musical activity like playing violin or singing. Example 8.5: There are numerous research studies indicating that listening to classical music (especially Mozart’s music) may induce a short-term improvement of certain cognitive and learning capacities32. In light of the results of those studies, the correlation of outstanding scientific achievements with music-related avocations seems to be less surprising. According to the historical analysis made by Rogers Hollingsworth33, among the 14 prominent figures of science, associated with the Kaiser Wilhelm Society for the Advancement of Science, 8 were practising musicians: Albert Einstein, James Franck, Otto Hahn, Hans A. Krebs, Lise Meitner, Otto F. Meyerhof, Max K. E. L. Planck and Axel H. T. Theorell. The German physicist Werner K. Heisenberg, awarded with the 1932 Nobel Prize for his contribution to the creation of quantum mechanics, was an accomplished pianist who needed to touch the instrument every day.

27 cf. the Wikipedia article “Invention” available at https://en.wikipedia.org/wiki/Invention [2018-07-02]. 28 cf. the Wikipedia article “Brainstorming” available at https://en.wikipedia.org/wiki/Brainstorm ing [2018-07-20]. 29 cf. the Wikipedia article “Synectics” available at https://en.wikipedia.org/wiki/Synectics [2018-07-20]. 30 M. Oppezzo, D. L. Schwartz, “Give Your Ideas Some Legs: The Positive Effect of Walking on Creative Thinking”, Journal of Experimental Psychology: Learning, Memory, and Cognition, 2014, Vol. 40, No. 4, pp. 1142–1152. 31 M. Boccia, L. Piccardi, P. Guariglia, “The Meditative Mind: A Comprehensive Meta-Analysis of MRI Studies”, BioMed Research International, 2015, No. 419808, pp. 1–11. 32 W. Verrusio, F. Moscucci, M. Cacciafesta, N. Gueli, “Mozart Effect and its Clinical Applications: A Review”, British Journal of Medicine & Medical Research, 2015, Vol. 8, No. 8, pp. 639–650. 33 R. Hollingsworth, “Factors Associated with Scientific Creativity”, 2012.


Not only musical activities but also some other artistic avocations seem to enrich human cognitive potential: for many scientists, pursuing activities as a painter, writer or poet enhanced their skills in pattern formation and pattern recognition – skills which turned out to be very productive when transferred back and forth between science and art34. One of the conditions of scientific creativity is a synergy of conscious and unconscious mental processes. Here are some practical tips which may help in developing this synergy:
– Any creative process requires thorough preparation comprising both long-term education and ad hoc short-term learning on the subject under consideration. The general and specialist knowledge is necessary not only as the input information for conscious thinking but also as a stimulus for unconscious recuperation and processing of seemingly forgotten information in combination with newly acquired information.
– The diversity of ideas and methodologies of information processing taken into account in a creative process increases the probability of its positive outcome, as does openness to unusual combinations of those ideas and methodologies.
– Verbalisation of ideas, both in written and spoken forms, also fosters the productive use of the unconscious mind because it requires translation of thoughts from the "language of the unconscious mind" to the "language of the conscious mind"; verbalisation not only clarifies ideas, but also leads to new ideas.
– The creative activity requires the ability to play contradictory roles (or assume contradictory attitudes); one should be, at the same time, sceptical about conventions, generally accepted assumptions, new pieces of evidence, etc.; open to new pieces of information, including intuitions coming from the unconscious mind, and tolerant with respect to their uncertainty; sensitive to similarities between seemingly dissimilar things, and to dissimilarities between seemingly similar things; and not afraid of making mistakes.
– The creative activity requires a balanced interplay between concentration on the subject under consideration and its temporary abandonment. Intense conscious rumination over a sought-for solution is a strong stimulus for the unconscious mind. In non-trivial cases, however, it may be fully productive provided it is interrupted with breaks necessary for incubation of creative ideas in the unconscious mind.35

34 ibid. 35 inspired by: G. W. Ladd, Imagination in Research, Iowa State University Press, Ames (USA) 1987.


8.4 Generation of scientific hypotheses

A hypothesis to be generated is, as a rule, a potential solution to a cognitive or practical problem which has been formulated by the sponsor of a research project or by the researcher himself. Thus, the choice of that problem is motivated directly by the sponsor's needs, and indirectly – by the needs of society, or by the intellectual and psychological background of the researcher. In both cases, the researcher's understanding of the problem has a predominant role in the methodological orientation of the research work: defining the scope of facts, events or phenomena to be studied – thus, the scope of observation, measurement and analysis. The determination of this scope is influenced by the theoretical and experimental experience of the researcher, as well as by his not always verbalised preferences. Even more, those factors influence the methods and directions of search for solutions selected by the researcher, and the way he is going to interpret intermediate results of the study. This aspect of the research process was first recognised by the French polymath and philosopher of science Pierre M. M. Duhem (1861–1916), and is known in the literature under the name of theory-ladenness of observation. Since the 1960s, it has quite frequently been illustrated with images which are differently interpreted by an observer depending on his a priori mind-set; the most famous examples are the duck-rabbit image36, the old-young lady ambiguity37 and the what-is-on-a-man's-mind puzzle38. There is no purely perceptual experience, even though its theoretical interpretation is, largely, unconscious. Thomas S. Kuhn and Paul K. Feyerabend pushed the theory-ladenness thesis to its extremes, by arguing that each theory (or paradigm) creates its own realm of experience; it determines the meaning of all terms that occur in it, and there is no neutral language which can be used to evaluate concurrent theories (or paradigms)39.

36 cf. the Wikipedia article “Ambiguous Image” available at https://en.wikipedia.org/wiki/Ambigu ous_image [2019-07-28]. 37 ibid. 38 J. Dean, “What is on a Man’s Mind?” Mighty Optical Illusions, October 30, 2007, https://www. moillusions.com/whats-on-mans-mind-illusion/ [2018-07-23]. 39 S. Psillos, Philosophy of Science A–Z, 2007, p. 170.


worth being emphasised. More and more frequently, this information is processed by software means, using tools of artificial intelligence dedicated to problem-solving tasks. In particular, hypotheses, regarded as potential solutions to predefined problems, are generated by means of computer programs employing methods of heuristic selective search. In this way, some rules are identified, which may next be used for efficiently finding solutions to new problems, without idle exploration of the corresponding problem space40. It should be noted that, when those techniques are used, the distinction between the context of discovery and the context of justification becomes fuzzy, because the methodology of discovery at least partially also plays a justificatory role41.
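What heuristic selective search over a problem space may amount to can be hinted at with a toy example; the hypothesis space (subsets of basis functions explaining simulated data), the scoring heuristic and the search budget below are all illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions): heuristic selective search over a space
# of candidate hypotheses (subsets of basis functions explaining measured data),
# expanding only the most promising candidates instead of all 2^n subsets.
import numpy as np

rng = np.random.default_rng(5)
x = np.linspace(0, 1, 50)
data = 2.0 * x + np.sin(6 * x) + rng.normal(0, 0.05, x.size)   # "observations"

basis = {"1": np.ones_like(x), "x": x, "x^2": x**2,
         "sin6x": np.sin(6 * x), "cos6x": np.cos(6 * x)}

def score(terms):
    # heuristic: residual error plus a penalty for hypothesis complexity
    A = np.column_stack([basis[k] for k in terms])
    resid = data - A @ np.linalg.lstsq(A, data, rcond=None)[0]
    return np.sqrt(np.mean(resid**2)) + 0.02 * len(terms)

beam = [()]                                    # start from the empty hypothesis
for _ in range(3):                             # expand at most three times
    candidates = {tuple(sorted(set(h) | {k})) for h in beam for k in basis}
    beam = sorted(candidates, key=score)[:2]   # keep only the two best hypotheses
print("preselected hypothesis:", beam[0], " score:", round(score(beam[0]), 3))
```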

8.5 Preselection of scientific hypotheses
The American philosopher of science Norwood R. Hanson, in his 1958 book Patterns of Discovery, argued that the act of discovery – in particular, the act of suggesting a new hypothesis – follows a specific logical pattern which is different from both inductive and deductive reasoning. This special logic of discovery is the logic of abductive or "retroductive" inferences. The argument that it is through an act of abductive inferences that plausible, promising scientific hypotheses are devised goes back to Charles S. Peirce42. The distinction between deduction, on the one hand, and induction and abduction, on the other, corresponds to the distinction between necessary and non-necessary inferences (cf. Section 2.2). In deductive inferences, what is inferred is necessarily true if the premises from which it is inferred are true. In inductive inferences, a conclusion is reached on the basis of statistical information; in abductive inferences a conclusion is reached on the basis of explanatory considerations43. Charles S. Peirce introduced the term abduction for a research operation comprising the generation and preselection of hypotheses for testing. According to the American philosopher of science William H. B. McAuliffe, the concept of abduction has been distorted by contemporary philosophers of science, who mistakenly claim that it is a conceptual precursor to a kind of inference to the best explanation44, discussed in the next chapter. It should be noted that the latter is supposed to be the last stage of inquiry, whereas abduction corresponds to the first stage of inquiry. According to Charles S. Peirce, the hypotheses preselected for

40 J. Schickore, “Scientific Discovery”, 2014, Subsection 6.2. 41 ibid., Section 8. 42 ibid., Subsection 6.1. 43 I. Douven, “Abduction and Inference to the Best Explanation”, [in] Encyclopedia of Philosophy and the Social Sciences (Ed. B. Kaldis), Sage, Thousand Oaks (USA) 2013, pp. 2–4. 44 W. H. B. McAuliffe, “How did Abduction Get Confused with Inference to the Best Explanation?” Transactions of the Charles S. Peirce Society, 2015, Vol. 51, No. 3, pp. 300–319.



testing should be experimentally verifiable, and should potentially explain the facts in question. Moreover, some economic aspects (related to time and money) should be taken into account, i.e., priority should be given to hypotheses which are unlikely but (if false) can be refuted quickly, or are inexpensive to test, or are easily interpretable and relevant to a wide range of phenomena, and whose falsification would rule out entire classes of hypotheses to which they belong. Thus, for Charles S. Peirce, the term abduction did not quite mean what we mean by it nowadays. For him, abduction had its proper place in the context of discovery, the stage of inquiry in which we try to generate hypotheses which may later be assessed. In particular, he saw abduction as a guided process of forming hypotheses, where explanatory considerations serve as the main guide45. Here this concept will be used in the modern sense, as a method of reasoning leading to the selection of the best hypothesis (or the best subset of hypotheses). This interpretation of the concept of abduction is omnipresent today; it appears in the renowned encyclopaedias of philosophy, such as The Stanford Encyclopedia of Philosophy, The Oxford Companion to Philosophy or The Routledge Companion to Philosophy, as well as in the monographs on inference to the best explanation, such as the 2004 book Inference to the Best Explanation by Peter Lipton.
Example 8.6: Let's consider an electromagnetic signal s(t) from a distant star whose discrete values have been captured by means of a very sensitive measuring instrument: ..., s̃₀ ≅ s(0), s̃₁ ≅ s(Δt), s̃₂ ≅ s(2Δt), ..., where t is a real-valued variable modelling time, and Δt is a sampling interval. The signal s(t) is supposed to carry information on some nuclear processes taking place on the star. This information is coded mainly in the coordinates of the maxima of s(t). Under the assumption that the measurement errors corrupting the data, ..., s̃₀ − s(0), s̃₁ − s(Δt), s̃₂ − s(2Δt), ..., are negligible, one may try to reconstruct the function s(t) by means of an algorithm of interpolation, and next estimate those coordinates. The first question to be answered is related to the mathematical structure of the interpolating function ŝ(t). When having some a priori information about the source of the signal s(t), including mathematical models of the relevant phenomena or processes, one may postulate ŝ(t) having the form of a superposition or a functional of those models. In this case, the interpolating function ŝ(t) may be nonlinear with respect to its parameters, but their number may be relatively limited and independent of the number of available data. A good example of such ŝ(t) is a linear combination of shifted and scaled gaussoids – a very simple structure suitable for identifying maxima of the interpolated function:
$$\hat{s}(t) = \sum_{n=1}^{N} p_{n,1} \exp\left[-p_{n,2}\,(t - p_{n,3})^2\right]$$

45 I. Douven, “Abduction and Inference to the Best Explanation”, 2013.



where p_{n,1}, p_{n,2} > 0 and p_{n,3} > 0 are the parameters to be determined on the basis of the available data in such a way as to satisfy the interpolation conditions: ..., s̃₀ = ŝ(0), s̃₁ = ŝ(Δt), s̃₂ = ŝ(2Δt), ... Otherwise, if no model of the source of the signal s(t) is available, one has to try various general-purpose structures whose approximation power has been confirmed by numerical practice. As a rule, linear combinations of the linearly independent functions ..., φ₀(t), φ₁(t), φ₂(t), ..., called basis functions, are tried first:
$$\hat{s}(t) = \sum_{n} p_n \varphi_n(t)$$
where ..., p₀, p₁, p₂, ... are parameters whose values are to be determined on the basis of the data ..., s̃₀, s̃₁, s̃₂, ... in such a way as to satisfy the interpolation conditions. The abduction-type considerations – taking into account the approximation power of various sets of functions ..., φ₀(t), φ₁(t), φ₂(t), ... and the complexity of calculations – may give priority to the following sets of functions:
$$\varphi_n(t) \equiv \prod_{\nu,\, \nu \neq n} \frac{t/\Delta t - \nu}{n - \nu} \qquad \text{– the hypothesis } H_1$$
$$\varphi_n(t) = B_3\!\left(\frac{t - n\Delta t}{\Delta t}\right) \qquad \text{– the hypothesis } H_2$$
$$\varphi_n(t) = \mathrm{sinc}\!\left(\frac{t - n\Delta t}{\Delta t}\right) \qquad \text{– the hypothesis } H_3$$
where B₃(·) is the elementary cubic B-spline, and sinc(t) ≡ sin(πt)/(πt) is a function called sinus cardinalis or the sinc function. The hypothesis H1 is equivalent to the polynomial interpolation of the data sequence by means of the Lagrange formula; in this case: ..., p₀ = s̃₀, p₁ = s̃₁, p₂ = s̃₂, ... The hypothesis H2 is equivalent to the cubic-spline interpolation; in this case the parameters ..., p₀, p₁, p₂, ... are determined by solving a system of linear algebraic equations. The hypothesis H3 is equivalent to the sinc-function interpolation by means of the Whittaker–Shannon formula; in this case: ..., p₀ = s̃₀, p₁ = s̃₁, p₂ = s̃₂, ...
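A minimal numerical sketch of the three candidate structures may look as follows (Python with NumPy and SciPy is used purely for illustration; the reference signal s(t), the sampling interval and the number of samples are invented here, since the signal of the example is not specified analytically):

import numpy as np
from scipy.interpolate import lagrange, CubicSpline

# Invented reference signal with two maxima (an assumption made only for this sketch).
def s(t):
    return 1.5 * np.exp(-0.5 * (t - 4.0) ** 2) + 1.0 * np.exp(-0.8 * (t - 9.0) ** 2)

dt = 2.0                         # assumed sampling interval
t_n = np.arange(8) * dt          # 8 samples, as in Figure 9.1
s_n = s(t_n)                     # noise-free data ~s_n
t = np.linspace(0.0, t_n[-1], 500)

# H1: polynomial interpolation (Lagrange formula)
s_H1 = lagrange(t_n, s_n)(t)

# H2: cubic-spline interpolation (natural boundary conditions assumed here)
s_H2 = CubicSpline(t_n, s_n, bc_type="natural")(t)

# H3: truncated Whittaker-Shannon (sinc-function) interpolation
s_H3 = sum(sk * np.sinc((t - tk) / dt) for sk, tk in zip(s_n, t_n))

for name, s_hat in [("H1", s_H1), ("H2", s_H2), ("H3", s_H3)]:
    print(f"{name}: max deviation from s(t) = {np.max(np.abs(s_hat - s(t))):.3f}")

Such a sketch merely reproduces the qualitative behaviour discussed in this example; with only eight samples, all three structures may be expected to miss some maxima of s(t).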

Abduction is a kind of generalisation of the principle of parsimony, also called Ockham's razor, which was first postulated by the medieval English philosopher and theologian William of Ockham, and concisely expressed by means of the Latin sentence "Entia non sunt multiplicanda praeter necessitatem" (which means "Entities should not be multiplied without necessity")46. According to this principle, when choosing among alternative hypotheses, one should opt for the simplest of those that have a comparable explanatory power, and discard assumptions that do not improve it. The key difficulty related to the use of Ockham's razor is the definition of

46 Most probably this principle was not authored by William of Ockham; cf. E. D. Buckner, "The Myth of Ockham's Razor", The Logic Museum, 2006, http://www.logicmuseum.com/authors/other/mythofockham.htm [2018-07-27].



the "measure of simplicity" serving as a criterion for the comparison of alternative hypotheses. Although various measures of simplicity have been put forward as potential candidates, it is generally recognised that there is no such thing as a problem-independent measure of simplicity. In other words, there appear to be as many different measures of simplicity as there are problems to be hypothesised about, and the task of choosing among them appears to be as problematic as the comparison of hypotheses itself. Simplicity is not the only criterion guiding the process of hypothesis selection; the potential application of the selected hypothesis is equally universal and important. This observation becomes clear if the problem of hypothesis selection is reformulated, in the language of mathematical modelling, as the structural identification of a model satisfying certain cognitive or practical needs.
Example 8.7: Structural identification of the mathematical model of a resistor consists in choosing an equation describing the relationship between the electrical current flowing through this resistor (modelled with the scalar variable i) and the corresponding voltage on it (modelled with the scalar variable u). The simplest algebraic structure – u = Ri, where R is resistance – is sufficient for an analysis of the static or low-frequency behaviour of a high-precision resistor. As already mentioned in Example 5.4, for the analysis of the static or low-frequency behaviour of a low-precision resistor, an algebraic nonlinear model may turn out to be necessary, and for the analysis of its high-frequency behaviour, a model having the form of a system of nonlinear ordinary differential equations should be used. Further extension of the model may be justified if the model is expected to reflect not only electrical but also mechanical and thermal phenomena.
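By way of illustration only, the following sketch fits two candidate structures – the linear model u = Ri and a cubic extension u = R1·i + R3·i³ – to simulated current–voltage data and compares their residuals; all numerical values are invented, and Python/NumPy serves merely as a convenient notation:

import numpy as np

rng = np.random.default_rng(0)

# Simulated measurements of a slightly nonlinear resistor (all values invented)
i = np.linspace(-1.0, 1.0, 21)                      # current [A]
u = 100.0 * i + 5.0 * i ** 3 \
    + rng.normal(scale=0.2, size=i.size)            # voltage [V] with measurement noise

# Candidate structure 1: u = R * i (one parameter, least-squares estimate of R)
R = np.sum(u * i) / np.sum(i * i)
rms1 = np.sqrt(np.mean((u - R * i) ** 2))

# Candidate structure 2: u = R1 * i + R3 * i**3 (two parameters)
A = np.column_stack([i, i ** 3])
p, _, _, _ = np.linalg.lstsq(A, u, rcond=None)
rms2 = np.sqrt(np.mean((u - A @ p) ** 2))

print(f"u = R*i           : R = {R:.2f} ohm, RMS residual = {rms1:.3f} V")
print(f"u = R1*i + R3*i^3 : R1 = {p[0]:.2f} ohm, R3 = {p[1]:.2f}, RMS residual = {rms2:.3f} V")

Whether the decrease in the residual justifies the more complex structure is precisely the kind of trade-off addressed by the principle of parsimony.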

The process of hypothesis selection may be automated, and may yield a solution well serving the purpose, provided that a sufficient amount of a priori information on the class of admissible solutions is available: the more non-redundant information at hand, the better – both in the case of qualitative and quantitative hypotheses. This is a general guiding principle underlying the methodology for solving inverse problems, applied for more than 60 years in numerical methods, physics and measurement science, but only recently noticed by philosophers of science47.

47 cf., for example, I. Niiniluoto, “Abduction, Tomography, and Other Inverse Problems”, Studies in History and Philosophy of Science, 2011, Vol. 42, pp. 135–139.

9 Context of justification

9.1 Preliminary considerations

9.1.1 Basic concepts
Since the times of Plato, knowledge has been understood as justified true belief. So, its practical definition depends on what is considered to be true, and on what is considered to be justified. On the one hand, we refer to at least five co-existing definitions of truth (the correspondence, coherence, consensus, evidence and pragmatic definitions of truth – cf. Subsection 2.4.1); on the other, we use various patterns of justification, such as verification, validation, falsification, substantiation, corroboration and proof. In the dictionaries of ordinary English, we may find the following definitions of those concepts:
– Verification1 is the act of establishing the truth, accuracy or validity of something (a claim, a statement, etc.).
– Validation2 is the act of checking or proving the value, correctness or accuracy of something (a claim, a statement, etc.).
– Falsification3 is the act of establishing the falsity of something (a claim, a statement, etc.).
– Substantiation4 is the act of showing something (a claim, a statement, etc.) to be true or supported by facts.
– Corroboration5 is the act of supporting something (a claim, a statement, etc.) with evidence or authority, or making it more certain.
– Proof6 is the act of testing or making trial of something (a claim, a statement, etc.).
Research practitioners use all the above-defined concepts in their publications to make the readers believe in the credibility or importance of their findings. Philosophers of science are, as a rule, more selective: they give priority to some of them and avoid others. Their preferences, as well as the precise definitions of the preferred concepts, depend on their ontological and epistemological orientation. Both research practitioners and philosophers of science seem to use the term verification more frequently than the other terms listed above – at least during the last 100 years. This may be attributed, at least to some extent, to the impact of logical positivism on intellectual

1 from Latin veritas = "truth". 2 from Latin validus = "strong" or "worthy". 3 from Latin falsus = "false". 4 from Latin substantia = "being" or "essence". 5 from Latin corroborare = "to strengthen" or "to invigorate". 6 from Latin probare = "to prove".




life of the twentieth century. Influenced by the Vienna Circle, the English philosopher Alfred J. Ayer (1910–1989), in his book Language, Truth and Logic (1936), proposed the so-called verification principle of meaning, according to which a scientific statement is significant only if it is a statement of logic or if it can be verified by experience, i.e., if some empirical data can be used to determine its veracity or falsity. Statements that do not meet this criterion (e.g. religious, metaphysical, ethical or aesthetic statements) are not verifiable and, consequently, meaningless. The critics of Ayer's verification principle point out that it is not analytic (because it cannot be derived from logic), and it is not empirical either (because it cannot be empirically discovered or verified); so, it is meaningless. Thus, the analysis of the verification principle may be used to refute this principle itself. Research practitioners are, as a rule, less strict than Alfred J. Ayer in their understanding of verification, and sometimes even forget about its main weak point – the unreliability implied by the inductive reasoning it is based upon. Inferring that a hypothesis is correct because predictions made on the basis of this hypothesis are accurate is a kind of inductive reasoning which can at best provide support for this hypothesis, but can never show that it is definitely correct7. In the case of non-trivial hypotheses, a finite number of predictions does increase the probability of their correctness, but not to the level of certainty. Another limitation of the verification methodologies is the theory-dependence of observations and measurements: a researcher is actively involved in the selection of what is to be observed and measured; therefore, the acquired data depend on his understanding of the piece of reality under study – an understanding formed by paradigms and theories considered to be relevant and valid. Despite all the criticism of induction, developed by philosophers of science since the times of David Hume, research practitioners continue to verify their findings by means of inductive reasoning; the philosophical criticism, however, makes them more sensitive to the epistemological limitations of this tool. The criticism directed towards verification applies to validation, substantiation, corroboration and proof, except for the proofs of statements whose contents refer exclusively to mathematical models of reality, and not directly to reality itself. Such statements are quite frequently formulated and discussed in some domains of physics or cybernetics. They are proved by deductive means and, therefore, are "immune" to objections raised against induction. In general, however, inductive verification consists in the repetition of experiments aimed at testing a hypothesis, each concluded with the following reasoning:
Premise #1: ∀x: P(x) → Q(x)  [a hypothesis under test]
Premise #2: P(c) ∧ Q(c)  [a piece of evidence]
Conclusion #1: P(c) → Q(c)  [a statement implied by Premise #2]
Conclusion #2: ∃x: P(x) → Q(x)  [existential generalisation of Conclusion #1]

7 R. DeWitt, “Philosophies of the Sciences: A Guide”, 2010, p. 12.



There is no general recipe concerning the number of repetitions; the expectations are domain-specific: in palaeontology, e.g., finding a second bone may be considered sufficient, while the use of 100 samples in spectrophotometric analysis of food may be viewed as risky. In many domains of technoscientific research, falsification is an important tool of hypothesis testing and selection. It is based on deductive reasoning, and therefore "immune" to the objections raised against inductive verification. The falsification of a hypothesis consists in designing an experiment which ends with a negative result. The corresponding scheme of deductive reasoning is as follows:
Premise #1: ∀x: P(x) → Q(x)  [a hypothesis under test]
Premise #2: P(c) ∧ ¬Q(c)  [a piece of evidence]
Conclusion #1: [P(c) → Q(c)] ∧ ¬Q(c) ⇒ ¬P(c)  [modus tollens rule]
Conclusion #2: ¬[∀x: P(x) → Q(x)]  [existential generalisation of Conclusion #1]

The main difficulty associated with this method of testing is the uncertainty about the reason for the conclusion that an expected event or phenomenon is not observed: the reason may be not only the falsehood of the tested hypothesis but also the incorrectness of some auxiliary assumptions, e.g., assumptions concerning the operation of instrumentation used in the experimental setup for the acquisition of empirical data8.
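The two reasoning schemes presented above can be caricatured in a few lines of code: a universal hypothesis ∀x: P(x) → Q(x) gathers inductive support from every instance satisfying both P and Q, and is refuted by a single instance satisfying P but not Q. The predicates and the pieces of evidence below are, of course, invented for the purpose of this sketch (written in Python):

# Hypothesis under test (invented for illustration):
# P(x): "x is a sample of water heated to at least 100 degrees C at normal pressure"
# Q(x): "x is boiling"
def P(x):
    return x["substance"] == "water" and x["temperature_C"] >= 100

def Q(x):
    return x["boiling"]

evidence = [
    {"substance": "water", "temperature_C": 100, "boiling": True},
    {"substance": "water", "temperature_C": 105, "boiling": True},
    {"substance": "oil",   "temperature_C": 100, "boiling": False},  # irrelevant: P(x) is false
    {"substance": "water", "temperature_C": 101, "boiling": True},
]

support = [x for x in evidence if P(x) and Q(x)]               # verification instances
counterexamples = [x for x in evidence if P(x) and not Q(x)]   # falsification instances

if counterexamples:
    print("Hypothesis falsified by:", counterexamples[0])
else:
    print(f"Hypothesis not falsified; {len(support)} confirming instance(s) found "
          "(inductive support only, not proof).")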

9.1.2 Introductory example
This subsection is entirely devoted to a problem whose solution has been initiated in Example 8.6. This problem consists in the interpretation of the data ..., s̃₀ ≅ s(0), s̃₁ ≅ s(Δt), s̃₂ ≅ s(2Δt), ..., representative of an electromagnetic signal S, under the assumption that it can be adequately modelled with a scalar real-valued function s(t), where t is a scalar real-valued variable modelling time. Three functions of the form
$$\hat{s}(t) = \sum_{n} p_n \varphi_n(t)$$
have been preselected in Example 8.6 for the interpolation of s(t), and consequently three hypotheses concerning the structure of ŝ(t) – H1, H2 and H3 – have been preselected for testing on the basis of abduction-type reasoning. In Figure 9.1, the exact shape of s(t) is shown together with four examples of interpolating functions

8 ibid., p. 18.



Figure 9.1: Reference function s(t) (black line) and four interpolating functions obtained on the basis of eight samples of s(t) (circles): by means of the Lagrange formula of polynomial interpolation (green line), by means of the cubic-spline interpolation formula – with realistic boundary conditions (blue solid line) and with false boundary conditions (blue dashed line) – and by means of the Whittaker–Shannon interpolation formula (red line). (Axes: time [s] vs. signal magnitude [au].)

for N = 7. All those functions fail to reproduce the informative details (i.e. the maxima) of s(t), but two of them – the cubic-spline function, obtained on the basis of realistic boundary conditions, and the function obtained by means of the Whittaker–Shannon interpolation formula – are similar. This is a hint (but not a proof) that H1 should be discarded, i.e., further testing should be confined to H2 and H3 with carefully selected a priori information: the estimates of the boundary values of the first derivative of s(t) in the case of H2, and a sufficiently small sampling step Δt in both cases. If the Fourier spectrum of s(t) is a priori known to be zero for frequencies higher than fMAX, then the Whittaker–Shannon interpolation formula enables one to exactly reconstruct the signal s(t) on the basis of an infinite number of its samples, provided the sampling step Δt satisfies the inequality 2 fMAX Δt < 1. In practice, only the signal samples representative of a finite interval of time [0, NΔt] are available, and therefore the accuracy of signal reconstruction depends not only on the value of Δt (and on the selected formula of interpolation) but also on the number of samples N + 1. In the case of H3, one needs at least 30 samples to obtain an interpolating function reproducing correctly all the maxima of s(t); this is shown in Figure 9.2. The use of the cubic-spline interpolation formula (H2) yields only slightly worse results, as is also shown in this figure. The superiority of H3 over H2 may be easily ascertained if the reference function s(t) is available. In practice, however, this is not the case; thus, some other criteria must be applied. When having the possibility of unconstrained increase in N, one



Figure 9.2: Reference function s(t) (black line) and the interpolating functions obtained on the basis of 30 samples of s(t) (circles) by means of the Whittaker–Shannon interpolation formula (dotted red line) and by means of the cubic-spline interpolation formula with realistic boundary conditions (blue line). (Axes: time [s] vs. signal magnitude [au].)

may monitor the rate of stabilisation of the shape of the interpolating functions corresponding to the consecutive values of N: this rate is higher for H3 than for H2. Otherwise, one can check the accuracy of signal reconstruction based on the compared interpolating functions. The corresponding procedure for N = 2N′ would comprise the following steps:
– An estimate ŝ(t) of s(t) is obtained on the basis of the subsequence of data $\{\tilde{s}_{2\nu} \mid \nu = 0, 1, \ldots, N'\}$ by means of the tested method of interpolation.
– The uncertainty of the estimate ŝ(t) is evaluated using a norm of the difference between two subsequences: $\{\hat{s}((2\nu+1)\Delta t) \mid \nu = 0, 1, \ldots, N'-1\}$ and $\{\tilde{s}_{2\nu+1} \mid \nu = 0, 1, \ldots, N'-1\}$.
The smaller the above-defined measure of estimation uncertainty, the better; in the considered case, it is smaller for H3 than for H2. Up to now, the measurement errors in the data have been neglected. Both numerical experiments just presented could be repeated with the data disturbed with pseudorandom numbers modelling random errors. In this way, the propagation of measurement uncertainty to the result of signal reconstruction could be studied using statistical tools. The summary conclusions are as follows:
– H1 should be rejected because the interpolating polynomial changes drastically with the number of available data and diverges from s(t); so, reliable estimation of the values of s(t) between the nodes of interpolation is impossible.

170

9 Context of justification

– If the appropriate a priori information is available, then both H2 and H3 receive a similar level of confirmation from the data, since the approximating functions steadily converge to s(t) when the number of available data is increasing; so, in both cases, the estimation of the values of s(t) between the nodes of interpolation is possible.
– The statistical tests with H2 and H3 show that the results obtained by both methods of interpolation are similarly sensitive to the random errors in the data. Thus, the same data may confirm two different hypotheses H2 and H3.
– Since the approximating functions obtained by means of the Whittaker–Shannon formula converge with N to s(t) more quickly than the corresponding spline functions, the choice of H3 is better justified than the choice of H2. Moreover, the computation of the parameters ..., p₀, p₁, p₂, ... is simpler in the case of H3 than in the case of H2, and – as shown in Figure 9.1 – the result of interpolation based on H2 is sensitive to the a priori information on the boundary values of the first derivative.
The set of hypotheses analysed in this example is only a small fraction of the hypotheses that could reasonably be taken into account in a real-world case, especially if machine-learning tools are included in the consideration. In some practical cases, the structure of the model to be identified may be derived from the scientific laws describing natural phenomena contributing to the generation of the signal S. In some other cases, the simplest solution may consist in the implementation of universal approximators, such as artificial neural networks, that would be trained using the subsequence of data $\{\tilde{s}_{2\nu} \mid \nu = 0, 1, \ldots, N'\}$ and validated using the subsequence of data $\{\tilde{s}_{2\nu+1} \mid \nu = 0, 1, \ldots, N'-1\}$. If the signal S is periodic or may be detected and sampled more than once, then iterative improvements in the model are possible, based on the increase in the number of non-redundant data.
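A minimal sketch of the even/odd validation procedure described above – under the same invented signal and sampling assumptions as in the earlier sketch – may look as follows; it reconstructs the signal from the even-indexed samples (H2 and H3) and evaluates the reconstruction on the withheld odd-indexed samples:

import numpy as np
from scipy.interpolate import CubicSpline

def s(t):  # invented reference signal, as in the earlier sketch
    return 1.5 * np.exp(-0.5 * (t - 4.0) ** 2) + 1.0 * np.exp(-0.8 * (t - 9.0) ** 2)

dt, N = 0.5, 30                         # assumed sampling step and number of samples
t_all = np.arange(N + 1) * dt
s_all = s(t_all)

t_even, s_even = t_all[0::2], s_all[0::2]   # data used for interpolation
t_odd,  s_odd  = t_all[1::2], s_all[1::2]   # data withheld for validation

# H2: cubic-spline interpolation built on the even-indexed samples
h2 = CubicSpline(t_even, s_even)(t_odd)

# H3: truncated Whittaker-Shannon interpolation built on the even-indexed samples
dt2 = 2 * dt                                # effective sampling step of the even subsequence
h3 = sum(sk * np.sinc((t_odd - tk) / dt2) for sk, tk in zip(s_even, t_even))

for name, pred in [("H2 (spline)", h2), ("H3 (sinc)", h3)]:
    print(f"{name}: validation error (Euclidean norm) = {np.linalg.norm(pred - s_odd):.4f}")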

9.2 Underdetermination and Duhem–Quine thesis
The multitude of alternative hypotheses concerning the problem discussed in the previous section is not an exception, but a regularity called underdetermination of theories by empirical data. The idea of underdetermination is obvious in linear algebra: a system of N − ΔN equations with N unknowns $x_1, \ldots, x_N$ has infinitely many solutions for ΔN = 1, ..., N − 1. Their number may be, however, reduced by adding certain conditions they should satisfy – conditions derived from the available knowledge about the physical (chemical, biological, technical, etc.) problem modelled by the system of algebraic equations. Some conditions may enable one to choose a single solution.



Example 9.1: The equation $x_1 + x_2 = 1$ has infinitely many real-valued solutions. Their number may be significantly reduced by adding an assumption that both $x_1$ and $x_2$ must be integer-valued. A unique solution, $x_1 = 0.5$ and $x_2 = 0.5$, may be selected by minimising a unimodal criterion, e.g., the function $x_1^2 + x_2^2$, derived from energetic considerations.
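A sketch of this selection mechanism (SciPy is used only for illustration): among the infinitely many real-valued solutions of x1 + x2 = 1, the unique solution minimising x1² + x2² may be computed with a standard constrained optimiser, while the integer-valued restriction mentioned in the example reduces the solution set in a different way:

import numpy as np
from scipy.optimize import minimize

# Underdetermined problem: one equation, two unknowns
constraint = {"type": "eq", "fun": lambda x: x[0] + x[1] - 1.0}

# Additional (unimodal) criterion, e.g. derived from energetic considerations
result = minimize(lambda x: x[0] ** 2 + x[1] ** 2,
                  x0=np.array([1.0, 0.0]),
                  constraints=[constraint])

print("Selected solution:", np.round(result.x, 6))   # -> [0.5 0.5]

# The integer-valued restriction reduces the solution set differently:
# (0, 1), (1, 0), (2, -1), ... remain admissible.
int_solutions = [(k, 1 - k) for k in range(-2, 3)]
print("A few integer-valued solutions:", int_solutions)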

According to the French philosopher of science Pierre M. M. Duhem and the American philosopher and logician Willard V. O. Quine (1908–2000), it is impossible to test a theory in isolation. One must always add auxiliary hypotheses in order to make testable predictions, in particular – assumptions concerning the correct functioning of the instruments used for experimentation9. The empirical data are not sufficient to choose among competitive hypotheses. So, one can neither conclusively prove nor falsify a general hypothesis on the basis of the results of observation or measurement10.
Example 9.2: Most (arguably all) of the straightforward observational evidence is compatible both with the geocentric model of the solar system proposed by Tyge O. Brahe and with the heliocentric model of that system worked out by Johannes Kepler11.

Nothing that we are likely to characterise as a "single scientific theory" – Newton's mechanics or Maxwell's theory of electromagnetism – has any empirical consequences when considered "in isolation", i.e., without auxiliary assumptions12. Underdetermination of theories by evidence means that the belief in a theory is never warranted by the evidence. It is sometimes argued that for any theory we could think of an alternative theory which would entail exactly the same observational consequences under any circumstances13.
Example 9.3: Testing Newton's theory (of mechanics and universal gravitation) by comparing the predictions about the positions of a planet, based on that theory, with the experimental data requires not only the initial position of that planet but also a whole set of other assumptions, including:
– the mass of the planet concerned,
– the masses of the other bodies in the solar system,
– certain characteristics of light propagation between the planet concerned and the telescope used for making observations.14

9 ibid., p. 19. 10 J. Worrall, “Philosophy of Science: Classic Debates, Standard Problems, Future Prospects”, 2002. 11 R. DeWitt, “Philosophies of the Sciences: A Guide”, 2010, p. 19. 12 J. Worrall, “Philosophy of Science: Classic Debates, Standard Problems, Future Prospects”, 2002. 13 S. Psillos, Philosophy of Science A–Z, 2007, p. 254. 14 J. Worrall, “Philosophy of Science: Classic Debates, Standard Problems, Future Prospects”, 2002.



When trying to falsify a hypothesis H, using a piece of experimental evidence E and auxiliary assumptions A1, ..., AN, a researcher is confronted with the following logical problem: the result of inference from the conjunction H ∧ A1 ∧ ... ∧ AN may be inconsistent with E not only if H is false, but also if H is true and at least one of the auxiliary assumptions A1, ..., AN is false. In most situations in which the evidence E, predicted on the basis of H ∧ A1 ∧ ... ∧ AN, does not occur, there are countless explanations, other than the falsity of H, for why the expected event or phenomenon was not observed15. To clarify the situation, the researcher must find independent grounds for believing that A1, ..., AN are more likely to be true than H is. If so, then the falsity of the inference from H ∧ A1 ∧ ... ∧ AN would supply good grounds for rejecting H16. Moreover, it is far from clear that it is possible to acquire experimental data in a theory-independent way. The results of observations and measurements, when interpreted in the light of scientific theories, are said to be theory-laden. More importantly, most of them must be collected within a theoretical context in order to be useful. For example, when one observes an increase in temperature with a thermometer, that observation is based on assumptions about the nature of temperature and its measurement, as well as on assumptions about the principle of the thermometer's operation. Measurement errors and conceptual vagueness, which can be reduced indefinitely but never completely eliminated, exemplify the omnipresent empirical underdetermination of the language that produces observational ambiguity and theoretical pluralism: a plurality of alternative, but empirically adequate, hypotheses could be consistent with the same observation and measurement data17. This means that there are several hypotheses yielding accurate predictions that are alternatives to one another, while having differences that are small enough to be within the range of the measurement uncertainty. Empirical tests are conclusive only when the empirical underdetermination implied by measurement uncertainty is small enough in comparison to the effect produced in the test18.

9.3 Transformation of hypotheses into scientific knowledge
Transformation of hypotheses into scientific knowledge is based on the systematic use of evidence. In everyday language, the latter term19 means an available body of

15 R. DeWitt, “Philosophies of the Sciences: A Guide”, 2010, p. 13. 16 J. Worrall, “Philosophy of Science: Classic Debates, Standard Problems, Future Prospects”, 2002. 17 T. J. Hickey, “Philosophy of Science: An Introduction”, 2016. 18 ibid., p. 100. 19 derived from the Latin adjective evidens = “perceptible” or “obvious” or “apparent”.



facts or a piece of information or a piece of knowledge being a reason for believing that an assertion or a proposition is true or valid. In science, this may be a result of observation or measurement that can be used to support or discredit a hypothesis20. There are two fundamental types of evidence: qualitative evidence, which consists of descriptive information, and quantitative evidence, which consists of numerical information. At an early stage of hypothesis testing, some auxiliary evidence may be useful, viz.:
– classificatory evidence for checking the relevance of a given class of empirical information for the confirmation of a hypothesis;
– comparative evidence for comparing the relevance of a given class of empirical information for the confirmation of two or more hypotheses.
The evidence which is sufficient for the confirmation of a hypothesis is called conclusive; otherwise, it is called inconclusive or prima facie21. It should be noted that, in research practice, evidence is used not only for the confirmation or rejection of a hypothesis but also for its modification.

9.3.1 Inference to best explanation
Inductive reasoning enables the researchers to state that a statement under test is universally true (in the language of realists) or valid (in the language of instrumentalists) if it is confirmed in a limited number of experiments. Although inductive reasoning commonly works (otherwise, almost no technology would be possible), there is no unproblematic explanation for this belief. In 1965, the American philosopher Gilbert Harman (*1938) proposed a methodological framework, called inference to the best explanation22, as an improvement on the method of enumerative induction, which enables one to infer from observed regularity to universal regularity. From the fact that each observed A is B, one may infer that all As are Bs or that at least the next A will probably be a B. In practical (non-trivial) cases, one always knows more about a situation than that each observed A is B, and it is good inductive practice to consider the total evidence. According to Gilbert Harman, the latter may be insufficient to warrant the conclusion obtained by means of inductive inference. It is, however, sufficient if enumerative induction is used as an abductive inference tool within the framework of inference to the best explanation. Having several hypotheses which might explain the evidence, one must be able to choose a single one which would

20 S. Psillos, Philosophy of Science A–Z, 2007, p. 82. 21 This is a Latin phrase meaning “at first sight”, which is used to state that upon initial examination, sufficient corroborating evidence appears to exist to support a case. 22 G. H. Harman, “The Inference to the Best Explanation”, The Philosophical Review, 1965, Vol. 74, No. 1, pp. 88–95.



provide a "better" explanation for the evidence than any other of them. The key question is how to judge that one hypothesis is sufficiently better than the others. The answer provided by Gilbert Harman is far from being precise: "Presumably such a judgment will be based on considerations such as which hypothesis is simpler, which is more plausible, which explains more, which is less ad hoc, and so forth". The indisputable advantage of inference to the best explanation is that it exposes background knowledge (characteristics of the situation, auxiliary assumptions or lemmas, etc.), while an approach based on enumerative induction disguises it.
Example 9.4: How to explain wet grass in the morning? Rain could be given as the best explanation for wet grass in British Columbia, but it might not be the best explanation in Arizona at the peak of the dry season, where a hosepipe or irrigation system might be the best explanation for wet grass. Other pieces of knowledge, useful for the best explanation, could be related to the climate in the geographical area, the time of day and other factors, such as an observation that the grass is wet, but the streets are dry.23

Often, researchers are confronted with a number of rival hypotheses which are at the same time consistent with the available data. Then the data alone, although necessary, are insufficient to warrant a choice among such hypotheses; an additional criterion is needed. The explanatory power of the rival hypotheses is a practical solution, according to which one may choose the hypothesis which best explains the available data24. Various features have been proposed for a ceteris paribus25 comparison of the explanatory power of alternative hypotheses concerning the same subject-matter. The hypothesis H1 can be said to have greater explanatory power than the hypothesis H2 if it is:
– explaining more facts or experimental data, in particular – more "surprising" facts,
– providing a more detailed specification of relevant causal relations,
– leading to higher accuracy and precision of the description of the subject-matter,
– offering greater predictive power,
– requiring fewer assumptions,
– being more susceptible to falsification.
The status of inference to the best explanation is controversial26. A major objection is that it enables the researcher to choose the optimum hypothesis among a limited

23 inspired by "Inference to the Best Explanation", The Information Philosopher, http://www.informationphilosopher.com/knowledge/best_explanation.html [2017-07-18]. 24 I. Douven, "Abduction and Inference to the Best Explanation", 2013. 25 This is a Latin phrase meaning "all other things being equal". 26 I. Douven, "Testing Inference to the Best Explanation", Synthese, 2002, Vol. 130, pp. 355–377.



subset of all possible hypotheses consistent with the evidence at hand, without any guarantee concerning its relationship to the optimum optimorum. The most-cited idea for defending the inference to the best explanation is an empirical argument proposed by the American philosopher of science Richard N. Boyd. It points to the productivity of the research methodology which comprises methods for designing experiments, assessing data and choosing between rival hypotheses by means of that kind of inference. The unusual progress of science must be at least to a certain extent attributable to the said methodology; therefore, the inference to the best explanation must be a reliable tool of research.27 The eliminative abduction, as described by the British philosopher of science Alexander Bird (*1964)28, may be viewed as a special case of inference to the best explanation which is free from some drawbacks of its general version. The scheme of the corresponding reasoning is as follows:
– $H_0, H_1, \ldots, H_N$ are the only hypotheses that could explain the piece of evidence E;
– $H_0, H_1, \ldots, H_{N-1}$ have been falsified; therefore, $H_N$ explains E.

Example 9.5: The acquired immune deficiency syndrome (AIDS) is a spectrum of pathological conditions caused by infection with the human immunodeficiency virus (HIV). A person infected with HIV may at first experience only influenza-like symptoms. Some time later, however, his immune system becomes deficient: various infections, including tuberculosis, appear, as well as tumours that rarely affect people who have well-functioning immune systems; this stage of AIDS advancement is often also associated with weight loss. Alexander Bird provided the following reconstruction of the study aimed at the identification of the causes of AIDS29. In the early 1980s, several medical reports were published concerning the co-appearance of two diseases in ca. 50 young Californian homosexual men, viz.:
– Pneumocystis pneumonia, which had otherwise only been observed in individuals who had undergone medical therapies involving immune suppression;
– Kaposi's sarcoma, a form of skin cancer, normally found only in elderly men of Mediterranean origin.
Then the search for a common underlying factor of this syndrome (today called AIDS) started; the following hypotheses were taken into account:
– H0: there is no common factor, the co-appearance of the two diseases is entirely accidental;
– H1: excessive use of certain recreational drugs might depress the immune system;
– H2: very high incidence of sexually transmitted diseases among sexually very active men might overload the immune system and cause it to fail;
– H3: infection by a bacterium, probably hitherto unknown, might be the cause;
– H4: infection by a virus, probably hitherto unknown, might be the cause.

27 I. Douven, “Abduction and Inference to the Best Explanation”, 2013. 28 A. Bird, “Eliminative Abduction: Examples from Medicine”, Studies in History and Philosophy of Science, 2010, Vol. 41, pp. 345–352. 29 ibid.



The number of cases of rare symptoms, often overlapping, all related to an impoverished immune system, and in many cases found amongst homosexual men, was the basis for the exclusion of the null hypothesis H0. The key piece of evidence which refuted the lifestyle-related hypotheses (H1) and (H2) was the identification of AIDS among haemophiliacs and among people (also women) who had received blood transfusions; among the donors of the blood were individuals who developed AIDS within a year after donating blood. The analysis of such evidence enabled the researchers to focus on the hypotheses related to blood-borne infection (H3 and H4), and to exclude the hypotheses H1 and H2 from further considerations. The elimination of the hypothesis H3 was based on the observation that the blood product used by haemophiliacs, the clotting agent factor VIII, is obtained from donated blood by a process that involves filtration removing bacteria.

It should be noted that, since justification based on inference to the best explanation refers to "explanation", many ideas presented in Chapter 7 directly apply to the subject-matter of this subsection.

9.3.2 Scientific evidence and confirmation
Scientific evidence is evidence which serves to either support or counter a scientific theory or hypothesis. Such evidence is expected to be empirical evidence, i.e., evidence originating in observation or measurement. It is said that a piece of evidence E absolutely confirms some hypothesis H if the probability of H given E, i.e., Pr(H|E), is greater than a fixed threshold value pth ∈ (0, 1); accordingly, E is evidence for H only if E is not evidence for H̄. Relative confirmation, in contrast, is an incremental confirmation: a piece of evidence E confirms a hypothesis H if the probability Pr(H|E) is greater than the probability Pr(H|Ē); accordingly, relative confirmation of a hypothesis H by a piece of evidence E takes place if it increases its probability, no matter by how little30. The standards for scientific evidence vary according to the field of study, but the strength of scientific evidence is, as a rule, assessed by means of statistical analysis. Two perfectly rational researchers may draw different conclusions from the same scientific evidence because of different background beliefs. The concept of scientific confirmation31 is understood as stating or showing that something (a claim, a statement, etc.) is true, or valid, or correct. So, it is, up to a certain extent, covering the contents of the concepts of verification, validation, falsification, substantiation, corroboration and proof – i.e., the actions performed by researchers transforming hypotheses into knowledge. The term confirmation is used in philosophy of science when evidence E (e.g. a result of

30 S. Psillos, Philosophy of Science A–Z, 2007, p. 43. 31 from Latin confirmare = “to firm” or “to strengthen”.



observation or measurement) supports a scientific hypothesis H, i.e., when E makes H more plausible; when, e.g., the positive result of allergy testing for birch pollens confirms the hypothesis that a tested person has an allergy to those pollens.
Example 9.6: When visiting a distant island during his summer holidays, a biologist (let's call him "researcher") took a photo of an unknown black bird (let's call it "BB"). On his return home, by means of an extensive database of birds, he confirmed his hypothesis that ornithologists had not yet identified and studied such a bird. Further steps of his investigation were as follows:
– During the next three years, the researcher visited the island twice a year, and took ten photos of BBs. Since all of them were black, he formulated the hypothesis that all the BBs on the island are black.
– During several consecutive visits to the island, he managed to take an additional 90 photos of BBs because he found a place on the island where they appeared more frequently. One of the BBs had all the features of the other 99 birds, but it was white.
– By means of sophisticated software for pattern recognition, the researcher analysed the photos, and concluded that they represent only 50 different items of the BB (including a single white item). He formulated the hypothesis that the population of the BBs on the island is oscillating around 50 items, and ca. 2% of them are albino.
– The photos, taken during consecutive expeditions to the island, confirmed that the above-mentioned quantitative characteristics of the BB population are relatively stable in time. Of course, the statistical significance of the confirmation concerning the share of albino items was much lower than that of the estimate of the total population.
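The difference in statistical significance mentioned at the end of the example can be made tangible with a rough calculation; the numbers repeat those of the example, and the Wilson score interval used below is only one of several standard ways of quantifying the uncertainty of a proportion estimated from a small sample:

import math

n, k = 50, 1            # 50 identified birds, 1 albino (numbers from the example)
p_hat = k / n
z = 1.96                # ~95% confidence level

# Wilson score interval for the albino share (standard formula)
centre = (p_hat + z**2 / (2 * n)) / (1 + z**2 / n)
half = (z / (1 + z**2 / n)) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
print(f"Estimated albino share: {p_hat:.2%}, "
      f"95% interval roughly ({max(0.0, centre - half):.2%}, {centre + half:.2%})")

With a single albino bird among 50 identified items, the interval for the albino share spans roughly from well below 1% to about 10%, which illustrates how much weaker this confirmation is than that of the estimate of the total population.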

In the above example, although the objects of investigation (birds) were directly observable, some of their features were not; therefore, a photo camera, combined with a system of image analysis, was necessary to detect and identify those features. The instrumental support enabled the researcher to interpret the photos as images of an unknown species, and therefore to correctly assess the population of the birds. The confirmation of a non-trivial hypothesis relies on instrumental support enabling researchers to get information about entities and phenomena which are not directly accessible to our senses. The confirmation procedure applied in Example 9.6 is of purely inductive nature: the consecutive photos of the birds under study increase the plausibility of the hypothesis that all of them are black, while an exceptional photo of a white item demonstrates that an anomaly should be included in the body of knowledge about the newly identified species of birds. On the basis of that knowledge, one can predict with the probability of ca. 98% that the next observed representative of that species will be black. Many chapters of modern science are devoted to the investigation of objects or phenomena that cannot be directly observed. The introductory example, presented in Section 9.1, shows some complications which appear in such situations. The experience of the numerical exercises described in this example may be generalised to a broad class of problems related to hypothesis testing and scientific explanation. In particular, this experience may facilitate understanding of the procedures of verification



which consist in checking whether the predictions implied by a tested hypothesis are in agreement with the relevant observations and measurements, which ultimately reduce to observations made by the unaided human senses: sight, hearing, etc. To be included in the body of knowledge, a hypothesis under test should be verified by a number of impartial, independent and competent researchers who – first of all – should agree on what is observed and measured. The observations and measurements should be repeatable, i.e., the experiments that generate them should be specified in sufficient detail to enable those researchers to reproduce them. Moreover, predictions should be specific enough to enable the researchers to design an experiment that could falsify the hypothesis under test that implies those predictions.

9.3.3 Bayesian confirmation
The Bayesian approach to confirmation is based on the interpretation of probability as a measure of the degree of subjective belief of the researcher concerning the veracity or validity of a hypothesis under test, rather than as a measure of the degree of its objective confirmation. A mechanism for the modification of probabilities, interpreted in this way, aimed at taking into account new pieces of empirical evidence, is based on the iterative use of the Bayes formula:
$$\Pr(H|E) = \frac{\Pr(E|H)}{\Pr(E)}\,\Pr(H) = \frac{\Pr(E|H)}{\Pr(E|H)\Pr(H) + \Pr(E|\bar{H})\Pr(\bar{H})}\,\Pr(H)$$
where Pr(E|H) is the conditional probability of the piece of evidence E provided the hypothesis H is true, Pr(E|H̄) is the conditional probability of E provided H is false, and Pr(H) is the a priori probability of H. The latter may be determined in an arbitrary way, as long as it is consistent with the axioms of probability, while the probabilities Pr(E|H) and Pr(E|H̄) should be determined on the basis of the semantic or mathematical models of the relationship between the hypothesis H and the evidence E. The information on that model and other background knowledge (let's denote it with K), necessary for effective use of the Bayes formula, is sometimes explicitly introduced in this formula:
$$\Pr(H|E \cap K) = \frac{\Pr(E|H)\,\Pr(H|K)}{\Pr(E|K)}$$
The result of the application of the Bayes formula, Pr(H|E), is an a posteriori estimate of Pr(H). The piece of evidence E:
– confirms or supports the hypothesis H if Pr(H|E) > Pr(H),
– is neutral with respect to H if Pr(H|E) = Pr(H),



– disconfirms or undermines H if Pr(H|E) < Pr(H)32.
The probability Pr(H|E) may play the role of Pr(H) in the next round of confirmation, referring to a new piece of evidence. When introducing two consecutive pieces of evidence, E1 and E2, into the Bayesian procedure, one should take into account their possible dependence, which means that:
$$\Pr(E_1 \cap E_2) = \Pr(E_1) \cdot \Pr(E_2|E_1) > \Pr(E_1) \cdot \Pr(E_2)$$
If E1 and E2 are necessary consequences of H, then Pr(E1 ∩ E2|H) = 1, and:
$$\Pr(H|E_1 \cap E_2) = \frac{1}{\Pr(E_1 \cap E_2)}\,\Pr(H) = \frac{1}{\Pr(E_1)} \cdot \frac{1}{\Pr(E_2|E_1)} \cdot \Pr(H)$$

Thus, the confirmative power of E2 is less important than that of E1. This applies to any sequence of pieces of evidence which are not independent. The above-described mechanism of probability modification, Pr(H) → Pr(H|E), called the rule of conditioning, is clear and unambiguous, but some doubts may be provoked by the fact that different researchers may start the procedure using different a priori probabilities Pr(H) and, consequently, obtain different results. It turns out, however, that after repeated use of the Bayes formula the differences in the assessment of Pr(H|E) decrease to zero. Unfortunately, in practice, sufficient empirical evidence is not always available for this type of exercise. Another limitation of the Bayesian approach to confirmation is related to the need for knowledge acquired by means of induction, i.e., the need for some semantic or mathematical models relating the hypothesis H and the piece of evidence E: they should be identified during the earlier research stages, preceding the tests of the hypothesis H. Thus, the Bayesian approach is not a tool for testing universal hypotheses for which the models of the relationship between H and E are not quantified. It is, however, an indisputably advantageous tool for solving inverse problems of applied sciences, where the statistical estimation of all the probabilities – Pr(E|H), Pr(E|H̄) and Pr(H) – is not only possible but also highly objectivised. In summary, the following methodological aspects of the Bayesian approach are worth being stressed:
– By the direct introduction of a priori probabilities, expressing intuitive convictions of the researcher, it emphasises his role in the research process.
– At the same time, the mechanism for modification of probabilities, aimed at taking into account new pieces of empirical evidence, ensures the objectivisation of the final conclusions.

32 C. Howson, P. Urbach, “Bayesian versus Non-Bayesian Approaches to Confirmation”, [in] Philosophy of Probability: Contemporary Readings (Ed. A. Eagle), Routledge, Abingdon (UK) – New York 2010.



– By an organised use of various pieces of evidence, accompanied by the evaluation of their significance, the role of induction in developing science is accepted and even explicitly defended against its critics.
– By randomisation of the non-random models of the studied phenomena and, in particular, of the relationship between pieces of evidence and hypotheses under test, the Bayesian approach may be applied to non-probabilistic areas of science; this is quite straightforward in empirical sciences, where measurement and observation uncertainty is omnipresent and may be adequately modelled by means of random variables and operators.
The Bayesian approach to confirmation has had a considerable impact on epistemology. It has inspired, in particular, numerous attempts to find certain measures of evidence. The most discussed among them are the following:
– the difference measure $\Pr(H|E) - \Pr(H)$,
– the log-ratio measure $\log \dfrac{\Pr(H|E)}{\Pr(H)}$,
– the counterfactual difference measure $\Pr(H|E) - \Pr(H|\bar{E})$,
– the log-likelihood ratio measure $\log \dfrac{\Pr(E|H)}{\Pr(E|\bar{H})}$.33
Each of them has its advantages and drawbacks; none of them has, therefore, received general recognition. The Bayesian approach has also inspired certain attempts to find coherence measures, i.e., indicators of coherence among various pieces of knowledge. For two such pieces, A and B, the most discussed coherence measures are the following:
– the Shogenji measure $\dfrac{\Pr(A \cap B)}{\Pr(A)\,\Pr(B)}$, being mainly sensitive to the relevance of A and B;
– the Glass–Olsson measure $\dfrac{\Pr(A \cap B)}{\Pr(A \cup B)}$, being mainly sensitive to the overlap of A and B.34
The Bayesian approach to confirmation has also given a new interpretation to the idea of falsification. The modern scientific theories, which often make claims far beyond what can be directly observed, resist a falsified–unfalsified dichotomy: "trust in a theory often falls somewhere along a continuum, sliding up or down between 0 and 100 percent as new information becomes available"35. This means that hypotheses are accepted or rejected depending on the level of uncertainty associated with the process of their confirmation. It should be stressed that there is no generally applicable numerical threshold of acceptance or rejection: the uncertainty of 10%

33 S. Hartmann, J. Sprenger, "Bayesian Epistemology", November 29, 2016. 34 ibid. 35 N. Wolchover, "A Fight for the Soul of Science", Quanta Magazine, December 16, 2015, https://www.quantamagazine.org/20151216-physicists-and-philosophers-debate-the-boundaries-of-science/ [2017-05-26].



may not be an obstacle to accepting a new piece of knowledge in psychology, while in high-frequency engineering 10⁻¹⁰% may not be sufficiently low. On the one hand, the Bayesian approach to confirmation demands a lot of quantitative information when applied to specific practical problems; on the other, it is an intellectually attractive framework for conceptual (qualitative) analysis of some related methodological problems.
Example 9.7: The Paradox of the Ravens (cf. Section 4.4) may lose its paradoxical status if viewed from the Bayesian perspective. The observation of a black raven (the evidence RB) strongly confirms the hypothesis that all ravens are black because its probability is relatively low in comparison to the probability of seeing any other object which is not black (the evidence R̄B̄). Therefore, the confirmation of the hypothesis by the evidence R̄B̄ is non-zero but significantly weaker. The authors of the article "Bayesian versus non-Bayesian approaches to confirmation", which was first published in 1989, have demonstrated by means of simple mathematical manipulations, including the Bayes formula, that under realistic assumptions the confirmation power of the evidence R̄B̄ is negligible.36
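The rule of conditioning discussed above can also be sketched numerically. In the fragment below the likelihoods are invented; the sketch merely illustrates how two researchers starting from different a priori probabilities Pr(H) converge after repeated application of the Bayes formula to successive independent pieces of evidence:

def update(prior, p_e_given_h, p_e_given_not_h):
    """One application of the Bayes formula (rule of conditioning)."""
    evidence_prob = p_e_given_h * prior + p_e_given_not_h * (1.0 - prior)
    return p_e_given_h * prior / evidence_prob

# Invented likelihoods: each new piece of evidence is three times more probable
# under the hypothesis H than under its negation.
p_e_given_h, p_e_given_not_h = 0.9, 0.3

for prior in (0.1, 0.7):                      # two researchers, different priors
    p = prior
    for _ in range(10):                       # ten independent pieces of evidence
        p = update(p, p_e_given_h, p_e_given_not_h)
    print(f"a priori Pr(H) = {prior:.1f}  ->  Pr(H|E1,...,E10) = {p:.4f}")

After ten updates both a posteriori probabilities exceed 0.99, which illustrates the objectivising effect of repeated conditioning mentioned above.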

In the times of omnipresent quantophrenia, one may be tempted to apply Bayesian tools for the quantitative characterisation of the degree of individual or collective belief associated with various pieces of scientific knowledge. Such attempts must be frustrated because of the lack of relevant and sufficiently rich data. Bayesian tools may yield meaningful quantitative results, provided appropriate empirical information for the evaluation of the probabilities involved is available. The challenge of this demand is well known from applied research referring to such tools.
Example 9.8: The rapid increase in the computational power that could be employed in research practice was an important premise of the upswing of interest in Bayesian tools, which started in the middle of the 1990s. An area of potential applications is production control and monitoring, where the necessary data may be acquired by recording the quantitative history of production. The statistics of the measured parameters of a long series of items of a manufactured product may be used for reliable estimation of the a priori probabilities characterising the scattering of those parameters, caused by technological imperfections. The author's articles concerning spectrophotometric analysis of food products37 show how sensitive Bayesian algorithms are to the quality of information acquired in this way.

On top of the problem of data insufficiency, more fundamental problems, related to the use of probabilities in the confirmation procedures, remain unresolved. The Dutch

36 C. Howson, P. Urbach, “Bayesian versus Non-Bayesian Approaches to Confirmation”, 2010. 37 C. Niedziński, R. Z. Morawski, “Estimation of Low Concentrations in Presence of High Concentrations Using Bayesian Algorithms for Interpretation of Spectrophotometric Data”, Journal of Chemometrics, 2004, Vol. 18, pp. 217–230; R. Z. Morawski, C. Niedziński, “Application of a Bayesian Estimator for Identification of Edible Oils on the Basis of Spectrophotometric Data”, Metrology and Measurement Systems, 2008, Vol. 15, No. 3, pp. 247–266.


philosopher of science Theo A. F. Kuipers – after a 25-page-long analysis of various options discussed in the relevant literature – has concluded that "[. . .] we are far from a final answer to the question how to use probabilities, and measures of confirmation, information and content in the contexts of explanation and generalization"38.

9.4 Research methods and research methodology

A brief discussion of research methods and methodologies appears in this chapter devoted to the context of justification because the quality of justification critically depends on the quality of those methods and methodologies, and on "due diligence" in their implementation. This quality will, in particular, directly influence the efficacy of checking the auxiliary assumptions $A_1, \ldots, A_N$ – whose role is discussed in Section 9.2 – as well as the reliability of the probabilistic evaluations that the Bayesian approach to justification is based upon.

9.4.1 Basic concepts

In English, two related although different meanings are associated with the term methodology: on the one hand, it is understood as the system of methods and principles used in a particular discipline of science or art; on the other – as the branch of philosophy concerned with the study of methods and procedures used in science (or in a philosophical system)39. In some languages, there is no such ambiguity because two different terms are associated with those two meanings, e.g., in Polish metodyka means the system of methods and principles, while metodologia is the name for the branch of philosophy, focused on the scientific method; a similar distinction exists in German40. To avoid ambiguity, the philosophical study of the scientific method is called in English philosophy of science and the system of methods and principles used in a particular discipline of science (or group of disciplines, such as empirical sciences) is referred to as research methods and research methodology.

38 T. A. F. Kuipers, "Inductive Aspects of Confirmation, Information and Content", [in] EPRINTSBOOK-TITLE 2006, University of Groningen, pp. 855–883. 39 cf., for example, Random House Dictionary, Random House, Inc. 2017; Collins English Dictionary – Complete & Unabridged, 2012 Digital Edition. 40 die Methodik = "die Gesamtheit der Techniken der wissenschaftlichen Vorgehensweisen"; die Methodologie = "die Theorie bzw. Lehre von den wissenschaftlichen Methoden", cf. http://www.univie.ac.at/ksa/elearning/cp/ksamethoden/ksamethoden-30.html [2017-02-09].


According to C. R. Kothari, research methods may be understood as all those methods (techniques) that are used for conducting research, i.e., the methods the researchers use in performing research operations. The research methods can be classified into the following groups:
– the methods concerned with the acquisition of data;
– the statistical techniques which are used for establishing relationships between the data and the unknowns;
– the methods which are used to evaluate the accuracy of the results obtained41.
The research methodology is a way to scientifically solve the research problem. It includes research methods and techniques, but it is not limited to them; it also contains the logic behind them, the assumptions underlying them, as well as the criteria enabling the researcher to decide which of those methods and techniques are applicable to a given research problem. A scientist has to expose the research decisions to evaluation before they are implemented and provide their justification. He has to explain why a research study has been undertaken, how the research problem has been defined, in what way and why the hypothesis has been formulated, what data have been collected and what particular method has been adopted, why a particular technique of data analysis has been used, etc.42

9.4.2 Criteria of good research

The principal qualities expected of a good technoscientific research activity are the following:
– It is systematic, i.e., it is structured with specified steps to be taken in a specified sequence in accordance with a well-defined set of rules.
– It is open to intuitive tricks and other heuristic methods at the stage of generation of ideas, but excludes their use at the stage of arriving at conclusions.
– It is guided by the rules of logical reasoning, regardless of whether deduction, induction or abduction is most appropriate at a given stage of research.
– It is replicable, i.e., its results may be checked by repeating each stage of the study.43
The following requirements should be satisfied (as a necessary but not sufficient condition) to provide technoscientific research with the above-listed qualities:

41 C. R. Kothari, Research Methodology: Methods and Techniques, New Age Int. Pub., New Delhi 2004, p. 8. 42 ibid. 43 cf. ibid., pp. 20–21.


– The purpose of the research project should be clearly defined using common concepts.
– The research procedure to be applied should be described in sufficient detail to enable other researchers to repeat it for its scrutiny and further advancement.
– The research process should be carefully planned to avoid unnecessary profligacy.
– The research results, as well as the whole research procedure, should be reported with complete frankness, disclosing all identified sources of uncertainty those results may be subject to.
– The validity and reliability of primary data should be checked carefully, as well as the adequacy of the methods used for their analysis and processing.
– The reported conclusions should be confined to those justified by the available data and research results derived from those data.44

9.4.3 Contents of research methodology

The methodological aspects of research, discussed in this book, refer to the scientific method as it is understood in philosophy of science. Since this book is mainly addressed to the Ph.D. students of technoscientific disciplines, it is assumed that they already have sufficient understanding of general research methodology and of some principal methods of investigation used in their discipline or field of specialisation. For their guidance, in this section, an inventory of those issues of general research methodology is provided, which can be considered as a kind of "common denominator" of the sources (mainly handbooks and encyclopaedias) devoted to this topic. As a rule, the latter provide some basic information on philosophy of science, followed by the consideration of such general issues as meaning, objectives, significance and motivation of research; the types of research and research approaches; the structure of the research process, and the criteria of research evaluation. Typical chapters of such sources cover the following issues:
– formulation of a research problem (in particular, criteria for problem selection, patterns for problem specification);
– research planning (in particular, logistic and financial preparation of a research project, methods for designing experiments);
– sampling design (in particular, methods for systematic and random sampling, criteria for selecting a sampling procedure);
– discipline-specific measurement techniques (in particular, specification of typical measurands and dedicated instrumentation, sources of measurement errors, methods for evaluation of measurement uncertainty);

44 ibid., p. 20.


– discipline-specific techniques for acquisition of qualitative data (in particular, techniques for recording results of observation, methods for collecting data by means of interviews, questionnaires and schedules);
– data processing (in particular, typical methods for data processing, advanced techniques for data exploration);
– statistical testing of hypotheses (in particular, interpretation of basic concepts underlying testing, discipline-specific examples of testing);
– interpretation of research results (in particular, the motivation of interpretation, techniques of interpretation, precaution measures in interpretation);
– reporting research results (in particular, typology of reports, significance of reporting, guidelines for reporting);
– using computers in research (in particular, numerical and non-numerical tasks of computers, evaluation of accuracy and reliability of computation).

10 Uncertainty of scientific knowledge

10.1 Preliminary considerations

This chapter is devoted to the enhancement and integration of information concerning various sources of uncertainty of scientific knowledge, sources already identified in the previous chapters. The importance of this issue is closely related to the risk of using uncertain knowledge for solving practical problems, in particular, the risk involved in using technoscientific knowledge for solving problems in areas such as healthcare, expert testimony, environmental policies or science education1. The risk associated with the latter area may be much delayed but extremely detrimental due to the number of people exposed to institutional education. It follows from the historical overviews of science (Chapter 3) and philosophy of science (Chapter 4) that almost every element of the scientific method is questionable or problematic. A recently published French book2 shows that even the concept of questioning is not self-evident: it requires analysis, interpretation and identification of uncertainties related to its use. Apparently, however, the progress of science – measured at its output: the quantity of knowledge – is not slowing down. Such inconsistency may be at least partially explained by the fact that contemporary research practice is dominated by the pragmatic approach to science. This does not mean that other approaches proposed by philosophers of science are useless: they generate valuable warning signals, addressed to the scientific communities (or even to society in general), about various sources of uncertainty related to the generation of technoscientific knowledge. According to Aristotle, a claim is scientific if it is general or universal, absolutely certain and has explanatory potential (understood in terms of cause-effect relationships); the seventeenth-century natural philosophers – including Galileo Galilei, René Descartes and Isaac Newton – still required virtual certainty for a claim to belong to the body of scientific knowledge3. Further developments of science and philosophical reflection on the scientific method have led, however, to a conclusion that the requirement of certainty cannot be met with respect to scientific knowledge, and that qualitative and

1 S. O. Hansson, “Science and Pseudo-Science”, [in] The Stanford Encyclopedia of Philosophy (Ed. E. N. Zalta), Summer 2017 Edition, https://plato.stanford.edu/archives/sum2017/entries/ pseudo-science/ [2017-08-10]. 2 M. Meyer, Qu’est-ce que le questionnement, Editions Vrin, Paris 2017. 3 T. Nickles, “The Problem of Demarcation: History and Future”, [in] Philosophy of Pseudoscience: Reconsidering the Demarcation Problem (Eds. M. Pigliucci, M. Boudry), University of Chicago Press, Chicago 2013, pp. 101–120. https://doi.org/10.1515/9783110584066-010


quantitative evaluation of its uncertainty is an equally important duty of researchers as their endeavours to make knowledge as certain as possible (basic research) or sufficiently certain for its practical applications (in applied research). According to epistemological naturalism, knowledge does not require certainty; reliable belief-forming processes are sufficient for yielding knowledge4. Philosophers of science have thus devoted a considerable amount of effort to identify methodologies that are effective in producing such reliable knowledge. In contrast, some philosophers have denied that science does actually produce a privileged body of knowledge and have argued that all scientific knowledge is a product of its historical and social context5. It should be noted that the concept of certainty (and, consequently, that of uncertainty) is understood here in the epistemological (quasi-logical) sense: a belief is certain if it cannot be doubted for logical rather than psychological reasons.

10.2 Critical versus naïve understanding of scientific method

A vast majority of students in technoscientific disciplines are introduced by their teachers to the art of empirical research according to the following pattern:
– An objective (i.e. impartial and unbiased) researcher makes observations and measurements concerning a selected object of study.
– On the basis of the results of observations and measurements, the researcher works out a theory that is able to explain those results and to enable prediction of the future states and behaviours of the object of study.
– On acquiring additional results of observations and measurements, which are not coherent with the already accepted theory, the researcher corrects this theory (possibly, in an iterative process).
It is clear, in light of considerations presented in Chapters 5–9, that this is a naïve understanding of the scientific method because:
– The object of study and the scope of its observation (i.e. a set of aspects taken into account) is the consequence of an arbitrary decision of the researcher or, more frequently, a derivative of the policies of granting institutions. Moreover, the state of the researcher's mind (his educational background, accumulated research experience, cognitive expectations, etc.) significantly influences his "way of seeing" the object of study, the related research problem and the applied research methodology.

4 S. Psillos, Philosophy of Science A–Z, 2007, p. 37. 5 P. W. Humphreys, “Science, Philosophy of”, 1995.


– The description of observations is “theoretically biased” because the researcher’s language depends on his theoretical knowledge. Generalisations, based on empirical data, are always uncertain because induction is incomplete and measurement data are uncertain. Theoretically biased description of observations and uncertain generalisations may bring about alternative hypotheses equally well explaining the available data. – On acquiring new data which are not coherent with the formulated theory (after its formulation), the researcher – rather than correcting his theory – is adding new constraints on the scope of its applicability. If the set of (accumulated) constraints is drastically reducing its field of applicability, the theory is abandoned. The discrepancy between the naïve pattern of the scientific method and real research practice, varying significantly from one empirical discipline to another, is a source of fundamental uncertainty of scientific knowledge. The following section will expose its nature in more details.

10.3 Demarcation problem

The uncertainty in the understanding of the scientific method implies uncertainty in distinguishing between science and non-science, and consequently between scientific and non-scientific knowledge. It should be noted that the term non-science is very broad: it encompasses not only pseudoscience and fraudulent science but also philosophy and other humanities. This section is mainly focused on the demarcation between science and pseudoscience, while the demarcation between science and fraudulent science is left for in-depth treatment in the chapters devoted to research ethics.

Definition of pseudoscience. The statement "Pseudoscience is like pornography: we cannot define it, but we know it when we see it"6 well characterises the difficulty of defining the concept of pseudoscience. The prefix pseudo- (of Greek origin), as a rule, indicates something false or deceiving. Pseudoscience is, therefore, understood as non-science pretending to be science. The term pretending means that the proponents and supporters of a system of pseudoscientific claims try to create an impression that it is scientific or that it represents the most reliable knowledge on its subject-matter7.

6 R. J. McNally, “Is the Pseudoscience Concept Useful for Clinical Psychology?”, The Scientific Review of Mental Health Practice, 2003, Vol. 2, No. 2, http://www.srmhp.org/0202/pseudoscience.html [2018-07-23]. 7 cf. S. O. Hansson, “Science and Pseudo-Science”, 2017.


Example 10.1: There is a number of systems of beliefs (ideas) which are generally perceived by scientific communities as pseudoscientific, e.g. alchemy, astrology, iridology or phrenology. There is, however, a considerable number of such systems of beliefs whose status is less clear: they are perceived as problematic, but not definitely unscientific. These are, in particular, systems of beliefs associated with some health-related practices – such as acupuncture, chiropractic or homeopathy – whose healing efficacy has been documented with a large number of cases.8

Concept of demarcation. Karl R. Popper, who introduced the term demarcation problem, considered the distinction between science and non-science as the central question of philosophy of science. On the other hand, many other philosophers of the twentieth century regarded it as unimportant or unsolvable. Paul K. Feyerabend, in his famous 1975 book Against Method, argued that a distinction between science and non-science is neither possible nor desirable9. Larry Laudan concluded in 1983 that it is impossible to specify necessary and sufficient conditions for separating science from non-science10. The American philosopher of language and mathematics Alexander George (*1957) was even more radical when writing: Science employs the scientific method. No, there’s no such method [. . .]. Science can be proved on the basis of observable data. No, general theories about the natural world can’t be proved at all. Our theories make claims that go beyond the finite amount of data that we’ve collected. There’s no way such extrapolations from the evidence can be proved to be correct. Science can be disproved, or falsified, on the basis of observable data. No, for it’s always possible to protect a theory from an apparently confuting observation. Theories are never tested in isolation but only in conjunction with many other extra-theoretical assumptions (about the equipment being used, about ambient conditions, about experimenter error, etc.). It’s always possible to lay the blame for the confutation at the door of one of these assumptions, thereby leaving one’s theory in the clear.11

He concluded that we should rather focus on differentiation of good science from poor science, i.e., on the claims about reality that we have strongest reason to believe are true. It seems, however, that the recent years have witnessed a renewed interested in the demarcation problem12. Modern approaches to this problem are less formal than traditional solutions; their common feature is concentration on a small number of necessary and jointly sufficient conditions, including not only methodological but also 8 The definitions of acupuncture, alchemy, astrology, chiropractic, homeopathy, iridology and phrenology may be found in the corresponding Wikipedia articles. 9 P. Feyerabend, Against Method: Outline of an Anarchist Theory of Knowledge, Verso, London – New York 1993 (first published in 1975). 10 L. Laudan, “The Demise of the Demarcation Problem”, [in] Boston Studies in the Philosophy of Science (Eds. R. S. Cohen, L. Laudan), Vol. 76, D. Reidel, Dordrecht 1983, pp. 111–127. 11 A. George, “What’s Wrong with Intelligent Design, and with Its Critics”, The Christian Science Monitor, December 22, 2005, https://www.csmonitor.com/2005/1222/p09s02-coop.html [2017-09-05]. 12 M. Pigliucci, M. Boudry (Eds.), Philosophy of Pseudoscience: Reconsidering the Demarcation Problem, University of Chicago Press, Chicago – London 2013.


cognitive, psychological and sociological elements13. The Swedish philosopher Sven O. Hansson (*1951) concluded his 2017 summary of the state of the art in demarcation studies with the statement: ". . . there is still much important philosophical work to be done on the demarcation between science and pseudoscience"14.

Criteria of demarcation. Karl R. Popper appealed to falsifiability as the only criterion of demarcation. According to his view, a scientific theory is falsifiable if it entails empirically testable predictions which can be used as premises for the rejection of that theory (or its correction). Despite the multi-aspectual criticism, referring mainly to the Duhem–Quine thesis, falsifiability still remains the most appreciated criterion of demarcation among research practitioners and some philosophers of science15. Taking into account that Popper's criterion, when applied to isolated hypotheses or theories, is neither a necessary nor a sufficient condition for demarcation between science and pseudoscience, Imre Lakatos suggested applying it to the whole research programme that is characterised by a series of theories successively replacing each other. In his view, a research programme is progressive if each consecutive new theory, developed within that programme, has a larger empirical content than its predecessor; if a research programme is not progressive, then it is pseudoscientific16. By the end of the 1970s, the Canadian philosopher of science Paul R. Thagard proposed to supplement Lakatos' criterion with sociological indicators based on the collective behaviour of a scientific community. According to him, the research activities of that community are pseudoscientific if, over a long period of time, it tolerates many unsolved problems while it:
– makes little attempt to develop a new theory towards solutions of these problems,
– shows no concern for attempts to evaluate a favourite theory in relation to others,
– is selective in considering confirmations and disconfirmations,
– does not apply appropriate safeguards (such as peer review, blind or double-blind studies, control groups) against self-deception and known pitfalls of human perception17.
Paul R. Thagard illustrated the workability of this additional criterion by applying it to explain why astrology is a pseudoscience18.

13 M. Boudry, S. Blancke, M. Pigliucci, “What Makes Weird Beliefs Thrive? The Epidemiology of Pseudoscience”, Philosophical Psychology, 2015, Vol. 28, No. 8, pp. 1177–1198. 14 S. O. Hansson, “Science and Pseudo-Science”, 2017. 15 R. DeWitt, “Philosophies of the Sciences: A Guide”, 2010, p. 15. 16 S. O. Hansson, “Science and Pseudo-Science”, 2017. 17 ibid. 18 P. R. Thagard, “Why Astrology Is a Pseudoscience”, Proceedings of the Biennial Meeting of the Philosophy of Science Association, 1978, Vol. 1, pp. 223–234.


During recent decades, numerous multi-criterial approaches to demarcation have been developed, resulting in various lists of features characteristic of pseudoscience. One such a list may be found in the 2017 encyclopaedic article by Sven O. Hansson: – “Belief in authority: It is contended that some person or persons have a special ability to determine what is true or false. Others have to accept their judgments. – Unrepeatable experiments: Reliance is put on experiments that cannot be repeated by others with the same outcome. – Handpicked examples: Handpicked examples are used although they are not representative of the general category that the investigation refers to. – Unwillingness to test: A theory is not tested although it is possible to test it. – Disregard of refuting information: Observations or experiments that conflict with a theory are neglected. – Built-in subterfuge: The testing of a theory is so arranged that the theory can only be confirmed, never disconfirmed, by the outcome. – Explanations are abandoned without replacement. Tenable explanations are given up without being replaced, so that the new theory leaves much more unexplained than the previous one”.19 It is symptomatic that the above list of criteria, at several points, refers to experimental testing, which means that research projects lacking such a component are a priori excluded from consideration. It should be, however, noted that such projects are seriously considered even in the most mature disciplines of science, such as physics. Example 10.2: In the 2014 article entitled “Defend the integrity of physics”20, two American academics of older generation, George Ellis and Joseph Silk, expressed their disagreement with those researchers who – “breaking with centuries of philosophical tradition of defining scientific knowledge as empirical” – argue that “if a theory is sufficiently elegant and explanatory, it need not be tested experimentally.” According to George Ellis and Joseph Silk, the acceptance of such a methodological approach could open the door for pseudoscientists to claim that their ideas meet similar requirements. One of the examples they put under scrutiny is the string theory 21 which is untestable, but at the same time appealing to the imagination of many prominent physicists. The Austrian philosopher of physics Richard Dawid, in his 2013 book String theory and the scientific method22, identified three kinds of “non-empirical” evidence, which seem to motivate those physicists to trust in that theory, although it has not made any testable predictions about the observable Universe for ca. 50 years of its development: – Despite all the research efforts made during those 50 years, no alternative theory unifying all the fundamental forces in nature has been put forward.

19 S. O. Hansson, "Science and Pseudo-Science", 2017. 20 G. Ellis, J. Silk, "Defend the Integrity of Physics", Nature, 2014, Vol. 516, pp. 321–323. 21 cf. the Wikipedia article "String Theory" available at https://en.wikipedia.org/wiki/String_theory [2018-07-07]. 22 R. Dawid, String theory and the scientific method, Cambridge University Press, Cambridge (UK) 2013.


– The string theory grew out of the standard model, which is the empirically validated and accepted theory incorporating all known fundamental particles and forces (apart from gravity) in a single mathematical structure.
– The string theory has delivered explanations for several other theoretical problems, not only a solution to the unification problem.

The American physicist and string theorist David J. Gross (*1941) expressed the opinion that the above facts “are good for justifying working on the theory, not for saying the theory is validated in a non-empirical way”23. It seems that, following this clarification, one should rename the string theory to string hypothesis.

The American historian and philosopher of science Thomas Nickles (*1943), in conclusion of a survey of approaches to demarcation problem, makes a pragmatic remark that, when allocating funds to competing projects, it is often easier (i.e. less divisive and more in tune with the needs of scientific community and society in general) to judge their potential fertility and future promise than to stigmatise some of them as pseudoscientific24.

23 N. Wolchover, "A Fight for the Soul of Science", December 16, 2015. 24 T. Nickles, "The Problem of Demarcation: History and Future", 2013.

10.4 Uncertainty of scientific reasoning

The whole enterprise of technoscience relies on the interplay of two kinds of operations:
– acquisition of evidential information via passive and active experimentation;
– systematic application of the methods of logical reasoning.
The uncertainties associated with the first of them will be covered in the next section. Here the analysis of the uncertainty of logical reasoning will be summarised and enhanced, starting from the elementary methods of reasoning (induction, deduction and abduction) and ending up with the logical consistency of the body of scientific knowledge.

Inductive reasoning. In mathematics, inductive reasoning provides results free of uncertainty when applied to proving the correctness of recursion theorems.

Example 10.3: The method of complete induction may be used for proving that:
$$(n \in \mathbb{N},\ n < \infty) \cap (q \in \mathbb{R},\ q < \infty,\ q \neq 1) \;\Rightarrow\; \sum_{n=1}^{N} q^{\,n-1} = \frac{q^{N} - 1}{q - 1}$$
where $\mathbb{N}$ is the set of natural numbers and $\mathbb{R}$ – the set of real numbers. The first step of the proof consists in showing that the above formula holds for $N = 1$:
$$\mathrm{LHS} = q^{1-1} = 1 \;\cap\; \mathrm{RHS} = \frac{q^{1} - 1}{q - 1} = 1 \;\Rightarrow\; \mathrm{LHS} = \mathrm{RHS}$$
The second step of the proof consists in showing that the correctness of the formula for $N - 1$ implies its correctness for $N$:
$$\sum_{n=1}^{N} q^{\,n-1} = \sum_{n=1}^{N-1} q^{\,n-1} + q^{N-1} = \frac{q^{N-1} - 1}{q - 1} + q^{N-1} = \frac{q^{N-1} - 1 + q^{N} - q^{N-1}}{q - 1} = \frac{q^{N} - 1}{q - 1}$$
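For readers who prefer an empirical cross-check of such algebra, the identity can also be verified numerically. The following short Python snippet is an illustration added for convenience (the tested values of q and N are arbitrary) and is not part of the original example.

```python
# Numerical cross-check of the geometric-series identity proved above (illustration only).
for q in (0.5, 2.0, -3.0):
    for N in (1, 5, 10):
        direct = sum(q ** (n - 1) for n in range(1, N + 1))
        closed = (q ** N - 1) / (q - 1)
        assert abs(direct - closed) < 1e-9 * max(1.0, abs(closed))
print("identity confirmed for all tested (q, N) pairs")
```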

Although the method of complete induction may sometimes be applied to mathematical models developed and exploited by empirical sciences, the conclusions reached can only exceptionally be transferred to the real objects of study without uncertainty. In general, inductive reasoning applied to empirical data (evidence) may provide only probable results. As already mentioned in Section 4.2, David Hume noted that it is impossible to gain a non-circular justification for induction because the instances of inductive inferences are only legitimate if we suppose that observed regularities provide good grounds for the generalisations we inductively infer from those regularities; this supposition, however, depends on further inductive inferences25. The use of inductive methods of reasoning is, nevertheless, rational; their justification is of practical nature. As Hans Reichenbach put it almost a hundred years ago: they are rational because if we don't employ them, then we are guaranteed to end up with very few true beliefs about the world26. The development of machine learning has confirmed the reasonableness of that opinion: spectacular achievements in this domain, making artificial intelligence better imitate human intelligence, are based on the extensive use of inductive methods applied to rich sets of empirical data. This should not be surprising since human learning – both at the personal and social levels, both on the phylogenetic and ontogenetic scales – to a large extent relies on this principle.

Deductive reasoning. Karl R. Popper held that the uncertainty of induction does not definitely undermine scientific knowledge because it is mainly acquired via the deduction that falsification procedures are based upon27. He referred in this way, indirectly, to the widespread conviction about the absolute certainty of deductive reasoning, when applied not only to an isolated statement (proposition) but also to a larger body of knowledge, such as a theory or a system of theories. He did this just at the time when the Austrian mathematician and logician Kurt F. Gödel demonstrated that

25 D. Pritchard, What is this thing called Knowledge?, 2010, p. 109. 26 ibid., p. 107. 27 ibid., p. 109.


even a non-trivial body of purely mathematical knowledge, justified exclusively by deductive means, may be subject to fundamental uncertainty resulting from incompleteness of a set of axioms it is based upon. In 1931, he published an article whose German title has been translated as “On formally undecidable propositions of Principia Mathematica and related systems”28. In that article, he proved two incompleteness theorems: – The first of them states that there is no consistent system of axioms that would be sufficient for proving all the statements about the arithmetic of natural numbers: there will always be statements which are true, but whose veracity cannot be proved on the basis of this system. – The second theorem is an extension of the first, showing that the system of axioms is insufficient to demonstrate its consistency. The practical consequences of those innocent-looking theorems are far-reaching since they imply the potential existence of undecidable statements within any deductive system “containing” the arithmetic of natural numbers, which are neither provable nor refutable, i.e., the axiomatic basis of the system is insufficient for demonstrating their veracity or falsity. Example 10.4: The undecidable statements may appear not only in formal deductive systems but also as paradoxes in everyday verbal communication. The most discussed example is probably “The Liar’s Paradox” whose name is related to its initial formulation “All Cretans are liars” (S1), attributed to an ancient Greek poet Epimenides of Knossos (seventh or sixth century BC): Its contemporary, more precise, formulation “All statements on this folio are false” (S2) was proposed by the French priest Jean Buridan (ca. 1295–1363). Under the assumption that S2 is the only statement on “this folio”, one may analyse its logical value as follows: – if S2 is true, then S2 must be false; – if S2 is false, then S2 must be true. One may try to escape the paradox by introducing a third category of statements, on top of true and false statements – viz., the statements having no truth-value only – and reformulate S2 as follows: “All statements on this folio are false or have no truth-value” (S3). Now: – if S3 is true, then either S3 is false, or S3 has no truth value, i.e., S3 is not true; – if S3 is false, then S3 is true; – if S3 has no truth value, then S3 is true. Each of the above implications leads to a contradiction; that’s why S3 is called Extended Liar’s Paradox.

28 K. F. Gödel, “Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I”, Monatshefte für Mathematik und Physik, 1931, Vol. 38, pp. 173–198.


Although the list of non-artificial examples of undecidable statements has not been overwhelming and revolutionising the body of scientific knowledge, it is impossible to ignore its impact on mathematics and the way of thinking about mathematicsdependent disciplines of science. Their appearance frustrated the half-century-lasting attempts, undertaken by F. L. Gottlob Frege and David Hilbert, and by other mathematicians dreaming about a set of axioms sufficient for all mathematics. This fundamental limitation of mathematics made problematic the possibility of axiomatisation of empirical disciplines, including physics. Abductive reasoning. Abduction is the pattern of reasoning from the evidence combined with background knowledge to a hypothesis or explanans that explains that evidence29. While the uncertainty of the results of deductive reasoning has an exceptional character, and the uncertainty of the results of inductive reasoning is diminishing steadily with the accumulation of evidence, the uncertainty related to abductive reasoning is unavoidable in non-trivial cases and is not necessarily diminishing with the growth of the volume of evidential data. This is due to a heuristic element imbedded in this type of reasoning, the element indispensable for creative generation of the set of admissible solutions being a target of such reasoning. Example 10.5: With the discovery of the Laetoli footprints in Tanzania in the mid-1970s (believed to have been made by a small group of Australopithecus afarensis), which were formed some 3.7 million years ago, the direct ancestors of Homo sapiens were thought to have originated in Africa. According to a study published in August 201730, some earlier upright standing human ancestors appeared on the island of Crete 5.7 million years ago. Their fossilised footprints, quite similar to those of the modern man, have been investigated for 7 years by an international team of researchers using most advanced research techniques and methodologies. The authors of the cited article disclosed the uncertainty of their abductive reasoning when writing: The interpretation of these footprints is potentially controversial. The print morphology suggests that the trackmaker was a basal member of the clade Hominini (human ancestral tree), but as Crete is some distance outside the known geographical range of pre-Pleistocene (2.5 million to 11 700 years ago) hominins we must also entertain the possibility that they represent a hitherto unknown late Miocene primate that convergently evolved human-like foot anatomy.

The process of abduction plays a fundamental role in the discovery of scientific hypotheses which – after passing a series of tests aimed at their confirmation or refutation – have a chance to become theories or less important pieces of scientific knowledge. 29 M. Kiikeri, “Abduction, IBE and the Discovery of Kepler’s ellipse. Explanatory Connections”, [in] Electronic Essays Dedicated to Matti Sintonen (Eds. P. Ylikoski, M. Kiikeri) 2001, http://www.helsinki. fi/teoreettinenfilosofia/henkilosto/Sintonen/Explanatory%20Essays/kiikeri.pdf [2018-09-22]. 30 G. D. Gierlinski, G. Niedzwiedzki, M. G. Lockley, A. Athanassiou, C. Fassoulas, Z. Dubicka, A. Boczarowski, M. R. Bennett, P. E. Ahlberg, “Possible Hominin Footprints from the Late Miocene (c. 5.7 Ma) of Crete?”, Proceedings of the Geologists’ Association, 2017, Vol. 128, No. 5–6, pp. 697–710.


Example 10.6: An extensive analysis of a nontrivial case of abductive reasoning in astronomy may be found in a 2001 article by Mika Kiikeri31. The author has examined there the process of abduction underlying the Kepler’s discovery that the Martian orbit is elliptic rather than circular.

Unlike deductive and inductive reasoning, the concept of abduction is almost absent in the programmes of collegial education. This is worth being mentioned in the context of observation that this type of reasoning is omnipresent in extrascientific social practice. Semi-automatic abductive reasoning is underlying interpersonal communication by means of a natural language, both in its oral and written form: the recipient of a message is interpreting its semantic contents. The uncertainty of communication depends not only on the ability of the sender of a message to express its intended contents by means of words but also on the background knowledge and linguistic skills of its recipient (as well as disturbing factors associated with the environment of communication, such as acoustic noise). The incidence of the cases of misunderstanding, both in professional and everyday communication, shows that the uncertainty of this type of abductive reasoning may be very high. Abductive reasoning is of key importance for medical and technical diagnostics; moreover, it plays a significant role in the functioning of judicature. Example 10.7: Any non-trivial criminal investigation and a resulting trial rely upon hypotheses and conclusions drawn from the evidence: the victim’s blood was found on the knife – so, it was the murder weapon; the suspect’s fingerprints were found on the knife; so, he killed the victim. Sherlock Holmes, a fictional private detective created by Sir Arthur I. Conan Doyle, is known for his proficiency with observation, forensic skills and (especially) logical reasoning that borders on the fantastic. He remains a model to follow for many real detectives: his methodology was based on abduction rather than on deduction (as it is commonly suggested in the literary analyses of his accomplishments). From the logical point of view, a criminal investigation is very similar to diagnostic processes aimed at the identification of an illness or detection of a technical fault. There is, however, another application of abductive reasoning in judicature, viz., inferring about the adequate legal norm or regulation from the evidence and other features characterising a considered case32. The relatively high rate of sentences, questioned by courts of higher instance, is well characterising the level of uncertainty related to this kind of inference.

The above example shows the applicability of abductive reasoning not only for the identification of the causes on the basis of data representative of effects but also for the identification of the physical principles underlying the dependence of effects on causes. In the language of mathematical modelling of physical objects

31 M. Kiikeri, “Abduction, IBE and the Discovery of Kepler’s Ellipse. Explanatory Connections”, 2001. 32 G. Tuzet, “Legal Abduction”, Cognitio, 2005, Vol. 8, No. 2, pp. 265–284.


and phenomena, introduced in Chapter 5, one may say that structural identification of mathematical models may in some cases rely on abductive reasoning.

Example 10.8: Structural identification in the case of dynamical systems consists in selecting the form of differential or integral equations having some potential to adequately represent the properties of those systems – relevant to the purpose of modelling. The convolution-type integral equation:
$$y(t) = g(t) * x(t)$$
is a mathematical structure most frequently selected as an initial guess for modelling causal dynamic relationships in measurement-and-control engineering. In this equation, $t$ is a real-valued scalar variable modelling time; $x(t)$ and $y(t)$ are real-valued scalar functions modelling the cause and effect, respectively, both assumed to be zero for $t < 0$; and $g(t)$ is a function of the same category, modelling the effect corresponding to the cause that may be adequately modelled with a function $x(t)$ of a special form, viz. with the Dirac delta function defined in the following way:
$$\delta(t) = \begin{cases} +\infty & \text{for } t = 0 \\ 0 & \text{otherwise} \end{cases} \qquad \text{and} \qquad \int_{-\infty}^{+\infty} \delta(\tau)\,\mathrm{d}\tau = 1$$
The function $g(t)$ is a compact representation of the causal relationship under study. Its discrete values or parameters are estimated during parametric identification on the basis of empirical data representative of the corresponding realisations of $x(t)$ and $y(t)$. The choice of the parameterisation scheme for $g(t)$ is part of structural identification and is based on abductive reasoning. One may choose, e.g., among the following options:
$$g(t) = p_1 \exp(-p_2 t), \qquad g(t) = p_1 \exp(-p_2 t) + p_3 \exp(-p_4 t), \qquad g(t) = p_1 \exp(-p_2 t)\,\sin(p_3 t), \qquad \ldots$$
where $p_1, p_2, \ldots$ are parameters to be estimated during parametric identification. The choice of the first and second option may prevent the model from being able to reproduce oscillations in $y(t)$ launched by an abrupt change in $x(t)$; the third option may turn out to be deficient in reproducing the rate of attenuation of $y(t)$ after an abrupt change in $x(t)$. The abductive reasoning refers in this case to both a priori and a posteriori criteria. Among the former, simplicity and the physical laws underlying the functioning of the modelled system are probably most frequently used. The a posteriori criteria are used during the iterative assessment of the model under development (cf. Figure 5.3). Most frequently, these are uncertainty indicators characterising the discrepancy between the behaviour of the model and that of the modelled object. It may happen that the use of the convolution-type equation does not provide the expected quality of the model, regardless of the way of its parameterisation. Then it should be replaced with a nonlinear integral equation, e.g., with a Hammerstein–Wiener equation:
$$y(t) = F_y\!\left[\, g(t) * F_x[x(t)] \,\right]$$
where $F_x$ and $F_y$ are real-valued nonlinear functions that, like $g(t)$, may be subject to parameterisation.
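To make the distinction between structural and parametric identification tangible, here is a small Python sketch that estimates the parameters of the convolution model from records of x(t) and y(t). It relies on assumptions chosen only for illustration: synthetic sampled data, the first parameterisation option g(t) = p1*exp(-p2*t), and a plain least-squares fit.

```python
# Illustrative sketch of parametric identification for y(t) = g(t) * x(t),
# assuming the parameterisation g(t) = p1 * exp(-p2 * t); all data are synthetic.
import numpy as np
from scipy.optimize import least_squares

dt = 0.01
t = np.arange(0.0, 5.0, dt)
x = (t < 1.0).astype(float)                    # hypothetical recorded cause: a 1-second pulse

def model_response(p, x):
    g = p[0] * np.exp(-p[1] * t)               # candidate structure chosen abductively
    return np.convolve(x, g)[:len(t)] * dt     # discrete approximation of the convolution

rng = np.random.default_rng(0)
y = model_response((2.0, 3.0), x) + 0.01 * rng.standard_normal(len(t))   # noisy "measurements"

fit = least_squares(lambda p: model_response(p, x) - y, x0=(1.0, 1.0))
print("estimated parameters:", fit.x)           # expected to be close to (2.0, 3.0)
```

If none of the candidate forms of g(t) reproduces the recorded y(t) satisfactorily, the residuals of such a fit are precisely the a posteriori uncertainty indicators that send the modeller back to structural identification, for instance towards the Hammerstein–Wiener structure mentioned above.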

Logical consistency of scientific knowledge. The body of scientific knowledge is neither a collection of independent pieces of information (such as scientific laws and theories) nor a fully consistent system of such pieces. Even in the case of mathematics, David Hilbert's dream about total consistency turned out to be unrealisable


when Kurt F. Gödel had published his incompleteness theorems. This spectacular turn in the history of science did not invalidate the intention to make the body of scientific knowledge as consistent as possible. The progress in science is about broadening the body of knowledge with new theories that have survived the process of confirmation. Once included in that body, they may be used for testing new hypothesis and models. When trying to reconstruct the structure of the body of scientific knowledge, one may face the problem of infinite regress: a set of theories and laws S − 1 has been used for justification of a set of theories and laws S0 , a set of theories and laws S − 2 was used for justification of the set of theories and laws S − 1 , a set of theories and laws S − 3 had been used for justification of the set of theories and laws S − 2 , etc. Ordinary researchers are, as a rule, taking care only of “local” consistency of the system of scientific knowledge, i.e., on the consistency of pieces of knowledge they are working on with “neighbour” pieces of knowledge, most frequently belonging to a single scientific discipline. For checking this consistency, they use means of logical reasoning and some non-provable assumptions, such as the paradigm of causality, the paradigm of the uniformity of nature or the system paradigm. Exceptionally, they start to analyse the validity of justification for those “neighbour” pieces of knowledge. This research practice is in a way mirrored in two philosophical approaches to the problem of infinite regress in knowledge justification, viz., (epistemological) foundationalism and (epistemological) coherentism. Foundationalism is based on a conviction that some “basic beliefs” do not require any justification, and therefore may be safely used for justifying “more advanced” or “higher-order” beliefs; those basic beliefs are self-evident or derived from reliable cognitive sources such as intuition or long-term experience. Empiricists typically take the content of basic beliefs to be phenomenal (i.e. directly related to sense data considered to be indubitable), while rationalists focus rather on innate ideas and beliefs produced by introspection, which are considered to be indubitable. The logical positivists were supposed to be advocates of foundationalism, though their debate over the so-called protocol sentences shows that they have had a quite nuanced conception of the alleged foundations of knowledge33. The origin of basic beliefs is a main weak point of foundationalism. Hence, the motivation for choosing an alternative approach called coherentism which requires the scientific statements to be coherent with all other statements of the body of knowledge, accepted by the relevant research community. Coherentism is based on a conviction that a hierarchical system of justifications should be replaced with a network-type system of justifications in which an individual belief is justified if it fits together (coheres) with the rest of the beliefs in such a system.

33 S. Psillos, Philosophy of Science A–Z, 2007, p. 95.


The inception of coherentism is attributed to the British philosophers Francis H. Bradley (1846–1924) and Bernard Bosanquet (1848–1923); although, its first mature versions were proposed by two prominent representatives of the Vienna Circle: Otto Neurath and Carl G. Hempel. Its further development was accomplished by the American philosophers Keith Lehrer (*1936), Gilbert Harman, Laurence BonJour (*1943) and William G. Lycan (*1945). They all rejected the division of beliefs into basic and derived, and asserted that all beliefs within a body of knowledge are justified insofar as the body as a whole is justified. They have differed, however, significantly in understanding the concept of coherence of beliefs, and its relationship to the concept of truth. The contemporary advocates of coherentism agree that the lack of contradiction is not sufficient for two beliefs to be coherent, and that each belief of the body of knowledge should either explain some other beliefs or be explained by other beliefs. Example 10.9: There is no contradiction between the following statements: – “The acceleration of a ball is inversely proportional to its mass.” – “The standard atomic weight of calcium is 20.078(4).” although the second of them is false. The first statement is coherent, e.g., with the statement: – “The acceleration of a ball is proportional to the force exerted on it.”

The advocates of coherentism differ, however, on the importance they attribute to other criteria of coherence, such as enumerative or statistical (non-explanatory) induction, or the scope and nature of agreement among researchers, concerning the contents of a coherent body of knowledge. Belonging to a coherent system of beliefs is, by definition of coherentism, necessary for a belief to be justified. There are, however, radical advocates of coherentism who claim that this is also a sufficient condition. Their classical argument is of probabilistic nature: when multiple unreliable sources of information operate independently and generate messages converging to the same findings, the probability that those findings are correct increases.

Example 10.10: If each of two independent experts fails to express a true belief with the probability $p$, then the probability of the coincidence that they both err is $p^2$, and that they both are right is $(1-p)^2$. Thus, if they both express the same belief, one may gather that it is true with the probability:
$$q = \frac{(1-p)^2}{(1-p)^2 + p^2}$$
The probability $q$ increases quickly towards 1 as $p$ diminishes from 0.250 (25 %) to 0.010 (1 %): for $p = 0.25$ the above formula yields $q = 0.90$, while for $p = 0.01$ it yields $q \approx 0.9999$.
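The dependence of q on p is easy to explore numerically; the following short Python sketch (added only as an illustration, with arbitrarily chosen values of p) prints q for a few representative error probabilities.

```python
# Probability that two independent, equally unreliable experts who agree are both right.
for p in (0.25, 0.20, 0.10, 0.05, 0.01):
    q = (1 - p) ** 2 / ((1 - p) ** 2 + p ** 2)
    print(f"p = {p:5.2f}  ->  q = {q:.4f}")
```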


A more developed typology of coherentist positions, as well as overviews of arguments in favour and against them, may be found in the reliable on-line encyclopaedic sources34. From pragmatic (instrumentalist) point of view, both described approaches to the problem of infinite regress in beliefs justification, foundationalism and coherentism, are acceptable as long as they are prolific in generation of useful knowledge. For realists, both are deficient in providing reliable bridge between knowledge and reality, i.e., a meta-justification for primary beliefs in foundationalism, and a meta-justification for an unanchored network of beliefs in coherentism. One of the responses to this criticism is infinitism, which holds that there are two necessary (but not jointly sufficient) conditions for a reason in a chain of beliefs f..., S − 2 , S − 1 , S0 g to be capable of enhancing the justification of the belief S0 : – In contrast to coherentism, no reason can be S0 itself, or equivalent to a conjunction containing S0 as a conjunct (i.e. circular reasoning is excluded). – In contrast to foundationalism, no reason S − n is sufficiently justified in the absence of S − n − 1 (i.e. there are no foundational reasons)35. Infinitists do not deny that every real-world chain of reasons ends, but they deny that “there is any reason which is immune to further legitimate challenge”36. Infinitism is based on a conviction that an infinite series of justifications of more advanced beliefs by more basic beliefs may be “convergent” like some infinite mathematical series. Although appealing by this analogy to the nineteenth-century struggles with the concept of infinity in mathematics and logic, this methodological stance has very little (if any) impact on the actual research practice. Infinitism is also rather unpopular among philosophers of science. It is, however, very persistently supported by the American epistemologist Peter D. Klein (*1940) who has developed its sophisticated version according to which an infinite regress of reasons is a necessary, but not sufficient condition for a belief to be justified. A common objection raised against this requirement is that any such condition could account for undermining the rationale behind the regress condition itself. The state of the art in the philosophical inquiry on infinitism is well presented in the already cited 2014 collection of essays, edited by the Canadian epistemologist John Turri and Peter D. Klein himself 37.

34 e.g. in: P. Murphy, “Coherentism in Epistemology”, [in] Internet Encyclopedia of Philosophy (Eds. J. Fieser, B. Dowden), http://www.iep.utm.edu/coherent/ [2017-09-17]; E. Olsson, “Coherentist Theories of Epistemic Justification”, [in] The Stanford Encyclopedia of Philosophy (Ed. E. N. Zalta), Spring 2017 Edition, https://plato.stanford.edu/archives/spr2017/entries/justep-coherence/ [2017-09-18]. 35 J. Turri, P. D. Klein (Eds.), Ad Infinitum: New Essays on Epistemological Infinitism, OUP Oxford, 2014, p. 1. 36 ibid., p. 2. 37 ibid.


For research practitioners, a viable alternative to infinitism is foundherentism combining strong aspects of foundationalism with strong aspects of coherentism. Foundherentism was developed and defended by the British philosopher Susan Haack (*1945) in her 1993 book Evidence and Inquiry: Towards Reconstruction in Epistemology. The list of precursors to her view includes Bertrand Russell, whose epistemology comprises both empirical foundations and coherence as components of justification, and Willard V. O. Quine who, when supporting coherentism in general, was inclined to give a special epistemic status to some beliefs (e.g. observational beliefs), which are not justified exclusively on the basis of their internal or inferential connections with other beliefs38. According to the contemporary version of foundherentism, it is possible to use: – experience as the justification of empirical beliefs (what foundationalism does, but coherentism does not); – pervasive mutual dependence among beliefs as a means of their justification (what coherentism does, but foundationalism does not).39 The foundherentist approach to the problem of infinite regress in scientific justification has not reached much resonance among the philosophers of science: enough to say that the Internet Encyclopedia of Philosophy and The Stanford Encyclopedia of Philosophy do not contain separate articles devoted to this approach. It may be, however, quite natural for some domains of research, e.g., for metrology. Example 10.11: Measurement science and technology is an interdiscipline because it is integrating knowledge traditionally belonging to distinct scientific disciplines (such as physics, chemistry or biology) and to various branches of engineering (such as mechanical, electrical or optical engineering). It is, at the same time, a transdiscipline because its methods and tools appear in all mature disciplines of technoscience. It may be, therefore, considered a representative example of research field where foundherentist approach to beliefs justification has had a long tradition. The core of measurement science and technology is a hierarchical system of quantities and measurement units, outlined in Section 6.2. It includes seven base quantities and several levels of derived quantities which are defined by means of corresponding mathematical models identified according to the methodology described in Chapter 5. The base quantities and the primary standards of their units have been internationally agreed on the basis of a series of legal conventions, and one may easily imagine alternative sets of quantities which could equally well or even better serve the needs of economy and science. Once agreed, however, they are subject to institutional control and scrutiny to guarantee the metrological coherence, which is called metrological traceability, and defined in VIM as a “property of a measurement result whereby the result can be related to a reference through a documented unbroken chain of calibrations, each contributing to the measurement uncertainty”40. In this definition, a reference is understood, alternatively, as:

38 S. Psillos, Philosophy of Science A–Z, 2007, p. 39. 39 C. De Waal, “Foundherentism”, [in] American Philosophy: An Encyclopedia (Eds. J. Lachs, R. B. Talisse), Routledge, New York – London 2008, pp. 297–298. 40 International vocabulary of metrology – Basic and general concepts and associated terms (VIM3), 2008, definition #2.41.


– the definition of a measurement unit through its practical realisation;
– a measurement procedure including the measurement unit for a non-ordinal quantity;
– a measurement standard.

It should be noted that, although the concept of metrological traceability assumes a hierarchical order of quantities, the cross-relationships among quantities are not excluded; they appear already among the base quantities, as shown in Table 6.1.

The history of measurement science and technology demonstrates the importance of a temporal aspect in the development of scientific knowledge, thus – in the process of justification of scientific beliefs. This aspect has been undertaken by the American historian and philosopher of science Hasok Chang (*1967) in his 2007 article whose subtitle Beyond Foundationalism and Coherentism41 is travestying the title of the famous book by Friedrich W. Nietzsche Beyond Good and Evil. The author shows that operations aimed at checking the coherence of new beliefs under test with fundamental and derived beliefs, temporarily included in the body of knowledge, have to be repeated by researchers after any significant change of this body of knowledge, thus iteratively with the progress of science. Hasok Chang calls his approach progressive coherentism, and illustrates its functioning with the history of temperature measurements. The Canadian philosopher Brian Lightbody, in his 2006 paper “Virtue Foundherentism”42, argues that foundherentism needs to be supplemented with an account of the specific epistemic virtues a researcher is required to possess and exercise in order to fully justify any empirical inquiry. This conclusion is an important link between the first 10 chapters of this book, devoted to methodological aspects of technoscientific research, and its next chapters which are predominantly devoted to ethical aspects of research.

41 Hasok Chang, “Scientific Progress: Beyond Foundationalism and Coherentism”, Royal Institute of Philosophy Supplement, October 2007, Vol. 61, pp. 1–20. 42 B. Lightbody, “Virtue Foundherentism”, Kriterion, 2006, No. 20, pp. 14–21.

10.5 Uncertainty of observation

For the sake of simplicity and convenience, the methodological sources of uncertainty have been presented in Sections 10.1–10.4 under a tacit assumption that the pieces of empirical evidence used for the generation of scientific knowledge, i.e. the results of observation and measurement, are free of uncertainty. The falsehood of this assumption with respect to measurement has already been demonstrated in Chapters 5 and 6, where the omnipresence of uncertainty in mathematical modelling and measurement has been emphasised. The results of observation are even more deficient in this respect because their uncertainty cannot be quantified, i.e. expressed in numbers, as is the case in mathematical modelling and measurement. The results of observations are uncertain due to the imperfection of human senses and the special modes of processing of sensory information by human brains, oriented towards survival rather than towards the generation of knowledge when those two objectives are in conflict.

Example 10.12: The following experiment demonstrates the uncertainty of human temperature sensation. Let's imagine that we have three buckets, the first filled with hot water, the second with cold water and the third with lukewarm water:
– We plunge the right hand in hot water and the left hand in cold water.
– After a while, we take them out, and put them both in lukewarm water; we feel with the left hand that it is warm, and with the right hand that it is cold.
– We check by means of a thermometer that the temperature of the lukewarm water is quite uniform in the whole volume of the third bucket.
Why do we trust the thermometer rather than our senses?43

Although the limitations of sensory observation may be alleviated by means of instruments, such as microscopes and telescopes, the uncertainty of observation cannot be completely eliminated for two reasons:
– human senses still cannot be completely eliminated from the chain of qualitative information processing;
– instruments for qualitative experimentation are subject to the same imperfections as instruments for quantitative experimentation, i.e. measuring systems.

43 Hasok Chang, “Scientific Progress: Beyond Foundationalism and Coherentism”, October 2007. 44 E. Tal, “Measurement in Science”, 2017.

10.6 Measurement uncertainty

Measurement uncertainty is a central topic of the epistemology of measurement, which is the study of the relationships between measurement and knowledge. The purview of this study includes the conditions under which measurement produces knowledge; the content, scope, justification and limits of such knowledge; the reasons why particular methodologies of measurement and standardisation succeed or fail in supporting particular knowledge claims; and the relationships between measurement and other knowledge-producing activities such as experimentation, modelling and theory building44. From a practical point of view, the methods for the evaluation of uncertainty are of key importance and, therefore, will be covered here in more detail. As already mentioned in Chapter 6, measurement uncertainty is defined by the VIM as a “non-negative parameter characterising the dispersion of the quantity values being attributed to a measurand, based on the information used”45. The recommended methods for the estimation of that parameter have been agreed internationally and across disciplines, and specified in the GUM46. As stated there, an ideal method for evaluating and expressing the uncertainty of the result of a measurement should be universal, i.e. it should be applicable to all kinds of measurements and to all types of input data used in measurements. Moreover, the actual quantity used to express uncertainty should be:
– internally consistent, i.e. it should be directly derivable from the components that contribute to it, as well as independent of how these components are grouped and of the decomposition of the components into subcomponents;
– transferable, i.e. it should be possible to use directly the uncertainty evaluated for one result as a component in evaluating the uncertainty of another measurement in which the first result is used.

45 International vocabulary of metrology – Basic and general concepts and associated terms (VIM3), 2008, definition #2.26. 46 Joint Committee for Guides in Metrology (BIPM+IEC+IFCC+ILAC+ISO+IUPAC+IUPAP+OIML), Evaluation of measurement data – Guide to the expression of uncertainty in measurement; Joint Committee for Guides in Metrology (BIPM+IEC+IFCC+ILAC+ISO+IUPAC+IUPAP+OIML), Supplement 1 to the ‘Guide to the expression of uncertainty in measurement’ – Propagation of distributions using a Monte Carlo method, 2008; Joint Committee for Guides in Metrology (BIPM+IEC+IFCC+ILAC+ISO+IUPAC+IUPAP+OIML), Supplement 2 to the ‘Guide to the expression of uncertainty in measurement’ – Extension to any number of output quantities, 2011. 47 The definitions #2.5 and #2.52 are the only reminiscence of this tradition in International vocabulary of metrology – Basic and general concepts and associated terms (VIM3), 2008.

10.6.1 Basic methods for evaluation of measurement uncertainty

There is a metrological tradition to distinguish between direct and indirect measurement methods: the former are understood as directly referring to a standard of the relevant measurand, while the latter provide measurand estimates computed on the basis of the results obtained by means of some direct measurement methods47. This classification has lost its clarity today since computing means have been incorporated into measuring systems, and physical standards have been replaced with their digital representations created during the calibration of those systems. Consequently, the results of almost all today's measurements are obtained via digital processing of data using mathematical models, which appear in the meta-model of measurement introduced in Chapter 6. Therefore, the propagation of uncertainties through those models is a principal operation underlying the procedures for the evaluation of uncertainties. In the vast majority of practically important cases, those models may be decomposed into elementary algebraic structures of the form:

$$y = f(\mathbf{x})$$

where $\mathbf{x} \equiv [x_1 \; x_2 \; \ldots]^T$ is a vector of variables modelling the sources of uncertainty, $y$ is a variable modelling the quantity whose uncertainty is to be evaluated, and $f$ is a linear or nonlinear function. The most natural way to express the uncertainty of the variables $x_1, x_2, \ldots$ and $y$ are intervals:

$$\left[x_1^{\inf}, x_1^{\sup}\right], \; \left[x_2^{\inf}, x_2^{\sup}\right], \; \ldots \; \text{and} \; \left[y^{\inf}, y^{\sup}\right]$$

where the superscripts “inf” and “sup” indicate, respectively, the smallest and the largest value of a variable.

Example 10.13: Let's assume that $\mathbf{x} \equiv [x_1 \; x_2]^T$; then:
– If $y = x_1 + x_2$, then $y^{\inf} = x_1^{\inf} + x_2^{\inf}$ and $y^{\sup} = x_1^{\sup} + x_2^{\sup}$.
– If $y = x_1 - x_2$, then $y^{\inf} = x_1^{\inf} - x_2^{\sup}$ and $y^{\sup} = x_1^{\sup} - x_2^{\inf}$.
– If $y = x_1 \cdot x_2$, then $y^{\inf} = \min\{x_1^{\inf}x_2^{\inf},\, x_1^{\inf}x_2^{\sup},\, x_1^{\sup}x_2^{\inf},\, x_1^{\sup}x_2^{\sup}\}$ and $y^{\sup} = \max\{x_1^{\inf}x_2^{\inf},\, x_1^{\inf}x_2^{\sup},\, x_1^{\sup}x_2^{\inf},\, x_1^{\sup}x_2^{\sup}\}$.
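As an illustration only, the interval propagation rules of Example 10.13 may be sketched in a few lines of Python; the function names and the numerical bounds below are arbitrary and serve demonstration purposes only:

```python
# A minimal sketch of interval propagation (cf. Example 10.13); illustrative values only.
from itertools import product

def add(a, b):
    # [a_inf + b_inf, a_sup + b_sup]
    return (a[0] + b[0], a[1] + b[1])

def sub(a, b):
    # [a_inf - b_sup, a_sup - b_inf]
    return (a[0] - b[1], a[1] - b[0])

def mul(a, b):
    # min and max over the four products of the interval end points
    products = [p * q for p, q in product(a, b)]
    return (min(products), max(products))

x1 = (1.9, 2.1)    # x1 known to within +/- 0.1
x2 = (-0.3, 0.4)   # x2 known to within an asymmetric interval
print(add(x1, x2))  # (1.6, 2.5)
print(sub(x1, x2))  # (1.5, 2.4)
print(mul(x1, x2))  # (-0.63, 0.84)
```

Even this toy implementation hints at why interval arithmetic becomes cumbersome in practice: when the same variable occurs several times in the evaluated expression, the dependency between its occurrences is lost and the resulting interval is overestimated.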

The complexity of operations on intervals grows very quickly with the dimensionality of the vector $\mathbf{x}$, especially if the computational implementation of the function requires the multiple use of its elements. Thus, a relatively simple concept of interval arithmetic faces numerous difficulties in practice, and therefore it is still losing the competition with a more traditional probabilistic approach, based on the assumption that $\mathbf{x}$ is a realisation of a random vector $\boldsymbol{x}$ whose probability density function is $p_{\boldsymbol{x}}(\mathbf{x})$, and consequently $y$ is a realisation of a random variable $\boldsymbol{y}$ whose probability density function $p_{\boldsymbol{y}}(y)$ results from the propagation of the distribution of $\boldsymbol{x}$ through the function $f$.

Example 10.14: Let's assume that $y = x_1 + x_2$, where $x_1$ and $x_2$ are realisations of two independent random variables following the normal distributions whose respective probability density functions are:

$$p_1(x) = \frac{1}{\sqrt{2\pi}\,\sigma_1} \exp\!\left[-\frac{1}{2}\left(\frac{x-\mu_1}{\sigma_1}\right)^2\right] \quad \text{and} \quad p_2(x) = \frac{1}{\sqrt{2\pi}\,\sigma_2} \exp\!\left[-\frac{1}{2}\left(\frac{x-\mu_2}{\sigma_2}\right)^2\right]$$

i.e.:

$$p_{\boldsymbol{x}}(\mathbf{x}) = p_1(x_1)\,p_2(x_2) = \frac{1}{2\pi\sigma_1\sigma_2} \exp\!\left[-\frac{1}{2}\left(\frac{x_1-\mu_1}{\sigma_1}\right)^2 - \frac{1}{2}\left(\frac{x_2-\mu_2}{\sigma_2}\right)^2\right] \quad \text{with} \quad \mathbf{x} \equiv \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$$

It may be shown that, in this case, $y$ is a realisation of the random variable $\boldsymbol{y}$ following the normal distribution with the expected value $\mu = \mu_1 + \mu_2$ and the standard deviation $\sigma = \sqrt{\sigma_1^2 + \sigma_2^2}$:

$$p_{\boldsymbol{y}}(x) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left[-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2\right]$$
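The analytic result of Example 10.14 may be checked numerically, e.g. with a short sampling experiment in Python; all numerical values below (means, standard deviations, sample size) are arbitrary illustrative choices:

```python
# A minimal sampling check of Example 10.14: the sum of two independent normal
# variables is normal with mu = mu1 + mu2 and sigma = sqrt(sigma1^2 + sigma2^2).
import numpy as np

rng = np.random.default_rng(seed=1)
mu1, sigma1 = 2.0, 0.3    # illustrative parameters of the first variable
mu2, sigma2 = -1.0, 0.4   # illustrative parameters of the second variable

y = rng.normal(mu1, sigma1, 1_000_000) + rng.normal(mu2, sigma2, 1_000_000)
print(y.mean())        # close to mu1 + mu2 = 1.0
print(y.std(ddof=1))   # close to sqrt(0.3**2 + 0.4**2) = 0.5
```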


The complexity of distribution propagation grows very quickly with the dimensionality of the vector $\mathbf{x}$, and with the heterogeneity of the distribution of $\boldsymbol{x}$. Fortunately, in routine measurement practice, there is no need to have complete information on the distribution of $\boldsymbol{y}$: an estimate $\hat{\mu}$ of its expected value $\mu$ and an estimate $\hat{\sigma}$ of its standard deviation $\sigma$ suffice. Moreover, they can be determined under a simplifying assumption that the function $f$ may be linearised in the small vicinity of the expected value $\boldsymbol{\mu} \equiv [\mu_1 \; \mu_2 \; \ldots]^T$ because, as a rule, the evaluation of uncertainty is useful in research practice if this uncertainty is small enough. Such a linearisation consists in the development of the function $f$ into the following Taylor series:

$$y = f(\mathbf{x}) = f(\boldsymbol{\mu}) + \left.\frac{\partial f(\mathbf{x})}{\partial x_1}\right|_{\mathbf{x}=\boldsymbol{\mu}} (x_1 - \mu_1) + \left.\frac{\partial f(\mathbf{x})}{\partial x_2}\right|_{\mathbf{x}=\boldsymbol{\mu}} (x_2 - \mu_2) + \ldots$$

Using this approximate representation of $f(\mathbf{x})$, one may obtain the following estimates of the basic parameters of the distribution of $\boldsymbol{y}$:

$$\hat{\mu} = f(\boldsymbol{\mu}) \quad \text{and} \quad \hat{\sigma} = \sqrt{\left[\frac{\partial f(\mathbf{x})}{\partial x_1} \; \frac{\partial f(\mathbf{x})}{\partial x_2} \; \ldots\right]_{\mathbf{x}=\boldsymbol{\mu}} \cdot \mathrm{Cov}[\boldsymbol{x}] \cdot \left[\frac{\partial f(\mathbf{x})}{\partial x_1} \; \frac{\partial f(\mathbf{x})}{\partial x_2} \; \ldots\right]^T_{\mathbf{x}=\boldsymbol{\mu}}}$$

where $\mathrm{Cov}[\boldsymbol{x}]$ is the covariance matrix of the vector $\boldsymbol{x}$. If the components of this vector are statistically independent, then $\mathrm{Cov}[\boldsymbol{x}]$ is diagonal, and the above formula takes on the form:

$$\hat{\sigma} = \sqrt{\left(\left.\frac{\partial f(\mathbf{x})}{\partial x_1}\right|_{\mathbf{x}=\boldsymbol{\mu}}\right)^2 \sigma_1^2 + \left(\left.\frac{\partial f(\mathbf{x})}{\partial x_2}\right|_{\mathbf{x}=\boldsymbol{\mu}}\right)^2 \sigma_2^2 + \ldots}$$

The main limitation of the linearisation-based method of moments propagation is its inability to disclose a systematic deviation of $\mu$ with respect to $f(\boldsymbol{\mu})$, implied by the nonlinearity of the function $f$ in the vicinity of $\boldsymbol{\mu}$. If this deviation is important in the considered metrological case, then the Taylor series should be extended with the second-order (quadratic and bilinear) terms, viz.:

$$\frac{1}{2} \left.\frac{\partial^2 f(\mathbf{x})}{\partial x_i^2}\right|_{\mathbf{x}=\boldsymbol{\mu}} (x_i - \mu_i)^2 \quad \text{and} \quad \left.\frac{\partial^2 f(\mathbf{x})}{\partial x_i \partial x_j}\right|_{\mathbf{x}=\boldsymbol{\mu}} (x_i - \mu_i)(x_j - \mu_j) \quad \text{for } i = 1, 2, \ldots, \; j \ne i$$

Each such term will contribute to the estimate $\hat{\mu}$ with the component:

$$\frac{1}{2} \left.\frac{\partial^2 f(\mathbf{x})}{\partial x_i^2}\right|_{\mathbf{x}=\boldsymbol{\mu}} \sigma_i^2 \quad \text{or} \quad \left.\frac{\partial^2 f(\mathbf{x})}{\partial x_i \partial x_j}\right|_{\mathbf{x}=\boldsymbol{\mu}} c_{i,j}$$

where $c_{i,j}$ is the element of the covariance matrix $\mathrm{Cov}[\boldsymbol{x}]$ located in its $i$th row and $j$th column. In this way, however, the initially simple procedure gets cumbersome in practice.


A reliable alternative, the Monte Carlo method, is suggested by Supplement 1 to the GUM. The main practical advantage of the Monte Carlo method is the simplicity of its software implementation. A key element of this method is the use of pseudorandom vectors $\mathbf{x}(1), \mathbf{x}(2), \ldots, \mathbf{x}(R)$ simulating realisations of the vector $\boldsymbol{x}$. Once this sequence is generated, the estimates of the basic parameters of the distribution of $\boldsymbol{y}$ may be obtained in the following way:

$$\hat{\mu}(R) = \frac{1}{R} \sum_{r=1}^{R} f(\mathbf{x}(r)) \quad \text{and} \quad \hat{\sigma}(R) = \sqrt{\frac{1}{R-1} \sum_{r=1}^{R} \left[f(\mathbf{x}(r)) - \hat{\mu}(R)\right]^2}$$

Both estimates are consistent, i.e.:

$$\hat{\mu}(R) \xrightarrow[R \to \infty]{} \mu \quad \text{and} \quad \hat{\sigma}(R) \xrightarrow[R \to \infty]{} \sigma$$

It should be noted that the number of repetitions R, sufficient for guaranteeing reliable estimates of $\mu$ and $\sigma$, may be very large; so, the whole Monte Carlo exercise may require considerable computing power even in the case of a relatively simple function $f$, especially if this exercise is to be repeated for a considerable number of different values of the vector $\boldsymbol{\mu}$.

Example 10.15: In a 1998–2001 series of the author's articles48, the Monte Carlo method was used as a reference for the evaluation of measurement uncertainty obtained by means of three other methods, viz. the algebra of intervals, the algebra of fuzzy variables and the algebra of expected values and standard deviations. The test examples included recursive and non-recursive filters applied in spectrophotometers for improving their resolution. The computation time, necessary for obtaining comparable results, was ca. 500 times larger for the Monte Carlo method than for the propagation of expected values and standard deviations.
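The two propagation schemes discussed above – the linearised (first-order) propagation of expected values and standard deviations, and the Monte Carlo propagation of distributions – may be compared on a toy model; the following Python sketch uses the function $f(x_1, x_2) = x_1 x_2 / (x_1 + x_2)$ (e.g. two resistors connected in parallel) and arbitrarily chosen parameters, and is intended only to illustrate the pattern of computation:

```python
# A sketch comparing first-order (linearised) uncertainty propagation with the
# Monte Carlo method of GUM Supplement 1; all numbers are illustrative.
import numpy as np

def f(x1, x2):
    return x1 * x2 / (x1 + x2)

mu = np.array([100.0, 220.0])     # expected values of x1 and x2
sigma = np.array([0.5, 1.0])      # standard uncertainties of x1 and x2 (independent)

# First-order propagation: mu_hat = f(mu), sigma_hat from the partial derivatives.
d1 = (mu[1] / (mu[0] + mu[1])) ** 2    # df/dx1 at x = mu
d2 = (mu[0] / (mu[0] + mu[1])) ** 2    # df/dx2 at x = mu
mu_lin = f(*mu)
sigma_lin = np.sqrt((d1 * sigma[0]) ** 2 + (d2 * sigma[1]) ** 2)

# Monte Carlo propagation: R pseudorandom realisations of x propagated through f.
rng = np.random.default_rng(seed=0)
R = 1_000_000
x1 = rng.normal(mu[0], sigma[0], R)
x2 = rng.normal(mu[1], sigma[1], R)
y = f(x1, x2)
mu_mc, sigma_mc = y.mean(), y.std(ddof=1)

print(mu_lin, sigma_lin)   # linearisation-based estimates
print(mu_mc, sigma_mc)     # Monte Carlo estimates
```

For this weakly nonlinear function and small input uncertainties, the two sets of estimates practically coincide; noticeable differences appear only when the nonlinearity of $f$ is significant within a few standard deviations of the input quantities.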

The following indicators are recommended by the GUM to be used for the expression of uncertainty:
– standard uncertainty, defined as “uncertainty of the result of a measurement expressed as a standard deviation”49;

48 T. Szafrański, R. Z. Morawski, “Accuracy of Measurand Reconstruction – Comparison of Four Methods of Analysis”, Proc. IEEE Instrumentation and Measurement Technology Conference – IMTC98 (St. Paul, MN, USA, May 18–21, 1998), pp. 32–35; T. Szafrański, R. Z. Morawski, “Dealing with Overestimation of Uncertainty in Algorithms of Measurand Reconstruction”, Proc. XVth IMEKO World Congress (Osaka, Japan, 13–18 June 1999); T. Szafrański, P. Sprzeczak, R. Z. Morawski, “An Algorithm for Spectrometric Data Correction with Built-in Estimation of Uncertainty”, XVIth IMEKO World Congress (Vienna, Austria, September 25–28, 2000); T. Szafrański, R. Z. Morawski, “Efficient Estimation of Uncertainty in Weakly Non-linear Algorithms for Measurand Reconstruction”, Measurement, 2001, Vol. 29, pp. 77–85. 49 Joint Committee for Guides in Metrology (BIPM+IEC+IFCC+ILAC+ISO+IUPAC+IUPAP+OIML), Evaluation of measurement data – Guide to the expression of uncertainty in measurement, item #2.3.1.


– combined standard uncertainty, defined as “standard uncertainty of the result of a measurement when that result is obtained from the values of a number of other quantities, equal to the positive square root of a sum of terms, the terms being the variances or covariances of these other quantities weighted according to how the measurement result varies with changes in these quantities”50;
– expanded uncertainty, defined as “quantity defining an interval about the result of a measurement that may be expected to encompass a large fraction of the distribution of values that could reasonably be attributed to the measurand”51.

The term large fraction in the above-quoted definition of expanded uncertainty is to be understood as the coverage probability or the level of confidence of the interval. This level cannot be strictly determined without information about the probability distribution characterising the measurement result. Such information is available in the following cases:
– The measurements may be repeated (cæteris paribus) R times, where the value of R is large enough to make possible an adequate approximation of the probability density function of $\boldsymbol{y}$.
– The probability density function of $\boldsymbol{y}$ may be determined numerically (e.g. by means of the Monte Carlo method) because the probability distribution of the vector $\boldsymbol{x}$ is known.

Example 10.16: Let's assume that $y = x_1 + x_2$, where $x_1$ and $x_2$ are corrupted with the discretisation errors $\Delta x_1$ and $\Delta x_2$, resulting from the analogue-to-digital conversion52 performed in a measuring system under analysis. Because of their origin, those errors are non-random (fully predictable), but for the convenience of uncertainty analysis, they may be modelled with random variables, $\Delta \boldsymbol{x}_1$ and $\Delta \boldsymbol{x}_2$, following the uniform probability distributions. The probability density function characterising those distributions has the form:

$$p_x(\Delta x) = \begin{cases} 1/q & \text{for } \Delta x \in \left[-\frac{q}{2}, +\frac{q}{2}\right] \\ 0 & \text{otherwise} \end{cases}$$

where $q$ is the quantisation step. Since the absolute error in $y$ caused by the errors in $x_1$ and $x_2$ is $\Delta y = \Delta x_1 + \Delta x_2$, the probability density function of the random variable $\Delta \boldsymbol{y}$, modelling this error, can be calculated according to the formula:

$$p_y(\Delta y) = p_x(\Delta x_1) * p_x(\Delta x_2) = \begin{cases} \dfrac{1}{q} - \dfrac{\left|\Delta y\right|}{q^2} & \text{for } \Delta y \in [-q, q] \\ 0 & \text{otherwise} \end{cases}$$

In this case, the standard uncertainty of $x_1$ and $x_2$ is $\sigma_x = q/\sqrt{12}$, the standard uncertainty of $y$ is $\sigma_y = q/\sqrt{6}$, the expanded uncertainty of $x_1$ and $x_2$ is $u_x = q$ and the expanded uncertainty of $y$ is $u_y = 2q$.
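The standard uncertainties derived in Example 10.16 may be verified with a short simulation of the quantisation errors; the value of the quantisation step q used below is arbitrary:

```python
# A sketch verifying Example 10.16: uniform quantisation errors with step q have
# standard uncertainty q/sqrt(12); the error of y = x1 + x2 has standard
# uncertainty q/sqrt(6), and its distribution is triangular on [-q, q].
import numpy as np

q = 0.01                                   # illustrative quantisation step
rng = np.random.default_rng(seed=2)
dx1 = rng.uniform(-q / 2, q / 2, 1_000_000)
dx2 = rng.uniform(-q / 2, q / 2, 1_000_000)
dy = dx1 + dx2

print(dx1.std(ddof=1), q / np.sqrt(12))    # both close to 0.00289
print(dy.std(ddof=1), q / np.sqrt(6))      # both close to 0.00408
print(np.abs(dy).max() <= q)               # True: the support of dy is [-q, q]
```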

50 ibid., item #2.3.4. 51 ibid., item #2.3.5. 52 cf. the Wikipedia article “Analog-to-Digital Converter” available at https://en.wikipedia.org/wiki/Analog-to-digital_converter [2018-07-22].


The evaluation of expanded uncertainty in the above example is free of ambiguity; the 100 % coverage of the probable errors is possible because both probability density functions have finite supports. In the majority of practically interesting cases, those functions have infinite supports, and the 100 % coverage is getting completely non-informative.

Example 10.17: The normal distribution is used most frequently in routine uncertainty analysis because, according to the central limit theorem53, it is a limit of the distribution of a linear combination of independent, appropriately normalised random variables – regardless of their individual distributions – when their number is growing unlimitedly. Let's consider the sum of random variables:

$$\Delta \boldsymbol{y} = \Delta \boldsymbol{x}_1 + \Delta \boldsymbol{x}_2 + \ldots + \Delta \boldsymbol{x}_N$$

under the assumption that all components are statistically independent and follow the same uniform distribution whose probability density function $p_x(\Delta x)$ is defined in the previous example. The probability density function of $\Delta \boldsymbol{y}$ may be calculated according to the formula:

$$p_y(\Delta y) = p_x(\Delta x_1) * p_x(\Delta x_2) * \ldots * p_x(\Delta x_N)$$

The result of this operation for $q = 1$ and several values of $N$ is shown in Figure 10.1. For $N = 20$, the maximum discrepancy between $p_y(\Delta y)$ and the probability density function of the normal distribution (the Gauss function) is ca. $2.3 \times 10^{-3}$, and the root-mean-square discrepancy is ca. $3.9 \times 10^{-4}$. Thus, the Gauss function is an adequate tool for the approximation of probability density functions of random variables representative of the complex effects of multiple causes.

[Figure 10.1: Dependence of $p_y(\Delta y) = p_x(\Delta x_1) * p_x(\Delta x_2) * \ldots * p_x(\Delta x_N)$ for $q = 1$ on the number of convolved densities; curves for N = 2, 6, 10 and 20 are plotted against $\Delta y$.]
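The convolutions illustrated in Figure 10.1 may be reproduced numerically, e.g. by repeated discrete convolution of the uniform density on a grid; the grid step and the value of N used below are arbitrary choices:

```python
# A sketch reproducing the convolutions of Example 10.17: the density of the sum
# of N independent uniform errors (step q = 1) approaches the Gauss function.
import numpy as np

q, h, N = 1.0, 0.001, 20                   # quantisation step, grid step, number of terms
x = np.arange(-0.5 * q, 0.5 * q + h, h)
p_uniform = np.ones_like(x) / q            # uniform density on [-q/2, q/2]

p = p_uniform.copy()
for _ in range(N - 1):
    p = np.convolve(p, p_uniform) * h      # density of the sum of one more term

grid = np.arange(p.size) * h               # support of the resulting density
grid = grid - grid.mean()                  # centre it at zero
sigma = np.sqrt(N * q**2 / 12.0)           # standard deviation of the sum
gauss = np.exp(-0.5 * (grid / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)

print(np.max(np.abs(p - gauss)))           # maximum discrepancy (small for N = 20)
print(np.sqrt(np.mean((p - gauss) ** 2)))  # root-mean-square discrepancy
```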

53 cf. the Wikipedia article “Central Limit Theorem” available at https://en.wikipedia.org/wiki/Central_limit_theorem [2018-07-22].


If the probability density function of $\boldsymbol{y}$ has infinite support, as in the case of the normal distribution, then the expanded uncertainty is, as a rule, expressed in the form:

$$u_y = \hat{\mu}_y - \breve{y} + k\,\hat{\sigma}_y$$

where $\hat{\mu}_y$ is an estimate of the expected value of $\boldsymbol{y}$, $\hat{\sigma}_y$ is an estimate of the standard deviation of $\boldsymbol{y}$, $\breve{y}$ is the reference value of $y$, and $k$ is the coverage coefficient, most frequently assuming the value of 3 or 5; the estimates $\hat{\mu}_y$ and $\hat{\sigma}_y$ may be obtained directly on the basis of the results of repeated measurements of $y$, or via the propagation of the distribution of $\boldsymbol{x}$ (or of the parameters of this distribution) through the function $f$. The greater the value of $k$, the larger the number of possible results of measurement covered by the interval $\left[\breve{y} - u_y,\; \breve{y} + u_y\right]$ and, at the same time, the less realistic the evaluation of uncertainty.

Example 10.18: If the random variable $\boldsymbol{y}$ follows the normal distribution with the parameters $\hat{\mu}_y = \breve{y}$ and $\hat{\sigma}_y$, then the interval $\left[\breve{y} - u_y,\; \breve{y} + u_y\right]$ covers:
– 68.26895 % for k = 1,
– 95.44997 % for k = 2,
– 99.73002 % for k = 3,
– 99.99366 % for k = 4,
– 99.99994 % for k = 5,
of the possible realisations of $\boldsymbol{y}$.
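The coverage percentages listed in Example 10.18 follow directly from the cumulative distribution function of the normal distribution; they may be recomputed, for instance, as follows:

```python
# A sketch recomputing the coverage of the interval [y - k*sigma, y + k*sigma]
# for a normally distributed measurement result (cf. Example 10.18).
from math import erf, sqrt

for k in range(1, 6):
    coverage = erf(k / sqrt(2))            # P(|y - mu| <= k*sigma)
    print(f"k = {k}: {100 * coverage:.5f} %")
```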

10.6.2 Advanced methods for evaluation of measurement uncertainty

The methods for the evaluation of uncertainty by means of statistical tools are tagged in the GUM with the label type A evaluation54. Those methods are not applicable if neither statistical information (data) characterising the vector $\boldsymbol{x}$ nor statistical information (data) characterising the variable $\boldsymbol{y}$ is available. Then the evaluation of the uncertainty of $y$ may only be based on non-statistical information, such as previously acquired measurement data, experience with or general knowledge of the behaviour and properties of relevant materials and instruments, manufacturer's specifications, data provided in calibration and other certificates, or uncertainties assigned to reference data taken from handbooks55.

54 Joint Committee for Guides in Metrology (BIPM+IEC+IFCC+ILAC+ISO+IUPAC+IUPAP +OIML), Evaluation of measurement data – Guide to the expression of uncertainty in measurement, item #2.3.2. 55 ibid., item #4.3.1.


The methods for the evaluation of uncertainty by means of non-statistical tools are tagged in the GUM with the label type B evaluation56. The introduction to the evaluation of measurement uncertainty, presented in this section, is not intended to replace the GUM or the systematic approach based on the meta-model of measurement described in Chapter 6. It has been impossible to specify here all the patterns of uncertainty evaluation which are discussed on the 300 pages of the GUM with its supplements; a fortiori, no such expectation should be raised with respect to this modest subsection. It seems, however, worth signalling several special issues which are frequently overlooked in the routine evaluation of measurement uncertainty, viz.:
– unintended influence of measurement tools on a system under measurement;
– excessive amplification of errors during inversion of a mathematical model of conversion;
– overlooked temporal aspect of the measurement process;
– unexpected consequences of the correlation of uncertainty components;
– unexpected consequences of excessive errors (called outliers).

The meta-model of measurement, described in Chapter 6, includes a mathematical model of the dependence of the raw result of measurement on the measurand – a model which represents measurement information processing not only in the measuring system but also in the system under measurement and in the interface between both systems. In this way, the unintended influence of the measuring system on the system under measurement is taken into account to the extent this model is adequate and accurate. In a more traditional approach, however, both systems are modelled separately, and consequently this influence is ignored.

Example 10.19: When measuring the electromotive force $e$ of an electrical battery by means of a commercially available voltmeter, one is inclined to consider the reading from that voltmeter as the final result of measurement. There are two tacit assumptions behind this practice, viz.:
– the internal resistance of the battery $R_B$ is zero;
– the internal resistance of the voltmeter $R_V$ is infinite.
A more “realistic” model of this situation is shown in Figure 10.2. It follows from this figure that the reading $\tilde{e}$ is subject to an absolute error:

$$\Delta\tilde{e} = \tilde{e} - e = \frac{R_V}{R_B + R_V}\,e - e = -\frac{R_B}{R_B + R_V}\,e$$

which tends to zero if $R_B \to 0$ or $R_V \to \infty$. For $R_B \ll R_V$, it depends on the ratio $s \equiv R_B / R_V$ in an approximately quadratic way: $\Delta\tilde{e} \cong -s(1-s)\,e$.

56 ibid., item #2.3.3.

[Figure 10.2: Measurement of the electromotive force: a circuit model – a battery with electromotive force $e$ and internal resistance $R_B$ connected to a voltmeter with internal resistance $R_V$.]

The uncertainty of the final result of measurement may be reduced if the model from Figure 10.2 is applied for computing a more accurate estimate of the electromotive force, viz.:

$$\hat{e} = \frac{\tilde{R}_B + \tilde{R}_V}{\tilde{R}_V}\,\tilde{e}$$

where $\tilde{R}_B$ and $\tilde{R}_V$ are approximate values of the internal resistances $R_B$ and $R_V$, obtained from the technical specifications of the battery and of the voltmeter, respectively. This estimate is also uncertain due to the uncertainties of $\tilde{R}_B$ and $\tilde{R}_V$; the absolute error of $\hat{e}$ may be assessed as follows:

$$\left|\hat{e} - e\right| \le \frac{\tilde{R}_B}{\tilde{R}_V}\left(\frac{\Delta\tilde{R}_B}{\tilde{R}_B} + \frac{\Delta\tilde{R}_V}{\tilde{R}_V}\right)\tilde{e}$$

where $\Delta\tilde{R}_B$ and $\Delta\tilde{R}_V$ are the expanded uncertainties of $\tilde{R}_B$ and $\tilde{R}_V$, respectively. This means that, if $R_B \ll R_V$ and the relative uncertainties of $\tilde{R}_B$ and $\tilde{R}_V$ are of order $10^{-k}$, the uncertainty of $\hat{e}$ is ca. $k$ orders of magnitude smaller than that of $\tilde{e}$.
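A numerical illustration of Example 10.19 may look as follows; the resistance and voltage values are, of course, arbitrary assumptions made only for the sake of the simulation:

```python
# A sketch of Example 10.19: correcting the voltmeter reading for the loading
# effect, using approximate resistance values taken from the specifications.
e = 1.50                      # electromotive force of the battery [V] (assumed, for simulation)
R_B, R_V = 5.0, 1.0e4         # actual internal resistances [ohm]
R_Bs, R_Vs = 5.5, 0.95e4      # approximate values from the specifications
dR_Bs, dR_Vs = 0.5, 0.05e4    # expanded uncertainties of the approximate values

e_read = e * R_V / (R_B + R_V)                                    # reading affected by loading
e_corr = e_read * (R_Bs + R_Vs) / R_Vs                            # corrected estimate of e
bound = (R_Bs / R_Vs) * (dR_Bs / R_Bs + dR_Vs / R_Vs) * e_read    # assessed error bound

print(abs(e_read - e))   # ca. 7.5e-4 V: error of the raw reading
print(abs(e_corr - e))   # ca. 1.2e-4 V: error of the corrected estimate
print(bound)             # ca. 1.2e-4 V: the bound; the actual error stays within it here
```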

According to the meta-model of measurement described in Chapter 6, any measurement includes the inversion of the mathematical model of the relationship between the raw result of measurement and the measurand (and possibly influence quantities). Since this relationship is causal, the adequate model must contain integration, and the inverse model – differentiation. The latter operation is risky from a numerical point of view since it may excessively amplify the errors in the data, as demonstrated in Example 7.8. Thus, special attention should be paid to such cases, and appropriate means (regularisation) applied. However, the temporal aspect of causal processes in a measuring system should be taken into account even if the inversion of the mathematical model of conversion does not provoke excessive amplification of uncertainty. The time delay of the effect with respect to the cause may limit the speed at which measurements can be repeated. If this limitation is ignored, an additional component of uncertainty may appear – the component due to transient processes in the tandem of the system under measurement and the measuring system. Of course, the problem may be solved not only by slowing down the measurement process but also by modelling dynamic processes in the measurement channel.


Example 10.20: A conveyor belt is an appliance used for moving boxes inside a factory. If equipped with appropriate sensors, it makes possible an approximate determination of the mass of each box falling on it. Waiting for a kind of steady state of the belt would be a loss of production time; thus, that mass is estimated on the basis of signals whose time derivative is far from zero. Let's assume that the response of a sensor to the boxes falling on the belt can be modelled with the function:

$$y(t) = \sum_{k=1}^{K} m_k \left[1 - \exp\!\left(-\frac{t - t_k}{T}\right)\right] \mathbf{1}(t - t_k)$$

where $m_k$ are the masses of the boxes falling at the consecutive time instants $t_k$, and $\mathbf{1}(t)$ is a step function assuming the value of 0 for $t < 0$ and the value of 1 for $t \ge 0$. Then both the values of $m_k$ and the values of $t_k$ can be estimated on the basis of a series of samples of the signal $y(t)$, acquired at intervals smaller than or comparable with the time constant $T$.57
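The estimation described in Example 10.20 may be illustrated with a small simulation; for simplicity, the falling instants $t_k$ and the time constant $T$ are assumed to be known here, and only the masses $m_k$ are estimated by linear least squares – a hypothetical, deliberately simplified setting:

```python
# A sketch of Example 10.20: estimating the masses m_k of boxes falling on a
# conveyor belt from noisy samples of the sensor response, assuming t_k and T
# are known (a deliberate simplification).
import numpy as np

T = 0.5                                   # time constant of the sensor [s]
t_k = np.array([0.2, 1.0, 1.7])           # falling instants [s]
m_true = np.array([4.0, 2.5, 6.0])        # "true" masses [kg], used only to generate data

t = np.arange(0.0, 3.0, 0.01)             # sampling instants
# Response to a unit mass falling at t_k: (1 - exp(-(t - t_k)/T)) * step(t - t_k).
basis = np.where(t[:, None] >= t_k, 1.0 - np.exp(-(t[:, None] - t_k) / T), 0.0)

rng = np.random.default_rng(seed=3)
y = basis @ m_true + rng.normal(0.0, 0.05, t.size)   # noisy samples of y(t)

m_est, *_ = np.linalg.lstsq(basis, y, rcond=None)    # least-squares estimates of m_k
print(m_est)                                         # close to [4.0, 2.5, 6.0]
```

In a realistic setting, the instants $t_k$ would also have to be estimated, which turns the problem into a nonlinear one.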

On the whole, the routine users of measuring systems are aware of the fact that correlations among the components of the random vector $\boldsymbol{x}$ may modify the standard deviation of the random variable $\boldsymbol{y}$: the linear part of the Taylor series of the function $f(\mathbf{x})$ suffices to take this effect into account. Less common is the awareness of the influence of the correlation among the components of the random vector $\boldsymbol{x}$ on the expected value of $\boldsymbol{y}$: the second-order part of the Taylor series of the function $f(\mathbf{x})$ is needed to take this effect into account.

Example 10.21: Let's assume that $f(\mathbf{x}) = x_1 \exp(-x_2)$, where $[x_1 \; x_2]^T$ is the random vector with the expected value $[\mu_1 \; \mu_2]^T$ and the covariance matrix:

$$\boldsymbol{\Sigma}_x = \begin{bmatrix} \sigma_1^2 & c_{12} \\ c_{12} & \sigma_2^2 \end{bmatrix}$$

The corresponding Taylor series has the following form:

$$y = \mu_1 \exp(-\mu_2) + \exp(-\mu_2)(x_1 - \mu_1) - \mu_1 \exp(-\mu_2)(x_2 - \mu_2) + \frac{1}{2} \cdot 0 \cdot (x_1 - \mu_1)^2 + \frac{1}{2}\,\mu_1 \exp(-\mu_2)(x_2 - \mu_2)^2 - \exp(-\mu_2)(x_1 - \mu_1)(x_2 - \mu_2) + \ldots$$

It may be shown that the expected value of $\boldsymbol{y} = f(\boldsymbol{x})$ can be approximated by:

$$\mu_y \cong \left(\mu_1 + \frac{1}{2}\,\mu_1\sigma_2^2 - c_{12}\right) \exp(-\mu_2)$$

where the first term in the brackets is due to the first-order approximation of the expected value of $\boldsymbol{y}$, the second – to the second-order component originating in the nonlinearity of the function $f$ with respect to $x_2$, and the third term – to the second-order component originating in the correlation between $x_1$ and $x_2$.
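The second-order correction derived in Example 10.21 may be checked by simulation; the following Python sketch uses arbitrarily chosen parameters of the input distribution:

```python
# A sketch checking Example 10.21: for y = x1*exp(-x2), the expected value of y
# deviates from f(mu1, mu2) by approximately (0.5*mu1*sigma2**2 - c12)*exp(-mu2).
import numpy as np

mu = np.array([2.0, 0.5])                 # illustrative expected values
sigma1, sigma2, c12 = 0.2, 0.3, 0.03      # c12 = covariance between x1 and x2
cov = np.array([[sigma1**2, c12],
                [c12, sigma2**2]])

rng = np.random.default_rng(seed=4)
x = rng.multivariate_normal(mu, cov, size=2_000_000)
y = x[:, 0] * np.exp(-x[:, 1])

first_order = mu[0] * np.exp(-mu[1])                                 # f(mu)
second_order = (mu[0] + 0.5 * mu[0] * sigma2**2 - c12) * np.exp(-mu[1])
print(y.mean())        # close to the second-order approximation (ca. 1.249)
print(first_order)     # ca. 1.213: visibly biased with respect to the simulated mean
print(second_order)    # ca. 1.249
```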

57 inspired by: J. Stöckle, “Improvement of the Dynamical Properties of Checkweighers Using Digital Signal Processing”, Proc. Int. IMEKO TC7 Symposium on Measurement & Estimation (Bressanone, Italy, May 8–12, 1984), pp. 103–108.


An experimenter should always be prepared for some extraordinary perturbations in the measuring system or in the system under measurement. They may be caused, e.g., by a momentary disappearance of supply, by an electrical discharge in the atmosphere, by vibrations of the floor (provoked by vehicles passing near the laboratory) or by human errors. Such perturbations are, as a rule, reflected in the results of measurement as outliers, which may have a destructive impact on those results. They are especially dangerous in the times of “big data” when enormous volumes of measurement data are acquired automatically in the absence of detectors of outliers. The interest in algorithms for the elimination of outliers may be traced back to the American mathematician Benjamin Peirce (1809–1880)58 who proposed the first criterion for this purpose in 185259. Numerous solutions to this problem have been found since then, recently within the field of research on data mining and machine learning. The purpose of this short paragraph on outliers is only to stress the methodological importance of this topic. The practical advice in this respect – an overview, taxonomy and description of the methods for dealing with outliers – may be found in the 2017 edition of the book Outlier Analysis by Charu C. Aggarwal60.
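A very simple (and by no means universal) way of flagging outliers – based on the median and the median absolute deviation rather than on Peirce's criterion – is sketched below; the threshold of 3.5 is a commonly used but still arbitrary choice:

```python
# A minimal sketch of outlier flagging based on the median absolute deviation (MAD);
# this is only one of many approaches surveyed, e.g., in Aggarwal's Outlier Analysis.
import numpy as np

def flag_outliers(data, threshold=3.5):
    data = np.asarray(data, dtype=float)
    median = np.median(data)
    mad = np.median(np.abs(data - median))
    if mad == 0.0:
        return np.zeros(data.shape, dtype=bool)
    # 0.6745 rescales the MAD to the standard deviation of a normal distribution.
    score = 0.6745 * np.abs(data - median) / mad
    return score > threshold

readings = [10.02, 9.98, 10.01, 10.03, 9.99, 12.7, 10.00]   # one gross error
print(flag_outliers(readings))   # flags only the reading 12.7
```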

58 the father of Charles S. Peirce. 59 cf. the Wikipedia article “Peirce's Criterion” available at https://en.wikipedia.org/wiki/Peirce%27s_criterion [2018-07-26]. 60 C. C. Aggarwal, Outlier Analysis, Springer, Cham (Switzerland) 2017.

10.7 Mitigation of uncertainty of scientific knowledge

The overview of the sources of qualitative and quantitative uncertainty of scientific knowledge, provided in Sections 10.1–10.5, may seem to support an attitude of anarchistic scepticism with respect to scientific knowledge. This is not, however, the intention of this chapter. Its aim is to give a realistic rather than idealised picture of the scientific method – to base research practice on more reliable foundations. The uncertainty of scientific knowledge is inevitable, but it may be assessed and, to a certain extent, mitigated. Over the centuries of science development, some good practices have been worked out which serve this purpose. The scientific method is like democracy: it is deficient in many ways, but nothing better has been proposed up to now. A piece of knowledge which is not accompanied by any informative characteristics of its uncertainty cannot be considered scientific. In the case of the results of measurement, these are indicators, such as standard or expanded uncertainties, determined in a way described in the previous section. Similar indicators, computed on the basis of the results of measurement, serve this purpose in the case of mathematical modelling. Less evident is the methodology for expressing qualitative uncertainty related to the observations made, the applied patterns of logical reasoning, the paradigms of confirmation etc.


Nothing better has been invented up to now than a detailed and precise characterisation (description) of research work (in particular experimental work), exhaustive enough to enable other researchers to repeat the reported investigations. Thorough evaluation of uncertainty, and publication of the results of this evaluation, is a sine qua non condition of the effective functioning of the mechanism of intersubjective melioration of the body of scientific knowledge: the elimination of its falsified or very uncertain pieces, and the incorporation of new ones when the level of their uncertainty is sufficiently low.

The principle of intersubjective verifiability, one of the core principles of scientific investigation, requires of researchers a capacity to accurately communicate scientific ideas among themselves, to discuss them and to put them under the scrutiny of empirical tests. Each individual researcher has specific personal and professional features which may influence his research practice at each stage: the selection of a subject of study, the choice of research methodology, the generation of hypotheses, the planning of experiments etc. Therefore, not only the process of investigation but also its results, and the conclusions derived from those results by different researchers, may differ. The rational implementation of the principle of intersubjective verifiability enables the research community to confront different methodologies of investigation, eliminate outliers (both of a quantitative and a qualitative nature) and correct minor errors, purify the conceptual basis and the language of description, and reach a common understanding of the piece of reality under study. This principle implies the need for publication media and for a system of organised criticism; it generates numerous requirements towards the members of the research community, such as the replicability of research results, and the readiness to share the data or to follow the rules of rational discussion (to be presented in Subsection 17.2.1). The pieces of knowledge, agreed by a relevant community of researchers, are expected to be less uncertain than their prototypes put forward by any individual researcher (according to the probabilistic regularity illustrated with Example 10.10).

There are a number of circumstances that foster the mitigation of uncertainty in scientific knowledge: the intellectual and moral qualities of researchers, methodological plurality, the availability of empirical data, the abundance of meta-analyses and review articles, and the use of research standards. Among the intellectual qualities of researchers, independence of thinking and criticism seem to be the most important in the context of uncertainty mitigation. The first of them is a sine qua non condition of the effective implementation of the principle of intersubjective verifiability, while the second – of efficiency in tracing errors and fallacies in the body of knowledge. Much self-criticism is needed to avoid the publication of immature results, and to be able to look for difficult tests of new hypotheses rather than for a quick completion of the contracted research task. The latter also requires moral courage and honesty to resist the temptation of an easy formal success measured by the number of publications and the financial volume of research grants received. Since ethical issues have an enormous impact on the uncertainty of scientific knowledge, they will be covered in considerable detail in the following chapters.


Methodological plurality means the use of possibly many alternative methods of investigation to get more insight into the phenomenon under study or to decrease the uncertainty of measurement results. An example could be the use of various analytical techniques (such as chromatography, infrared spectrophotometry, mass spectrometry, etc.) for the evaluation of the chemical composition of food products. Some of those techniques are more sensitive to organic compounds, others – to non-organic compounds; some have a better resolution among the members of a family of compounds specific to the analysed product etc. The availability of diversified empirical data in abundance is always an advantage, especially where the reduction of measurement uncertainty is concerned; roughly speaking, the standard uncertainty diminishes with the square root of the number of data points used for the estimation of the final result of measurement.

A meta-analysis is a study aimed at combining the results of multiple scientific studies of the same subject, which have been carried out by various research groups or various research institutions. As a rule, statistical aggregation and analysis of the evidence acquired by all those groups or institutions is the core of any meta-analysis, especially in the domain of medical sciences and pharmacology, where the results of several clinical trials of a medical treatment are analysed with the aim to better understand how this treatment works. The systematic use of meta-analyses contributes not only to the mitigation of knowledge uncertainty but also to the enhancement of research methodology by comparing the efficacy of various techniques of experimentation and identifying patterns of their application.

Example 10.22: Meta-analyses may reveal some bias in the results of research (i.e. a systematic component of their uncertainty) associated with the source of its financing. The authors of a 2003 article, published in a renowned medical journal61, tried to answer the question whether the funding of drug studies by the pharmaceutical industry is associated with outcomes that are favourable to the funder. On the basis of the meta-analysis of the results of 30 studies, they concluded that the results of research funded by drug companies were less likely to be published and more likely to be favourable for the sponsor than the results of research funded by other institutions. This conclusion has been confirmed in a report on the outcome of a broader 2017 meta-analytical study62.

In some disciplines, e.g., in physics or engineering-related sciences, meta-analyses are less popular than in medical sciences and pharmacology; the practice of publishing review articles plays similar role there. It should be noted that, when performing a meta-analysis or equivalent comparative study, an investigator must undertake methodological decisions which can affect the results; e.g., he must decide how to

61 J. Lexchin, L. A. Bero, B. Djulbegovic, O. Clark, “Pharmaceutical Industry Sponsorship and Research Outcome and Quality: Systematic Review”, British Medical Journal (BMJ), 2003, Vol. 326, pp. 1167–1170. 62 A. Lundh, J. Lexchin, B. Mintzes, J. B. Schroll, L. Bero, “Industry Sponsorship and Research Outcome”, Cochrane Database of Systematic Reviews, 2017, No. 2, http://dx.doi.org/10.1002/14651858. MR000033.pub3 [2017-09-27].

218

10 Uncertainty of scientific knowledge

It should be noted that, when performing a meta-analysis or an equivalent comparative study, an investigator must undertake methodological decisions which can affect the results; e.g. he must decide how to search for studies to be compared, or how to deal with incomplete or incomparable data. This is a source of “second-order” uncertainty.

Each discipline has its specific vocabulary, specific conventions concerning experimental minima and specific norms of professional conduct. On top of these standards, there are some common norms associated with the use of common means, such as the tools of information technologies, including measurement technology. Where the latter are concerned, the VIM and the GUM are good examples. The conscious, precise and consistent use of all those standards has an enormous impact on the communication within scientific communities, and therefore on the effectiveness of the intersubjective verification (for a realist) or validation (for an instrumentalist) of scientific knowledge.

Since the mitigation of uncertainty of scientific knowledge is an important dimension (one of the criteria) of the progress of science, the institutional solutions which constrain research activities aimed at the reduction of uncertainty should be considered institutional enemies of this progress. The following among them are most common:
– socio-political pressure on the immediate application of knowledge, especially on its commercialisation (especially dangerous in biomedicine and pharmacology),
– bureaucratic preference for formal criteria (such as bibliometric indicators in the procedures for evaluation of research achievements, research institutions and individual researchers),
– underestimation or total negligence of critical activity in those procedures,
– overestimation of the economic impact of research results in those procedures,
– exclusion of substance-related discussion in the processing of grant applications,
– bureaucratic waste of time of the most qualified research personnel,
– reluctance of editors to publish the results of replicated studies,
– reluctance of editors to publish negative research results.
Where the latter point is concerned, there are interesting counterexamples of journals which, by the definition of their mission, publish negative results.

Example 10.23: Both the Swiss and the American pharmaceutical industry are under international pressure to publish all the results of studies of their products. Despite legal regulations in force, many results are hardly available to the general public. There is, however, a special journal for negative results, viz. the Journal of Pharmaceutical Negative Results (www.pnrjournal.com) – an official publication of Scibiolmed.Org63. This is a peer-reviewed journal developed to publish articles on original and innovative pharmaceutical research concluded with negative findings. Scibiolmed.Org is also the publisher of the Journal of Contradicting Results in Science (http://www.jcrsci.org) – a peer-reviewed journal developed to publish theoretical and empirical articles that report findings contradicting the established knowledge.

63 This is a non-profit private organisation dedicated to research in the field of medicine, biology and other sciences; its long-term objective is to provide high-quality, accurate and required information to enhance research and innovative concepts in scholarly publishing (more information at www.scibiolmed.org).

11 Basic concepts of Western ethics

In December 1937, Edwin G. Conklin (1863–1952), the retiring President of the American Association for the Advancement of Science, delivered an address in which he stressed the importance of ethics for science: “But there is an aspect of religion with which science is vitally concerned, viz., ethics, and this has been well called ‘the religion of science’”1. In this way, he also indicated that the role of ethics in research practice may be similar to that of religion in everyday human experience. Regardless of the attitude with respect to religion, any researcher of the twenty-first century is in a desperate need of ethical guidance, for at least two reasons: because of the atrocities which happened in the twentieth century as the aftermath of various abuses of scientific achievements in social practice, and because of the growing power of technoscience, which may endanger the future of Homo sapiens if not subjected to ethical harnessing. Ethical issues related to technoscientific research are treated by numerous handbooks entirely devoted to them2. They appear, however, also in the handbooks devoted to philosophy of science or research methodology as a natural complement of methodological issues3. In this book, ethical and methodological skills are treated as equally important components of the researcher's competences. The sequence of chapters, starting here, does not provide a systematic presentation of any coherent system of research ethics, derived from a monistic approach to ethics, such as Kantism, utilitarianism or the ethics of virtue. An attempt to follow any of them would be doomed to failure for several reasons:
– The practical purpose of ethical reflection is to solve moral problems that are getting increasingly common in research practice due to its growing complexity.
– There is no ethical system that would serve this task in a way not provoking doubts or controversies of a logical or moral nature.

1 E. G. Conklin, “Science and Ethics”, Nature, 1938, No. 3559, pp. 101–105. 2 e.g. E. Agazzi, Right, Wrong and Science: The Ethical Dimensions of the Techno-scientific Enterprise, Rodopi B.V., Amsterdam – New York 2004 (Ed. C. Dilworth); D. Koepsell, Scientific Integrity and Research Ethics, Springer, Cham (Switzerland) 2017. B. Macfarlane, Researching with Integrity: The Ethics of Academic Enquiry, Taylor & Francis, New York – London 2009; F. L. Macrina, Scientific Integrity: Text and Cases in Responsible Conduct of Research, ASM Press, Washington D.C. 2014 (4th edition); H. Mustajoki, A. Mustajoki, A New Approach to Research Ethics: Using Guided Dialogue to Strengthen Research Communities, Routledge, Abingdon (UK) 2017; A. E. Shamoo, D. B. Resnik, Responsible Conduct of Research, Oxford University Press, New York 2015 (3rd edition); C. N. Stewart, Research Ethics for Scientists: A Companion for Students, Wiley & Sons, Chichester (UK) 2011. 3 e.g. P. Pruzan, Research Methodology: The Aims, Practices and Ethics of Science, Springer, Cham (Switzerland) 2016, Chapter 10. https://doi.org/10.1515/9783110584066-011


– Each of the ethical systems we can read about in the textbooks of the history of philosophy is emphasising some important aspects of thinking about morality, and therefore can enrich the repertoire of intellectual tools potentially useful for solving moral problems. None of them, however, provides a set of norms that can be directly applied to specific situations.
– Due to the globalisation of science, research teams are getting more and more interdisciplinary, more and more international and intercultural. Consequently, when reflecting on moral issues, we are obliged to take into account – today more than in the past – the differences in the hierarchies of moral values of the team members and the diversity of their moral convictions. The ethics of dialogue and the ethics of discourse are, therefore, getting more and more important as tools allowing for morally justified decisions despite this diversity.

11.1 Motivations and objectives

Systematic moral reflection has been accompanying the peoples of Europe for at least 25 centuries, but its importance for science was not noted until the nineteenth century, and the first handbooks on research ethics appeared only in the middle of the twentieth century. The interest in the idea of responsible research grew significantly at the turn of the twentieth and twenty-first centuries, especially in the USA, where the Office of Research Integrity4 began funding a number of research projects and conferences devoted to this topic, and the National Science Foundation5 set new requirements in this regard – requirements which contributed to the introduction of appropriate changes by the institutions of higher education in the curricula and in the principles of conducting scientific research, as well as to the adoption of new publishing rules by the editors of scientific journals. This was a constructive response to the growing wave of spectacular cases of abuse – mainly in the field of nanotechnology, stem cell research and clinical trials – reported by the media. This does not mean, of course, that ethical issues have emerged ex nihilo in the scientific community only recently. As early as in 1830, Charles Babbage (1791–1871) – the English mathematician who invented a programmable mechanical computer – wrote a book on morally dubious practices in British science6. But only 150 years later, the American science journalist William J. Broad (*1951) and the British writer Nicholas Wade (*1942) published the book Betrayers of the Truth: Fraud and Deceit in the Halls of Science7, which systematically and uncompromisingly exposed the moral weaknesses of the scientific community.

4 The website of this institution is located at https://ori.hhs.gov/ [2018-05-25]. 5 The website of this institution is located at https://www.nsf.gov/ [2018-05-25]. 6 C. Babbage, Reflections on the Decline of Science in England, and on Some of Its Causes, B. Fellowes, London 1830. 7 W. J. Broad, N. J. Wade, Betrayers of the Truth: Fraud and Deceit in the Halls of Science, Simon & Schuster, New York 1982.


They put under scrutiny the morally doubtful behaviours of many important figures of modern science, including Isaac Newton (1642–1726), John Dalton, Gregor J. Mendel (1822–1884), Louis Pasteur and Robert A. Millikan (1868–1953). Most importantly, however, they showed in bright light the ethical issues of twentieth-century science. After the introduction of their book into the circulation of technoscientific information, publications stigmatising the perpetrators of morally reprehensible research practices started to appear more and more frequently – also in the form of textbooks.

Example 11.1: The authors of the book Responsible Conduct of Research8 cite a long list of scientific scandals that have shaken American public opinion during the recent half-century. Here are three of them:
– In 1974, William Summerlin admitted to fabricating data in skin transplant experiments he was conducting at the Sloan Kettering Institute in New York9.
– In 1984, Robert C. Gallo and Mikulas Popovic, from the National Cancer Institute (USA), published the first article on the human immunodeficiency virus (HIV). Luc A. Montagnier from the Pasteur Institute (France), with whom Robert C. Gallo previously collaborated, accused him of stealing the virus. The case has never been fully clarified because of the lack of due diligence in running laboratory journals10.
– In 2006, a group of Ph.D. students at the University of Wisconsin accused their doctoral adviser, the geneticist Elizabeth Goodwin, of committing falsification of data; the official investigation did confirm the allegations. This incident had a negative effect on the careers of several Ph.D. students: three of them quit the university, one student moved to another university, and two others started their graduate education anew11.
The summaries of 29 more recent cases dated 2015–2018 may be found on the website of the US Office of Research Integrity12.

It is easier to understand the importance of ethics for scientific research when recognising its role in the history of Europe. The American political scientist Charles A. Murray (*1943), in his book Human Accomplishment13, has mentioned ethics among the human meta-inventions that have decided about the worldwide supremacy of Western civilisation. Similarly, the French political philosopher Philippe Nemo (*1949), in his book Qu'est-ce que l'Occident?14, has pointed out five “miracles” that contributed to this supremacy: the invention of the city-state, science and school; the invention of legislation, private property and personal rights; the Judeo-Christian ethical revolution; the papal “revolution” of the eleventh and twelfth centuries; and the development of liberal democracy. So, again, ethics has appeared among the contributing factors.

8 A. E. Shamoo, D. B. Resnik, Responsible Conduct of Research, 2015 (3rd edition). 9 ibid., p. 29. 10 ibid., p. 31. 11 ibid., p. 34. 12 located at https://ori.hhs.gov/case_summary [2018-05-25]. 13 C. Murray, Human Accomplishment, 2003. 14 P. Nemo, Qu’est-ce que l’Occident ?, Presses universitaires de France, Paris 2013.


The historiosophical observations made by Charles A. Murray and Philippe Nemo suggest a hypothesis that, as much as the appreciation of moral values contributed to the rise of Western civilisation, their depreciation may bring its destruction. It seems that at the turn of the twentieth and twenty-first centuries serious concerns emerged. The postmodern crisis of traditional values has dramatically revealed itself on a global scale in the sphere of everyday life, in the media and politics, in industry and business, as well as in technoscientific activities. Pandemic violations of the principle “so much freedom, as much responsibility” intensify this crisis not only in the sphere of ideas, as it appeared already in the second half of the twentieth century, but also in everyday life: the abuse of political freedom has led to the crisis of democratic systems, the abuse of economic freedom – to the crisis of the free-market economy, and the abuse of the freedom of scientific research – to the crisis of the institutions of science. The crisis of the concept of truth, being the central value of science, has manifested itself not only in philosophy (cf. Section 2.4) but also in everyday life. The negative tendencies which have appeared in the media of social communication are of key importance for the persistence of the crisis situation in all the domains of social practice, including scientific research. Attempts to change this situation by adopting codes of professional ethics and introducing ethics courses into university curricula, although important, have turned out to be insufficient. It is worth noting the following phenomena that have emerged in the sphere of the academic ethos:
– the disappearance of everyday ethical reflection on research practice and the tendency to replace it with formal regulations or declarations expressed in the form of professional codes of conduct;
– the manipulative usage of ethical categories in the struggle for the limited material resources of science and for professional positions and honours.
The first phenomenon is associated with the increasing performance pressures the research milieus are exposed to, and with a widespread conviction of those milieus about the uselessness of philosophical and methodological competences in science; the second – with the penetration of the methods of business, marketing and political struggle into the sphere of technoscience. Those circumstances do not remain without impact on young researchers. The internet fora, where they freely express their opinions, reveal the level of their frustration and the level of dislike or even contempt for the elder colleagues who are considered to be responsible for this situation. The experience of the last half-century – related to the massification and globalisation of technoscience, and to its entanglement in complex alliances with the sphere of industry, business and politics – seems to indicate the urgent need for strengthening the links between research practice and ethics. Technoscience is in a state of a multifaceted crisis threatening its identity and mission.


However, the crisis should not be perceived as a threat only, but also as an opportunity15: the threat of a collapse of the system of traditional values of science, and a chance for its renaissance, created by the increased interest in moral issues already observed in some technoscientific milieus. The “commodification” of science seems to be the most important cause of the moral crisis of science, since it is re-orienting the entire scientific community from the pursuit of scientific excellence towards the pursuit of material benefits, entailing the actual privatisation of knowledge, which – according to the best tradition of science – should be a universal good. The second important cause of the crisis is the dissemination of postmodern scepticism with respect to the scientific method, quite frequently based on a shallow interpretation of the twentieth-century philosophical debates over this method (outlined in Chapters 4–10). It is also noteworthy that in the last half-century the inventory of research problems has grown faster than the resources that the economically developed countries could allocate for their investigation. At the same time, the organisational complexity of research institutions and their networks has grown rapidly: the importance of international and interdisciplinary research teams, as well as of industry–academe collaboration, has increased enormously; the widespread use of information technologies has enabled the efficient coordination of their functioning. All those changes have generated new sources of conflicts of values and reduced the time available for the methodological and ethical reflection aimed at their intellectual accommodation16. The lack of such reflection has been the source of many erroneous decisions regarding the planning and funding of research and, consequently, the indirect cause of the waste of material resources (always limited, even in the richest countries of the world) that may be allocated for research.

Research ethics refers to the same moral values we appreciate in everyday life, i.e. to honesty, truthfulness, respect for the person, openness, objectivity, etc.; that is why Chapter 12 is devoted to an overview of the Western tradition of general ethics. Good research practices, based on those values, are directly related to the good practices of everyday life – but they are not obvious to a layman or a novice in science, especially where the standards specific to a selected scientific discipline are concerned17. Chapters 13–19 of this book are devoted to the research practices considered to be good from both the ethical and the methodological points of view.

15 in accordance with the Greek origin of the word: krisis = “turning point in a disease”. 16 cf. On Being a Scientist: Responsible Conduct in Research, Committee on Science, Engineering, and Public Policy (appointed by National Academy of Sciences, National Academy of Engineering, and Institute of Medicine), Washington D.C. 2009, http://www.nap.edu/catalog/12192.html [2010-05-12], p. XI. 17 cf. ibid., p. 3.

Those tools should facilitate the analysis of situations of moral significance, aimed at providing justified answers to such questions as:
– “To do or not to do research work for the military industry?”
– “To what extent is it acceptable to replace expensive real-world data with low-cost synthetic data?”
– “What is the morally acceptable scope of testing such products as planes, whose failure may cost hundreds of human lives?”
– “What is a reasonable number of articles to be published as the final result of a research project?”
– “What is wrong with underestimating the costs of a project in a grant application, motivated by the risk of not being financed?”

The methodological approach to research ethics, developed in the following chapters, can be summarised as follows:
– The twenty-first-century man, especially the man of science, strives to master the ability to accurately predict the effects of human actions, and to use this information for the evaluation of those actions in terms of good and evil, i.e. in terms of ethics. There are, however, numerous obstacles on the way towards that goal, related to the uncertainty of prediction and to the choice of evaluation criteria.
– If these obstacles are of minor importance, then ethical reflection can refer to the ethics of consequences; otherwise, it must be based on other grounds. In some cases, this can be the ethics of duty, viz. when past experience has enabled the identification of relevant rights and duties. In some other cases, especially in very new fields of research, the decisions must be based on intuition, preferably on the intuition of morally “verified” persons sticking to the values which are, more or less universally, recognised as virtues.
– In all three cases, two factors – the uncertainty of prediction and the heuristic nature of evaluation criteria – are of key importance. Philosophy of science, metrology, the theory of decision-making and expert knowledge are important sources of objective information useful for ethical considerations. Nevertheless, there remains a margin of subjectivity that must undergo the process of objectivisation (intersubjectivisation) through the discourse among the members of a decision-making body, meeting the ethical requirements of rational discussion (as described in Section 17.2).
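
Read as a rough decision procedure, the three points above say: apply the ethics of consequences when predictions are reliable, fall back on the ethics of duty when relevant rules have been identified, and defer to the intuition of a competent decision-making body otherwise. The Python sketch below is only a toy illustration of that reading, not a method proposed in this book; the Outcome class, the uncertainty threshold and the duty-rule format are assumptions introduced solely for the example.

# A toy sketch of the three-step approach summarised above; all names and
# numbers are illustrative assumptions, not prescriptions taken from the book.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Outcome:
    description: str
    moral_value: float    # benefit (+) or harm (-) on an agreed, heuristic scale
    probability: float    # estimated probability that this outcome occurs
    uncertainty: float    # how unreliable the two estimates above are (0 = certain)

def evaluate_action(outcomes: List[Outcome],
                    duty_rules: List[Callable[[List[Outcome]], bool]],
                    max_uncertainty: float = 0.2) -> str:
    # Step 1: if predictions are reliable enough, apply the ethics of consequences.
    if all(o.uncertainty <= max_uncertainty for o in outcomes):
        balance = sum(o.moral_value * o.probability for o in outcomes)
        if balance > 0:
            return "acceptable (favourable balance of consequences)"
        return "unacceptable (unfavourable balance of consequences)"
    # Step 2: otherwise fall back on the ethics of duty, if relevant rules are known.
    if duty_rules:
        if all(rule(outcomes) for rule in duty_rules):
            return "acceptable (all identified duties respected)"
        return "unacceptable (an identified duty would be violated)"
    # Step 3: otherwise defer to the intuition of a trusted decision-making body.
    return "defer to deliberation meeting the requirements of rational discussion"

# Hypothetical project: predictions too uncertain for step 1, so it is judged in
# step 2 by a single, made-up duty rule ("do not accept foreseeable serious harm").
project = [Outcome("new measurement method", +3.0, 0.7, 0.5),
           Outcome("dual-use risk", -5.0, 0.1, 0.6)]

def no_serious_harm(outs):
    return all(o.moral_value >= 0 or o.probability < 0.05 for o in outs)

print(evaluate_action(project, [no_serious_harm]))
# -> unacceptable (an identified duty would be violated)

In practice, of course, neither the moral values of outcomes nor the threshold separating reliable from unreliable predictions can be fixed so mechanically; that is precisely why the margin of subjectivity mentioned above has to be dealt with through rational discussion.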

11.2 Elements of metaethics and general ethics

The core concepts of ethics are so fundamental that defining them in terms of more basic concepts is highly problematic if not impossible. At the same time, they are very abstract; so, they should not remain without explanation. This difficulty may
be alleviated by referring to the contexts of their usage, showing their logical relation to other (possibly also very abstract) concepts and to their specific exemplifications. The British moral philosopher Bernard A. O. Williams (1929–2003) proposed to use the label sparse concepts for such concepts as good and bad because they are so general that they can be filled with meaningful content only if associated with a specific philosophical system18. The list of good things and actions compiled by a person whose purpose of life is mundane happiness will differ from such a list arranged by a person oriented towards eternal life. For this reason, basic ethical concepts are introduced not only by means of intensional definitions (cf. Section 2.3) but also by the contexts relating them to a network of other concepts and by encyclopaedic outlines of their historical development.

11.2.1 Good, value, morality

The concept of good was originally associated by the ancient Greeks with material property and material benefits. Only with the development of philosophy did it acquire a more abstract meaning, including all that is useful, valuable or beneficial, and thus also prosperity, success, happiness and virtue. The nature of good is a subject of philosophical disputes concerning both the way of its existence and its permanence, as well as the possibility of its definition. The British analytical philosopher George E. Moore (1873–1958) argued, for example, that we learn what is good in an intuitive way – just as we learn what a certain colour is – and that we can formulate many meaningful statements about the good, none of which is its definition19. The source of the intuition which enables us to quickly and directly recognise good is most probably located in our phylogenetic and ontogenetic memory – i.e. in the memory of evolutionary experience inherited from our ancestors – and in the memory of socialisation experience accumulated in various brain structures from the moment of birth. Our awareness of the content of both kinds of memory is limited, but it plays an important role in our lives, manifesting itself in instinctive behaviours and in cognitive processes we usually call intuitive.

Already in the seventeenth century, a tendency appeared to limit the denotation of the concept of good, in its abstract (metaphysical) meaning, to moral good. In the nineteenth century, this concept began, mainly under the influence of German philosophers, to be replaced with the concept of value. The latter is usually applied to what is or should be appreciated, what is the object of our desires or aspirations, what satisfies our needs.

18 J. Baggini, P. S. Fosl, The Philosopher’s Toolkit: A Compendium of Philosophical Concepts and Methods, Blackwell, Oxford (UK) 2010, Section 4.17.
19 J. Baggini, P. S. Fosl, The Ethics Toolkit: A Compendium of Ethical Concepts and Methods, Blackwell, Oxford (UK) 2007, Section 1.13.

Although it is spontaneously understood in the context of the spoken language, its meaning in various spheres of life (economy, politics, ethics, psychology, science, etc.) is very fluid – up to the limits of ambiguity. Philosophers argue about how we recognise values and how they exist.

Example 11.2: Advocates of the philosophical orientation called naturalism20 assume that there is only material reality (nature), and that spiritual reality does not exist or may be reduced to nature. In the context of naturalism, which explains all phenomena by the action of the laws of nature, a value is an immanent feature of an appreciated object (e.g. beauty is a feature of a masterpiece of art) – a feature similar to its empirical qualities (such as colour). The opponents of naturalism, pointing out that the methods of conducting research in social sciences and humanities are different from the methods applied in natural sciences, deny the possibility of explaining values purely by the laws of nature.

Morality21 is a collection of views and beliefs of a community which influence interpersonal relationships. Because it is difficult to point out an unequivocal criterion according to which some social and consciousness-related phenomena are included in the category of moral phenomena, it is difficult to define morality strictly. The above definition can, however, be extended with the proposition that the basic elements of morality are the injunctions and prohibitions, personal patterns and ideals that govern the relationship between two individuals, between an individual and a social group, or between two social groups. Morality is part of a set of more-or-less conscious principles and inspirations which enable a person to distinguish between good and evil, or between right and wrong.

Moral norms are imperative components of morality and of ethical systems. Two kinds of norms should be distinguished: categorical norms and hypothetical norms. The former are given without justification and are binding unconditionally; the norm “do not kill” is an example. The hypothetical norms are justified by the purpose they are to serve, and are binding if the purpose is accepted (e.g. the norm “do not lie unless you have to do it for your safety”); more precisely, those norms “order or permit or prohibit a certain mode of action to some subject(s) on some occasion(s) assuming that the occasion(s) satisfy certain conditions – in addition to providing an opportunity for performing the action”22. Moral judgments are formulated on the basis of moral norms; judgments and norms are equivalent in the sense that what is good is morally required, and what is morally required is good.

20 cf. the Wikipedia article “Naturalism (philosophy)” available at https://en.wikipedia.org/wiki/Naturalism_(philosophy) [2018-05-27]. 21 from Latin moralis = “custom”. 22 G. H. von Wright, “IX: Deontic Logic: Hypothetical Norms”, [in] Norm and Action – A Logical Inquiry, Routledge & Kegan Paul, 1963, https://www.giffordlectures.org/books/norm-and-action/ixdeontic-logic-hypothetical-norms [2018-07-27].

Moral norms indicate how a person should behave, with the essential moral sanction being the inner conviction of the moral goodness of a conscious and voluntary action performed by that person.

11.2.2 Ethics and metaethics

Ethics23 is a philosophical discipline oriented towards the creation of theories of morality. The relation of ethics to morality is like the relation of physics to physical phenomena: morality is a social phenomenon encompassing human behaviours as perceived from the point of view of good and evil; ethics is a theory or collection of theories modelling that social phenomenon. The purpose of ethical considerations is to find methods for distinguishing good from evil, right from wrong, fair from unfair, responsible from irresponsible, permissible from forbidden. Ethical reflection is focused on a human action as the cause of moral good and evil, and on the perpetrator of this action – the features of his character and his internal states accompanying the action24; so, ethics deals with the eternal issues of duty, honesty, virtue, justice and the good life. In everyday language, the noun “ethics”, followed by a plural verb, is used in the sense of moral principles that govern a person’s behaviour or the conduct of an activity, as in the sentence: “The ethics of experiments with human involvement are much debated.” There are also other, marginal meanings of the notion of ethics, e.g. ethics as an approach to decision-making and ethics as a state of character.

The following distinctions among particular types of ethics provide further explanation of the concept of ethics:
– descriptive ethics (also called ethology) deals with the description and explanation of moral phenomena (facts) in terms of philosophy, while normative ethics deals with philosophical analysis of moral norms and judgments;
– general ethics is developed without indication of any specific field of application, while applied ethics is focused on a selected area of application, in particular – on an area related to a specific profession, such as that of medical doctor, researcher or politician;
– individual ethics refers to the relationship of an individual to another person or to a group of people, while social ethics – to the relationship of a group of people to a single person or to another group of people.

23 from Greek ethos = “custom”.
24 cf. A Guide to Teaching the Ethical Dimensions of Science, Online Ethics Center for Engineering, National Academy of Engineering, 2016, www.onlineethics.org/Education/precollege/scienceclass/sectone.aspx [2018-06-01].

Normative ethics, starting from some assumptions of a philosophical nature (in particular, anthropological assumptions), constructs and justifies systems of moral norms (principles and rules); a key role is played in this process by the choice of the highest good (called in Latin summum bonum) and of the hierarchy of other values. Those assumptions, external with respect to the ethical system “under construction”, are of interest to metaethics – the second, alongside ethics, constitutive component of moral philosophy. Metaethics25 deals with the philosophical analysis of moral norms and judgments, mainly with the logical analysis of the language of ethics and of its methodology. It is involved, in particular, in the logical analysis of the meaning and function of ethical predicates (such as “good” or “right”) and of the logical status of moral truth, moral judgments and their justification.

Example 11.3: Here are some typical questions of a metaethical nature:
– “What does ‘good’ mean?”
– “How do we know that something is good?”
– “How do moral attitudes motivate action?”
– “Are there any objective (absolute) moral values?”
– “What is the source of moral principles and values?”

One of the key metaethical issues is the universality of moral norms: according to the objectivists, there are some moral norms that apply to all people of all times in every situation; according to the relativists, such universal norms do not exist. The way of justifying ethical judgments and norms is another key issue of metaethics. One can distinguish four basic positions in this regard: (metaethical) naturalism, (metaethical) conventionalism, (metaethical) emotivism and (metaethical) intuitionism. Here are their synthetic characteristics and most prominent representatives:
– According to naturalism, moral good can be learned only through empirical means, enabling the identification of objective values related to a certain state of things. This stance is strictly related to the order of nature, understood in an ontological or biological way. Various forms of naturalism were promoted by the British philosopher G. Elizabeth M. Anscombe (1919–2001), the American philosophers Richard N. Boyd and David O. Brink (*1958), as well as by the Australian philosopher Frank C. Jackson (*1943).26
– Conventionalism relates to the doctrine of social contract, created by the English philosopher Thomas Hobbes (1588–1679) and developed by the French writer and philosopher Jean-Jacques Rousseau (1712–1778). According to this doctrine, people agreed to follow moral norms to overcome the original hostility among them. It is worth noting how often terms related to convention appear in publications devoted to metaethics27.

25 from Greek meta = “after” or “beyond” + ethos = “custom”. 26 cf. the Wikipedia article “Ethical naturalism” available at https://en.wikipedia.org/wiki/Ethical_naturalism [2018-05-29].

– Emotivism associates moral judgments and norms with a specific form of moral feeling; it claims that the notion of good is deprived of any deeper content and plays only an instrumental role which involves expressing our emotions and inducing them in our interlocutors. On the grounds of emotivism, ethical judgments are individually subjective; so, they cannot be evaluated in terms of veracity, validity or correctness; ergo, normative ethics is impossible. Emotivism was promoted by the British philosophers Alfred J. Ayer and Richard M. Hare (1919–2002), as well as by the American philosopher Charles L. Stevenson (1908–1979).28
– According to intuitionism, moral good can be learned without the help of reason, thanks to our specific cognitive ability which is moral intuition; ethical claims are not empirically verifiable, and ethical values are objective: they exist independently of us, but we can recognise them through intuition. Intuitionism was promoted by the British philosophers Henry Sidgwick (1838–1900) and George E. Moore, as well as by the American philosopher Robert Audi (*1941).29

Today, the above-listed basic positions appear in their pure forms rather exceptionally, but the rationale behind each of them has its place in everyday ethical discourse: its participants differ in the hierarchy of importance they attribute to each basic position, rather than in excluding three of them in favour of the fourth. In science, for example, many principles of research ethics may be viewed as conventions fostering research cooperation; on the other hand, many representatives of empirical disciplines are inclined to extend their way of reasoning about natural phenomena to moral issues.

Metaethics is a theoretical reflection on ethics itself, as well as on its foundations and methodology. It is sometimes treated, however, as a chapter of ethics, or even as the only proper content of ethics. It seems that such approaches to metaethics can generate logical difficulties resulting from incorporating a metalanguage into a language; so, they will not be referred to in this book.

27 cf. for example, the articles “Metaethics” in Internet Encyclopedia of Philosophy, available at http://www.iep.utm.edu/metaethi/ [2018-05-29], and in Stanford Encyclopedia of Philosophy, available at https://plato.stanford.edu/entries/metaethics/ [2018-05-29]. 28 cf. the Wikipedia article “Emotivism” available at https://en.wikipedia.org/wiki/Emotivism [2018-05-29]. 29 cf. the Wikipedia article “Ethical intuitionism” available at https://en.wikipedia.org/wiki/Ethical_intuitionism [2018-05-29].

11.2.3 Monistic approaches to ethics

The diversity of possible answers to metaethical questions has given rise to a great variety of approaches to ethics that have emerged in Western culture over the past 25 centuries (to be overviewed in the next chapter). A closer analysis of this variety makes it possible, however, to note that all those approaches refer “in various proportions” to the following three model types of ethics:
– virtue ethics, which recommends the pursuit of virtue understood as a permanent trait of the person’s character, deserving appreciation30;
– deontological ethics (also called ethics of duty), which recognises an act as ethically good if it fulfils an obligation or a right;
– consequentialist ethics (also called ethics of consequences), which evaluates an act solely on the basis of its effects.

All these model types of ethics have a common denominator: moral values, i.e. the values behind the virtues, the values underlying the duties and the values associated with the consequences of an act. The common denominator of all values is good; striving for it is the raison d’être of ethics. The identification of a common denominator of the model types of ethics has not, however, prevented the dramatic philosophical disputes among their representatives, especially the representatives of deontological and consequentialist versions of ethics. In those disputes, ethical ideas can be evaluated and compared using criteria similar to those used in science, such as rationality, consistency or even utility. The important difference is, however, that in science it is possible to carry out an experiment and to use its result as evidence speaking in favour of or against a hypothesis or theory, while in ethics only “thought experiments” are possible31.

Virtue ethics. A virtue32 is a permanent disposition of the human mind and will towards good behaviour. Since the times of Aristotle, two categories of virtues have been distinguished: intellectual virtues (such as wisdom, prudence or intellectual intuition) and moral virtues (such as justice, moderation or fortitude). The former are mainly related to the intellectual aspects of human activity, while the latter – to the practical or vegetative or appetitive aspects of life. Dozens of virtues of both categories have been identified over the two millennia of Western philosophical tradition33; many of them may be classified as both intellectual and moral virtues.

30 N. Athanassoulis, “Virtue Ethics”, [in] Internet Encyclopedia of Philosophy (Eds. J. Fieser, B. Dowden), http://www.iep.utm.edu/virtue/ [2017-10-27]. 31 B. A. Fuchs, F. L. Macrina, “Ethics and the Scientist”, [in] Scientific Integrity (Ed. F. L. Macrina), ASM Press, Washington D.C. 2005 (3rd edition), pp. 19–37. 32 from Latin vir = “man”. 33 A list of 74 virtues (and their definitions) may be found at https://www.virtuesforlife.com/virtues-list/ [2017-11-06]; other lists are available at https://www.virtuescience.com/virtuelist.html, http://www.52virtues.com/virtues/the-52-virtues.php, https://en.wikiversity.org/wiki/Virtues, http://www.worldlanguageprocess.org/comic%20books/virtues%20list.htm [2018-07-07].

Intellectual virtues are qualities of mind and character that support critical thinking and the pursuit of truth. Among them, intellectual responsibility, intellectual courage and intellectual humility are frequently pointed out as counterparts of three virtues – responsibility, courage and humility – which are understood as moral virtues even without the adjective “moral”. The so-called virtue responsibilists – e.g. the Canadian epistemologist Lorraine Code (*1937) or the American philosopher James A. Montmarquet (1947–2018) – stress the primordiality of acquired character traits, such as intellectual conscientiousness or love of knowledge. By contrast, the so-called virtue reliabilists – e.g. the American epistemologists Ernest Sosa (*1940) and John Greco (*1961) – are inclined to associate intellectual virtues with mental faculties such as memory or intuition. One develops virtues by practising them. Although the notion of duty is not particularly exposed in the theory of virtue, it is difficult to imagine a virtuous man who does not fulfil his moral duties.

The nouns “ethics” and “morality” are derived from Greek and Latin words which mean “custom” or “habit”. Virtue ethics is the closest among the three model types of ethics to this etymological understanding of morality, since habits play a central role in shaping the character of a person. It is through habits that both emotional and intellectual predispositions are formed, strengthened, disciplined and moderated. Preservation of habits is accomplished within a community; such social institutions as the family, the religious group and the school – as well as, to some extent, the state institutions and media – play a significant role in this process. Among the numerous virtues cultivated in Western culture, integrity – understood as the coherence of the person’s beliefs, declared views, decisions and actions – plays a special role in research ethics. It should be noted here, however, that research integrity is a concept whose semantic scope is much broader (cf. Section 13.1).

Deontological ethics. According to deontological34 ethics in its pure form, the basic norm of morality is to fulfil the duty imposed by an acknowledged authority (a god, a spiritual leader, a legislator, etc.). Thus, the only basis for a morally good act is duty – not one’s satisfaction or the prospect of other benefits. This usually implies a convenient limitation of the scope of moral responsibility to completing a number of predefined duties, without the necessity of considering morally difficult matters which fall outside of that scope. On the other hand, this may also facilitate the widespread use of deontological ethics – also by persons without a propensity for deeper moral reflection. These are probably the reasons explaining why the oldest ethical systems are of a deontological nature (cf. Subsection 12.1.1). This limitation of moral responsibility, being a distinctive feature of deontological ethics, prevents a community from imposing unrealistic requirements on its members:

34 from Greek deon = “duty”.

indeed, nobody is able to satisfy all needs and to deal with all kinds of evil35. The systems of moral thinking that refer to the non-trivial rights of a person may be considered as particular cases of deontological ethics, since those rights give rise to moral obligations of other persons, groups of persons or institutions with respect to the beneficiaries of those rights36.

Deontological ethics recognises an act (e.g. the murder of an innocent person) as evil, irrespective of the favourable balance of its consequences, if the primary purpose of that act is evil and if it is completed consciously, freely and voluntarily. This means that the moral evaluation of an act takes into account the intention of the actor – both in the colloquial and in the philosophical sense of this term: as the aim or purpose of acting, and as the determination to act in a certain way37. On the basis of deontological ethics, an act can be regarded as morally good if it has been undertaken with an intention to make it compliant with moral norms.

Deontological ethics is most often criticised for its rigorism which, in the case of a conflict of values, can induce somebody to fulfil a trifling duty when it would be better to voluntarily rescue a human life38; such a situation is possible, for example, on the grounds of Immanuel Kant’s formal ethics (cf. Subsection 12.2.3). Contemporary versions of deontological ethics take this difficulty into account – at least to some extent. It would be unreasonable, as the Polish ethicist Barbara Chyrowicz (*1960) points out, to require people to always do certain things or to prevent them from undertaking some other predefined actions: the dispute between the protagonists of deontological and consequentialist ethics is not about whether the effects of an action should be taken into account in its moral evaluation, but about the identification of the effects we are responsible for, and about whether the effectiveness of an action can be the basis for judging its moral rightness39.

Consequentialist ethics. According to consequentialist ethics, no act is good or evil in itself, but only because of the consequences that it entails, e.g. due to the social benefits resulting from it. The consequentialist criterion of the evaluation of a human action refers to the balance of favourable and unfavourable effects – or, in the language of economics, of profits and losses. Consequentialist ethics appears today most frequently under the name of utilitarianism (cf. Subsection 12.2.4).

35 J. Baggini, P. S. Fosl, The Ethics Toolkit: A Compendium of Ethical Concepts and Methods, 2007, Section 2.4. 36 T. Campbell, “Rights”, [in] The Routledge Companion to Ethics (Ed. J. Skorupski), Routledge, London – New York 2010. 37 from Latin intendere = “to stretch out”. 38 J. Baggini, P. S. Fosl, The Ethics Toolkit: A Compendium of Ethical Concepts and Methods, 2007, Section 2.4. 39 B. Chyrowicz, “Argumentacja we współczesnych debatach bioetycznych”, [in] Etyczne i prawne granice badań naukowych (Ed. W. Galewicz), Wyd. Universitas, Kraków 2009.

On the grounds of consistent utilitarianism, it is permissible to sacrifice one man to save five – for example, a healthy young man whose organs could be offered to five persons in need of urgent transplantation: the death of one man is perceived as a lesser evil than the death of five men. On the grounds of deontological ethics, if any analysis of the consequences of an act is undertaken, then it is limited to the minimisation of losses constrained by the absolute priority of moral obligation: since the violation of any moral norm is excluded, the minimisation of losses can only apply to non-moral evil40.

On the grounds of consistent utilitarianism, human life has no absolute and incomparable value; it is subject to evaluation referring to such criteria as intelligence or physical fitness; so, arguments related to the quality of life can play a decisive role in resolving ethical dilemmas concerning human life. From a utilitarian point of view, the imprisonment of an innocent man is acceptable if it can be useful to the community, e.g. if it can prevent riots. On the basis of deontological ethics, such an action is unacceptable, because it defies the dignity of an innocent person by using this person as a means for attaining a certain goal41.

On the grounds of consistent utilitarianism, the acts of causing death intentionally and unintentionally are subject to the same moral evaluation. This goes against the centuries-old judiciary practice which differentiates those two cases: the perpetrators of road accidents are usually given less punitive verdicts than the perpetrators of premeditated murder. This practice can easily be justified on the basis of deontological ethics, which takes into account the intentions underlying human acts. The justification of this practice on the basis of consequentialist ethics is more difficult; there are, however, some variants of consequentialism which take good intentions into account because, as a rule, they bring more good results than bad intentions42.

11.2.4 Moral responsibility

Responsibility, in its etymological sense, is the ability or willingness to respond to the requirement of a moment or of a circumstance, to the need of another person, or to a moral imperative. We are obliged to provide such a response – in the form of an adequate action – by our status or situation, or by our past or intended actions. There are many philosophical concepts of moral responsibility, which differ primarily in the definition of the class of acts and states we are responsible for43.

40 cf. B. Chyrowicz, O sytuacjach bez wyjścia, Wyd. Znak, Kraków 2008, pp. 333–334. 41 S. Robertson, “Reasons, values, and morality”, [in] The Routledge Companion to Ethics (Ed. J. Skorupski), Routledge, London – New York 2010. 42 J. Baggini, P. S. Fosl, The Ethics Toolkit: A Compendium of Ethical Concepts and Methods, 2007, Section 2.1. 43 R. Clarke, “Freedom and responsibility”, [in] The Routledge Companion to Ethics (Ed. J. Skorupski), Routledge, London – New York 2010.

According to some approaches, we are solely responsible for actions and behaviours that have external effects; according to others – also for our own beliefs, attitudes and emotional states. Responsibility is possible provided we have a certain freedom of action. In this context, the question arises whether any freedom is possible and, consequently, any moral responsibility. As a rule, we are inclined to consider our actions as free; we are even proud of our freedom – of being superior to animals driven merely by instincts. Moral responsibility is not possible on the basis of radical indeterminism, which negates the existence of any cause–effect relations in the Universe: the lack of such relations precludes the possibility of conscious and deliberate influence on external circumstances. On the other hand, the question about the freedom or free will of man, being a sine qua non condition of responsibility, turns out to be no less troublesome in the context of radical determinism, which assumes the causal linkage of all events in the Universe – a philosophical doctrine quite popular among research practitioners.

Example 11.4: The following quotation from the already cited address, delivered in 1937 by Edwin G. Conklin, illustrates well the numerous attempts to make free will compatible with causality:

Freedom does not mean uncaused activity; “the will is not a little deity encapsulated in the brain”, but instead it is the sum of all those physical and psychical processes, including especially reflexes, conditionings and remembered experiences, which act as stimuli in initiating or directing behaviour. The will is not undetermined, uncaused, absolutely free, but is the result of the organization and experience of the organism, and in turn is a factor in determining behaviour. Therefore, we do not need to import from subatomic physics the uncertain principle of uncertainty in order to explain free will. The fact that man can control to a certain extent his own acts as well as phenomena outside himself requires neither a little daemon in the electron nor a big one in the man.44

If our decisions and actions are only elements of infinite chains of causes and effects, we certainly have no choice, and therefore we are not responsible for our actions. Thus, the concept of free will seems to be consistent only with moderate indeterminism, permitting the existence of probabilistic cause–effect relationships, or with soft determinism, which in ethics appears under the name of compatibilism. The British philosopher Simon Blackburn expressed the essence of this view in the following way:

The subject acted freely if she could have done otherwise in the right sense. This means that she would have done otherwise if she had chosen differently and, under the impact of other true and available thoughts or considerations, she would have chosen differently. True and available thoughts and considerations are those that represent her situation accurately, and are ones that she could reasonably be expected to have taken into account.45

44 E. G. Conklin, “Science and Ethics”, 1938. 45 S. Blackburn, Think: A Compelling Introduction to Philosophy, 1999, p. 49.

The problem of free will has a long philosophical tradition, but it has recently attracted more attention due to new findings of human-brain research, aided by computed tomography, which seem to indicate that free will is an illusion46. Since those outcomes should not remain indifferent to the scientific community, it seems worthwhile to provide at least a few remarks on selected attempts to solve this problem, known from the history of philosophy.

An attempt to overcome the difficulties arising from the acceptance of determinism may be attributed to the Stoics (cf. Subsection 12.1.5), who believed that we are free if we recognise and accept necessity. In modern times, this problem was undertaken i.a. by the Dutch philosopher Baruch Spinoza and such German-speaking philosophers as Immanuel Kant, Georg W. F. Hegel (1770–1831) and Karl Marx (1818–1883). Immanuel Kant, in particular, suggested that – since morality is impossible or senseless without free will – it would be reasonable to stick to the hypothesis that people have free will, even if its existence cannot be proved, and despite convincing arguments indicating its absence47. Following this line of reasoning, the American philosopher Thomas Nagel (*1937) argues that even if no free will can be attributed to human beings by an external observer of the world, its existence is undoubtful from the point of view of an acting person48. On the other hand, according to the British philosopher Ted W. Honderich (*1933), human beings are part of the natural world, and everything that happens in this world is determined by earlier causes; so, there is no room for something like free will that could change the course of events in the brain or in the outside world, and consequently it does not make any sense to burden us with moral responsibility for our actions. Is it rational? In this context, referring to the long European tradition stretching from Aristotle to René Descartes, the British philosopher Martin Cohen (*1964) asks: “Not that anyone dares to say, but if there really was no such thing as free will, what then would be that difference between people and animals?”49. In order to be able to continue the ethical considerations in the next chapters, let us assume here, following Immanuel Kant, that free will exists, that we can be morally responsible for our actions, and that, therefore, ethics is a rational philosophical enterprise.

There is a distinction between prospective responsibility (for doing what we are obliged to do in the near and distant future) and retrospective responsibility (for fulfilling or neglecting our duties in the near and distant past, as well as for the ongoing and future consequences of our action or their absence)50.

46 e.g. F. M. Wuketits, Der freie Wille, S. Hirzel Verlag, Berlin 2008. 47 J. Baggini, P. S. Fosl, The Ethics Toolkit: A Compendium of Ethical Concepts and Methods, 2007, Section 5.7. 48 ibid. 49 M. Cohen, 101 Ethical Dilemmas, Routledge, London – New York 2007 (2nd edition), p. 18.

We are responsible for intended actions and for their intended consequences51. It seems evident that we are responsible for the immediate consequences of our actions; it is less evident that we are also responsible for more remote consequences connected with those actions by a causal chain of events. Where or when does our responsibility end? First and foremost, it depends on the degree of effective involvement of other moral entities in the course of these events, and on our ability to anticipate that process: the greater it is, the greater our responsibility52.

Example 11.5: A gives B money as a birthday gift. B buys, with that money, a weapon with which he shoots C; C dies. Is A responsible for C’s death? It depends on whether A could foresee that such an accident might happen.53

We are responsible for the good and bad consequences of our actions. In the case of morally significant consequences, following the (supposed) violation of some moral norms, we must be ready to explain the intentions and circumstances of our actions54. Assessment of the intentions plays a key role in the moral evaluation of an act, especially if this evaluation refers to the principle (or doctrine) of double effect, attributed to Saint Thomas Aquinas but widely accepted today not only by Christian ethics but also by secularist systems of non-consequentialist ethics55. According to this doctrine, we are responsible only for the intentional and predictable effects of our actions, and – when making the moral evaluation of an action – we should take into account not only its purpose but also the fairness and proportionality of the means used for attaining it56.

Example 11.6: A physician who prescribes chemotherapy to a patient suffering from cancer is responsible for the patient’s hair loss. This is a foreseeable but unintended consequence of chemotherapy – negligible in comparison to the good of health recovery.

We are responsible for the foreseen and foreseeable consequences of our actions, where the potential contained in the adjective “foreseeable” requires clarification of what may be reasonably foreseen under given circumstances57.

50 A. Vedder, “Responsibilities for Information on the Internet”, [in] The Handbook of Information and Computer Ethics (Eds. K. E. Himma, H. T. Tavani), Wiley & Sons, New Jersey 2008, pp. 339–359; S. Uniacke, “Responsibility – Intention and consequence”, [in] The Routledge Companion to Ethics (Ed. J. Skorupski), Routledge, London – New York 2010.
51 S. Uniacke, “Responsibility – Intention and consequence”, 2010, p. 596.
52 ibid., p. 599.
53 inspired by ibid., pp. 599–600.
54 ibid., pp. 600–602.
55 ibid., p. 603.
56 B. Chyrowicz, O sytuacjach bez wyjścia, 2008, pp. 308–309.
57 S. Uniacke, “Responsibility – Intention and consequence”, 2010, p. 604.

According to traditional approaches to ethics, moral responsibility can only be individual, not collective. Today, however, this principle must somehow be integrated with the principles of evaluation and management of collective actions, such as the projects concerning the degradation of the natural environment, epidemic poverty in developing countries, or – what deserves special attention here – the accumulation and dissemination of knowledge58. Modern information technologies foster, to a previously unknown extent, the development of new forms of collective endeavours, such as computer-assisted collaborative work, virtual research teams, common databases or scientific videoconferencing. This entails new moral problems related to the responsibility for the design, implementation, maintenance and use of the common infrastructures of those undertakings – problems that do not occur in traditional forms of cooperation59.

We say that a person is individually responsible for an action if one of the following three situations occurs:
– This person has a reason or reasons for completing the action; his intention is to do so (or not to do so); because of the reason or reasons, he has completed (or refrained from completing) the action.
– This person has a certain institutional role, and therefore has an institutionally defined obligation to decide what action should be taken in the subject-matter.
– This person has a certain management power, and therefore has an institutionally determined obligation to decide what actions should be taken in the subject-matter and to assign the appropriate persons (or agencies) to perform them.60

A person is morally responsible for an action (or its result) if he is responsible for it in one of the above-mentioned meanings, and the action is morally significant. If a person is morally responsible for some action, then he deserves a moral reward or penalty (e.g. a praise or a reprimand) for doing so61.

58 S. Miller, “Collective Responsibility and Information and Communication Technology”, [in] Information Technology and Moral Philosophy (Eds. J. van den Hoven, J. Weckert), Cambridge University Press, Cambridge (UK) 2008. 59 ibid., p. 238. 60 ibid., pp. 238–240. 61 ibid., p. 240.

By analogy, the Australian ethicist Seumas Miller (*1953) proposes to speak about the collective responsibility of a group of persons if one of the following three situations occurs:
– This group has a common goal; each member of the group is going to intentionally perform (or not perform) his part of the activity, believing that the other members have the same intention; the goal of the action has (or has not) been achieved.
– This group has a certain institutional role, and is therefore institutionally obliged to collectively decide what action should be taken in the subject-matter.
– This group has a certain management power, and is therefore institutionally obliged to collectively decide what action should be taken in the subject-matter, and to appoint appropriate persons (or agencies) to do so.62

Consequently, Seumas Miller proposes to speak about the collective responsibility of a group of persons for an action (or its result) if this group is responsible for the action in one of the above-mentioned meanings, and the action is morally significant. If the group is morally responsible for an action, then it deserves a moral reward or penalty (e.g. a praise or reprimand) for doing so.63 The idea of collective responsibility applies both to practical and theoretical activities. One can speak about the collective moral responsibility of a scientific team for the cognitive result of its research – for proving the truth or falsehood of a scientific hypothesis – if that hypothesis is epistemically significant64.

11.3 Relation of ethics to humanities, sciences, law and religion

11.3.1 Relation of ethics to other philosophical disciplines

It is important to consider the relationship of ethics to other philosophical disciplines because the ethical analyses of research practices refer, quite often, to issues of major interest for those disciplines – especially to the justification of the scientific method, which is at the heart of epistemology. Less obvious, but not less important, are the relations of research ethics with such philosophical disciplines as ontology, philosophical anthropology and axiology.

Ontology, as already mentioned in Subsection 2.3.2, is the fundamental philosophical discipline dealing with the problems of being, becoming and existence; with the ways of existence of various entities and their properties; with time and space, causality, necessity and possibility. Ontological assumptions and solutions influence, sometimes to a decisive degree, the way of developing epistemology and philosophy of science; they may also imply metaethical assumptions concerning free will.

Epistemology, as already mentioned in Subsection 2.3.2, is a general theory of knowledge: the fundamental philosophical discipline dealing with the origin, nature, taxonomy, validity and limits of human knowledge.

62 ibid., pp. 240–243. 63 ibid., p. 243. 64 cf. ibid., pp. 240–244.

Knowledge is an essential part of ethical reflection, necessary not only to fully understand the meaning of moral norms and their justification but also to anticipate the morally significant consequences of our actions. Those consequences are, by definition, particularly important for consequentialist ethical systems, but – as already emphasised in Subsection 11.2.4 – they cannot be ignored by other ethical approaches. Epistemology provides criteria for the credibility or veracity of information, and therefore the criteria for distinguishing knowledge from unchecked information. Epistemology also provides a methodological basis for the interpretation and justification of the scientific method, which is necessary not only for the creative and rational development of sciences but also for dealing with other philosophical disciplines, including ethics.

Philosophical anthropology65 is a philosophical discipline that deals with the issues concerning human nature, and the place of man in the Universe, in nature and in society. It was only at the beginning of the twentieth century that it was distinguished from general philosophical thought by a group of German philosophers: Helmuth Plessner (1892–1985), Arnold Gehlen (1904–1976) and Max F. Scheler (1874–1928). By attempting to answer the question “What is man?”, various streams of philosophical anthropology refer, on the one hand, to different ontological and epistemological assumptions, and on the other – to diverse research findings of life sciences, sociology, psychology and humanities. The outcomes of philosophical anthropology are, in turn, referred to by systems of consequentialist ethics (which draw mainly on the biologically oriented streams of anthropology) and by systems of deontological ethics (which refer more often to the streams of anthropology oriented towards the humanities).

Axiology66 is a philosophical discipline devoted to the study of values in general, i.e. of both moral and non-moral values. It deals with the problem of the existence of values and their nature, with the problem of the sources of knowledge about values and the ways of their cognition, as well as with the problem of their functioning in society and culture. Theories of values, developed within axiology, are directly applicable in other philosophical disciplines referring to the concept of value – mainly in ethics and aesthetics, the latter being a philosophical discipline dealing with theories of beauty, the experience of beauty and artistic creation.

Philosophy of law67 is a philosophical discipline concerned with the most general reflection on the law understood as a codified collection of principles, rules and regulations established in a community of citizens by the state authority. It attempts to answer not only conceptual questions about the nature of law and normative questions about the relationship between law and morality, but also quite practical questions concerning the justification of various legal systems and institutions.

65 from Greek anthropos = “man” or “human being” + logos = “word” or “thought”. 66 from Greek aksios = “value” or “worth” + logos = “word” or “thought”. 67 Philosophy of law is sometimes identified with its major part called jurisprudence.

The philosophy of law refers, on the one hand, to ontological and epistemological assumptions and, on the other, to the foundations of axiology and ethics. In particular, philosophy of law and ethics reveal common interest in the normative deliberations on the principles of (good) legislation68.

11.3.2 Relation of ethics to sciences

The relation of philosophy – and therefore of philosophical disciplines – to specific sciences is the subject of a long-standing controversy. Very diversified views have been expressed in this respect; they may be positioned between two extreme stances, viz.:
– Philosophy is only a source of fundamental ontological and epistemological assumptions for sciences, but it does not depend on them.
– Philosophy is only a synthesis and generalisation of the findings of sciences, and is therefore completely dependent on them.

In light of the experience of the twentieth and twenty-first centuries, it seems reasonable to recognise the mutual dependence of sciences and philosophy: on the one hand, sciences depend on philosophy in terms of conceptualisation and methodology; on the other hand, philosophy depends on scientific knowledge as a source of inspiration or a basis for the formulation of philosophical propositions. The conviction that scientific practice should be supplemented with elements of philosophy of science and research ethics is the raison d’être of this book. Since methodological aspects have already been covered, to a significant extent, in previous chapters, the considerations of this section can be confined to sketching the logical connections of ethics with selected sciences.

There are, among them, sciences whose strict separation from descriptive ethics is impossible. Thus, the objective of moral psychology, moral sociology, moral history or the ethnology of morality is to describe moral phenomena or facts with special emphasis on their specific aspects – psychological, sociological, historical or ethnic, respectively. Descriptive ethics derives from the findings of those sciences and synthesises them, thus providing a multifaceted picture of moral phenomena at different times and geographic locations, and in different social groups.

The metaethical assumptions underlying every ethical system are derived from knowledge about human nature. For at least 25 centuries of Western civilisation, that knowledge was the subject of philosophical speculation; only in the twentieth century did it become the object of empirical research, which has recently undergone extraordinary intensification due to the emergence of new research tools, such as functional nuclear magnetic resonance (fNMR) imaging69.

68 L. L. Fuller, The Morality of Law, Yale University Press, New Haven – London 1969.

Functional studies of the human brain can, in particular, significantly contribute to explaining the complex interplay of rational and emotional components in the process of resolving moral issues70. The related studies are of an interdisciplinary nature because they are carried out in many areas of science, such as biology and chemistry, developmental psychology and evolutionary psychology, cognitive science, neurology and neurophysiology, as well as economics and sociology, game theory and decision theory71.

The first attempts to identify the biological basis of morality were made in the nineteenth century: the English philosopher and sociologist Herbert Spencer (1820–1903) tried to explain the development of morality in the process of human evolution by mechanisms of adaptation to life in society. Already in 1937, the President of the American Association for the Advancement of Science noticed that “[. . .] it is a mistake to suppose that human intelligence and purpose, social sympathy, co-operation and ethics in general are not also parts of Nature and products of natural evolution”72. Since then, numerous projects on the evolutionary background of ethics have been completed.

Example 11.7: Successful attempts have been made to identify the geographic and cultural space in which such a sophisticated concept as the principle of double effect – limiting our responsibility to the intended effects of our actions – is recognised. The related research has significantly increased the probability that this principle might have some biological basis formed in the process of evolution73.

Contemporary advocates of the evolutionary explanation of morality focus on the study of altruistic behaviours, which are already observed among chimpanzees74. Some of them refer to the findings of game theory, which show that, in the long run, a human community is more stable, better developed and profits more from cooperation if its members follow the principle of reciprocity, i.e. respond with kindness to manifestations of kindness and react with distrust to the absence of kindness75.
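
The game-theoretic findings referred to above come from studies of iterated games of the prisoner’s-dilemma type, in which reciprocal strategies such as “tit for tat” turn out to sustain cooperation. The following Python sketch is only a toy illustration of that result, not material taken from this book; the payoff values (3/3 for mutual cooperation, 1/1 for mutual defection, 5/0 for exploitation) are the usual textbook assumptions.

# A minimal simulation of an iterated prisoner's dilemma, assuming the
# standard textbook payoffs; 'C' = cooperate, 'D' = defect.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(partner_history):
    # Reciprocity: cooperate first, then mirror the partner's previous move.
    return 'C' if not partner_history else partner_history[-1]

def always_defect(partner_history):
    return 'D'

def always_cooperate(partner_history):
    return 'C'

def play(strategy_a, strategy_b, rounds=200):
    history_a, history_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(history_b), strategy_b(history_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

# Mutual reciprocity sustains cooperation, while unconditional kindness is exploited:
print(play(tit_for_tat, tit_for_tat))         # (600, 600)
print(play(always_cooperate, always_defect))  # (0, 1000)
print(play(tit_for_tat, always_defect))       # (199, 204)

Over repeated encounters the reciprocal strategy is never exploited for long, while unconditional kindness is; this is the intuition behind the claim that communities of reciprocators are more stable and profit more from cooperation.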

69 cf. the Wikipedia article “Functional magnetic resonance spectroscopy of the brain” available at https://en.wikipedia.org/wiki/Functional_magnetic_resonance_spectroscopy_of_the_brain [2018-07-14]. 70 S. Blackburn, “Ethics, science, and religion”, [in] The Routledge Companion to Ethics (Ed. J. Skorupski), Routledge, London – New York 2010. 71 ibid., p. 257. 72 E. G. Conklin, “Science and Ethics”, 1938. 73 S. Blackburn, “Ethics, science, and religion”, 2010, pp. 257–258. 74 N. Barber (Ed.), Encyclopedia of Ethics in Science and Technology, Facts On File, New York 2002, pp. 111–112. 75 J. Baggini, P. S. Fosl, The Ethics Toolkit: A Compendium of Ethical Concepts and Methods, 2007, Section 1.8.

Morality has also recently become a subject of interest in neuropsychology; there is even a name for the corresponding area of research: neuroethics76. New research tools, such as the already-mentioned fNMR imaging, enable the study of brain responses to various stimuli, including morally “coloured” ones. Such studies result in brain maps showing the association of various brain areas with diverse human abilities, states and activities. Of particular importance for ethics is the identification of the brain areas involved in the processing of information related to morality.

The links of ethics with economics are related to the natural role that morality plays in the economy. Contrary to what neoliberal economists sometimes suggest, the Scottish philosopher Adam Smith (1723–1790) stressed the importance of morality for the correct functioning of the free-market economy. The role of morality in economic and political structures was clearly demonstrated in the 1970s by the American moral and political philosopher John B. Rawls (1921–2002)77. Not without reason, the financial crisis that struck the world in 2008 has been identified as a moral crisis of capitalism.

Finally, it should be noted that the analysis of the phenomenon of morality, based on the findings of empirical sciences, does not directly imply any moral norms. In his book entitled A Treatise of Human Nature (1739), David Hume identified the error of argumentation, now called the naturalistic fallacy, which consists in trying to derive normative statements from the knowledge of facts alone78. This may be done in a logically correct way if a set of facts-related premises of reasoning is complemented with at least one normative premise addressing a value or an obligation (cf. Subsection 2.2.8).

11.3.3 Relation of ethics to law and religion

Legal norms are in many respects similar to ethical norms. Both refer to such concepts as responsibility, duty, negligence, rights, benefits or harm; similar are also the ways of reasoning used in both areas: argumentation and counter-argumentation, analysis of concepts and principles, discussion of cases and examples. But there are also fundamental differences:

76 cf., for example: J. J. Giordano, B. Gordijn (Eds.), Scientific and Philosophical Perspectives in Neuroethics, Cambridge University Press, Cambridge (UK) 2010. 77 cf. M. Cohen, 101 Ethical Dilemmas, 2007 (2nd edition), p. 307. 78 cf. the article “Naturalistic fallacy” in Online Lexicon of Arguments available at https://www.phi losophy-science-humanities-controversies.com/index.php [2018-06-02].

– The law is a set of regulations which are established or recognised by the state and secured by the state’s means of coercion – including such forms of punishment as imprisonment, fine or expropriation – to be applied if those regulations are violated, while compliance with moral norms (which are not supported by the law in force) can only be induced by the person’s conscience or by the pressure of public opinion.
– There may be only one set of legal regulations in force on the territory of a state, while its citizens may follow very diversified moral convictions and advocate different ethical systems.
– The law may also regulate non-moral issues of a formal or administrative nature, which are of little interest for ethics.
– The law is local, i.e. it applies to the citizens of the state which has introduced it; international regulations are valid provided the state has ratified the corresponding international conventions.

Many ethical systems are derived from religions, but they should not be identified with those religions, nor should those religions be reduced to these systems. A religion79 is something more than ethics: it is a socio-cultural phenomenon developed around a relationship between a human being and a deity or a realm of the holy, including a doctrine, religiously motivated morality and a system of rituals, as well as cult institutions. The religiously motivated morality takes on forms determined by the general conditions of the cultural environment: from the taboo (i.e. the system of social prescriptions and proscriptions regulating the life of a community) in primitive religions, through the concepts of dharma and karman (i.e. the laws of existence, duty and absolute causality) in religions such as Hinduism, to the understanding of morality as the decision-based realisation of human good in such religions as Christianity. Primitive religions, even if they generate moral norms, as a rule do not develop ethical theories justifying those norms and making them consistent.

79 from Latin religare = “to bind fast”.

12 Western ethics in historical perspective

The purpose of the overview of the basic approaches to ethics presented in this chapter is to show various ways of thinking about morality which have emerged in Western culture over its centuries-long history, which have not lost their argumentative value, and which are still applicable in building ethical justifications, and therefore provide “prototypes” of argumentation useful in contemporary ethical debates. This overview can also be treated as an ostensive extension of the lexical definition of ethics – an answer to the question of how ethics has been understood during 25 centuries of the Western philosophical tradition. Since this overview is intended to serve a very specific educational purpose, it cannot replace the multi-volume handbooks of ethics and history of ethics1 or specialised multi-volume encyclopaedias2. The concepts of general ethics to be outlined here have been selected according to the frequency with which they are referred to in publications on research ethics.

Ethics is a philosophical discipline, and is therefore a product of intellectual sublimation and formalisation of the life wisdom accumulated over centuries in various cultural and religious traditions. Unlike the scientific and technical achievements of our ancient and medieval ancestors – outperformed by the scientific and technical progress made in modern times, especially in the nineteenth and twentieth centuries – the ethical considerations of the past still remain relevant and important, because the evolution of human nature and interpersonal relations is incomparably slower than the evolution of science and technology. This is a motivation for referring to those considerations in the twenty-first century. According to the author’s teaching experience, the chronological order of their presentation is more effective than the problem-oriented order because it is better suited to the perceptual capabilities of today’s students, formed by the internet and social media rather than by logical and philosophical training.

12.1 Ancient and medieval concepts of ethics

12.1.1 Ancient code-based ethics

The historical sources of European culture include the ancient cultures of the Middle East; this also applies to morality and ethics.

1 e.g. D. Copp (Ed.), The Oxford Handbook of Ethical Theory, Oxford University Press, Oxford 2007; R. Crisp (Ed.), The Oxford Handbook of the History of Ethics, Oxford University Press, Oxford 2013; H. LaFollette (Ed.), The Oxford Handbook of Practical Ethics, Oxford University Press, Oxford (UK) 2005.
2 e.g. L. C. Becker, C. B. Becker (Eds.), Encyclopedia of Ethics, Vol. 1–3, Routledge, Abingdon (UK) 2001 (2nd edition); H. LaFollette, The International Encyclopedia of Ethics, Vol. 1–9, Wiley & Sons, Wiley Online Library 2013–2017.
https://doi.org/10.1515/9783110584066-012


Our knowledge about the moral convictions of ancient Egyptians or Mesopotamians comes from codes of laws in which ethical issues emerge in the context of religious, legal, social and economic issues. Probably the best known among the oldest documents of this kind is Hammurabi’s Code of Laws, compiled in the eighteenth century BC during the reign of Hammurabi, the king of Babylon. It is most frequently referred to for its retaliation principle, which is laid down in two of its paragraphs (196 and 197): “If a man put out the eye of another man, his eye shall be put out. If he break another man’s bone, his bone shall be broken”3. There is, however, another ancient code of laws whose impact on the development of Western ethics has been much more important than that of Hammurabi’s Code of Laws. This is the Decalogue4, a set of ten biblical commandments relating to morality and worship, which appeared in the sixteenth century BC, has for millennia played a fundamental role in Judaism, Christianity and Islam, and still remains an inexhaustible source of moral inspiration, not only for the followers of those religions5.

12.1.2 Ethical intellectualism of Socrates

The Athenian philosopher Socrates (469–399 BC) is considered to be the father of the European ethical tradition. He did not write any philosophical text; his views are known only from the writings of his disciples Xenophon (450–355 BC) and Plato, and from the plays of the playwright Aristophanes (448–380 BC). Socrates’ ethical teaching is usually referred to as ethical intellectualism. It can be summarised as follows:
– Moral knowledge is potentially present in every person, but not everyone is aware of it; so, the task of teachers is to help pupils to discover it.
– The purpose of life lies neither in material benefits nor in happiness, but in a virtuous life (which may bring both material benefits and happiness).
– No one errs or does wrong willingly or knowingly – evil deeds result from ignorance; knowledge is a sufficient condition of developing virtues, including the virtue of wisdom, which is the remedy for human evil.

3 Hammurabi’s Code of Laws, 2008 (translated by L. W. King), http://eawc.evansville.edu/anthology/hammurabi.htm [2017-11-25].
4 The Holy Bible, Catholic Public Domain Version 2010 (translated by R. L. Conte), http://soundbible.com/book/holy-bible-pdf-download.pdf [2017-11-24], Deuteronomy 5: 4–22.
5 e.g. Y. Amar, Les Dix Commandements intérieurs, Éditions Albin Michel, Paris 2004; A. Deissler, Ich bin dein Gott, der dich befreit hat: Wege zur Meditation über das Zehngebot, Herder, Freiburg im Breisgau – Basel – Wien 1980; T. Meissner, Moses, hol die Tafeln ab!, Kreuz Verlag, Stuttgart 1993.


According to the ancient Greek thinkers, wisdom was attainable only by the gods, and man could only search for it; that is why the intellectual tool of this search was called philosophy6. The concept of wisdom in its philosophical sense means the ability to think and act using knowledge, experience, understanding, common sense and insight. This is a dictionary-type definition which, as a rule, requires further elaboration, including exemplifications and differentiation from such neighbouring concepts as intelligence, cleverness or cunning7. For the purpose of this book, it is enough to note that:
– General wisdom is strongly correlated with the ability to distinguish between facts and illusions, truth and falsehood, good and evil, beauty and ugliness.
– Common wisdom is related to knowing the art of life, to the understanding of the world and people, as well as to the ability to set goals, to predict, and to judge.

12.1.3 Eudaimonism of Plato

Eudaimonism8 (alternatively spelled as eudaemonism or eudemonism) is an ethical view according to which the ultimate goal and the highest good of man is the state of eudaimonia – the state of wellbeing, happiness, excellence and prosperity9. Diverse variants of this view appeared in the ethical thought of many Greek and then Christian philosophers. Plato’s variant is called agathic eudaimonism10 because it advocates the understanding of eudaimonia as a state of communion with the absolute idea of good and beauty. In the pursuit of this state, man is driven by the interplay of three parts (or functionalities) of the soul: rational, emotional and appetitive. Virtues related to those parts (wisdom, courage and temperance, respectively), coordinated by a super-virtue (justice), provide the equilibrium of the soul: the rational part governs the emotional and appetitive parts, thereby correctly leading all desires and actions towards eudaimonia.

6 from Greek philos = “loving” + sophia = “wisdom”.
7 cf. H. Marchand, “An Overview of the Psychology of Wisdom”, 2002, http://www.wisdompage.com/AnOverviewOfThePsychologyOfWisdom.html [2016-08-19]; P. Lewis, “Wisdom as Seen Through Scientific Lenses: A Selective Survey of Research in Psychology and the Neurosciences”, Tradition & Discovery: The Polanyi Society Periodical, 2009–2010, Vol. 36, No. 2, [2016-08-19]; U. M. Staudinger, J. Glueck, “Psychological wisdom research: Commonalities and differences in a growing field”, Annual Review of Psychology, 2011, Vol. 62, pp. 215–241; D. Narvaez, “Wisdom as mature moral functioning: Insights from developmental psychology and neurobiology”, [in] Character, Practical Wisdom and Professional Formation across the Disciplines (Eds. M. Jones, P. Lewis, K. Reffitt), Mercer University Press, Macon (USA) 2013.
8 from Greek eu = “good” + daimon = “guardian spirit”.
9 J. Baggini, P. S. Fosl, The Ethics Toolkit: A Compendium of Ethical Concepts and Methods, 2007, Section 1.10.
10 from Greek agathos = “good”.


Good is, according to Plato, a memory of the perfect world; evil – a consequence of immersion in sensual reality and oblivion of the world of ideas. In the dualistic split of man into soul and body, Plato attributes more importance to the soul, and therefore considers purification of the soul through the intellectual pursuit of truth to be the main moral task of man. In a sense, he equates the process of rational cognition with moral progress.

12.1.4 Virtue ethics of Aristotle

The approach to ethics proposed by Aristotle – the most outstanding student of Plato and the greatest among the ancient Greek philosophers – may be called perfectionist eudaimonism11 because he understood eudaimonia as the state of human perfection. Three paths may lead, according to his view, to this state: contemplative life, active (practical) life or responsible love (family attachment, love between man and woman, friendship or worship of deity). The indulgence in pleasures is a way of life suitable for slaves and cattle, not for a free human being; the pursuit of material wealth is only a means to an end, not an end in itself; and the virtuous activity of the soul is the highest good of man.

Aristotle opposed the ethical intellectualism of Socrates; unlike Plato, he recognised the existence of material goods only and based his ethics on realistic assumptions concerning human capabilities and needs. He defined virtue as a golden mean between excess and deficiency, and indicated moderation and consideration as common features of all virtues. He distinguished two types of virtues: intellectual virtues (such as wisdom or the ability to understand) and moral virtues (such as courage or generosity); the highest appreciation he attributed to reason, or prudence, understood as a permanent disposition to base actions on the accurate recognition of what is good or what is evil for man.

The most complete presentation of Aristotle’s ethical views may be found in his posthumous work Nicomachean Ethics12, whose title gave origin to the usage of the term ethics for any system of views on morality. Books I–IV of this work contain a systematic presentation of the Aristotelian virtue ethics, including the list of the following 12 virtues:
– courage understood as the mean between cowardice and rashness;
– temperance understood as the mean between insensibility and over-indulgence;
– generosity understood as the mean between stinginess and extravagance;
– magnanimity understood as the mean between pettiness and vulgarity;

11 from Latin perfectio = “perfection”.
12 Aristotle, Nicomachean Ethics, Batoche Books, Kitchener 1999 (translated from Greek by W. D. Ross), http://www.efm.bris.ac.uk/het/aristotle/ethics.pdf [2017-11-30].


– self-confidence understood as the mean between timidity and conceit;
– proper ambition understood as the mean between under-ambition and over-ambition;
– good temper understood as the mean between impassivity and ill temper;
– truthfulness understood as the mean between false modesty and boastfulness;
– wittiness understood as the mean between humourlessness and buffoonery;
– friendliness understood as the mean between unfriendliness and flattery;
– proper shame understood as the mean between shamelessness and excessive shame;
– righteous indignation understood as the mean between malice and envy.13

12.1.5 Moralism of stoics

Stoicism14 was a philosophical programme initiated in the third century BC in Athens by the Hellenistic thinker Zeno of Citium (333–262 BC) – a programme which continued to develop, first in Greece and then in Rome, almost until the sixth century when, by a decision of the Byzantine (East-Roman) emperor Justinian the Great (482–565), all pagan philosophical schools were closed. The most well-known Roman representatives of stoicism were Lucius Annaeus Seneca (4 BC to 65) – a dignitary at the court of the Roman emperor Nero – and Marcus Aurelius (121–180) – the Roman emperor.

The ethical views of the stoics were based on the conviction that opposing the laws of nature is unreasonable because it cannot change the course of events in the long run, but it can become a source of suffering. One should live in harmony with nature, guided by reason rather than by passions and emotions. Virtue is the only good to be sought; everything else is indifferent or evil. Virtuous life consists in following reason and controlling emotions; wisdom is the most important among all virtues. Happiness does not depend on external factors, but on internal discipline; freedom consists in understanding and acknowledging necessity. An act is good if its perpetrator is motivated by good intentions; a virtuous man diligently completes the familial and social duties which fall on him in the natural course of things, but does not look for additional activities and incentives; he accepts material goods and power as long as they appear in a natural way, but does not fight for them.

13 a more modern translation from: J. Fieser, “Virtues”, [in] Lecture notes for the course of philosophy (Phil 300), The University of Tennessee, Martin 2017, https://www.utm.edu/staff/jfieser/class/300/virtues.htm [2017-11-30].
14 from Greek stoa = “portico”, i.e. “a covered place for walks and meetings”.


The works of the stoics – especially Moral Essays15 and Epistulae Morales16 by Lucius Annaeus Seneca, and Meditations17 by Marcus Aurelius – have been widely read up to now, not only by professional philosophers but also by philosophically sensitive intellectuals in a broader sense. Recently, they have been frequently recommended by psychotherapists to patients suffering from stress-induced states of psychic disequilibrium.

12.1.6 Christian ethics

Christian ethics has its basis in the Bible, in particular in the Decalogue, recorded in the Old Testament, and in the commandment of love, recorded in the New Testament18. The ethical principles formulated in the Bible have been developed, systematised and substantiated by the great Christian philosophers and theologians, first of all by Saint Augustine of Hippo (354–430) and by Saint Thomas Aquinas.

According to Christian ethics, the first duty of man is to practise unconditional love for God. The consequence of this practice is love for a human being as a creature endowed with a special dignity – love understood as a life attitude characterised by striving to ensure good for a neighbour to the same degree as for oneself. Every human act should be preceded by reflection over the means to achieve this good, and by the approval of the choice resulting from that reflection. As a consequence of the repeated choices following this pattern, moral and intellectual skills (virtues) are developed. Among them, particular importance is attributed to prudence, understood as practical wisdom and efficiency in deciding what is best to be done in given circumstances, as well as in solving specific moral problems.

The canon of Christian virtues includes three theological virtues and four cardinal virtues. The theological virtues are:
– faith, being a virtue by which one’s intellect is perfected by a supernatural light and assents firmly to the supernatural truths of Revelation, not because of intrinsic evidence, but on the sole ground of the infallible authority of God;
– hope, being a virtue by which one trusts, with an unshaken confidence grounded in God’s assistance, to attain eternal life;

15 L. A. Seneca, Moral Essays, Harvard University Press, Cambridge (USA) 1928 (translated from Latin by J. W. Basore), https://ryanfb.github.io/loebolus-data/L214.pdf [2017-12-27].
16 L. A. Seneca, Epistulae Morales (Letters), Harvard University Press, London 1917 (translated from Latin by R. M. Gummere), https://ryanfb.github.io/loebolus-data/L075.pdf [2017-12-27].
17 M. Aurelius, Meditations, Modern Library, New York 2002 (translated from Greek by G. Hays), http://seinfeld.co/library/meditations.pdf [2017-12-27].
18 The Holy Bible, Catholic Public Domain Version 2010; Matthew 22, 37–40; Luke 6, 27–38; Mark 12, 28–34.


– love, being a virtue by which God is loved by reason of His own intrinsic goodness, and one’s neighbour is loved on account of God.

The cardinal virtues include the already-defined prudence and:
– justice, being a virtue which disposes one to respect the rights of others, to give each man his due;
– temperance, being a virtue which moderates, in accordance with reason, one’s desires and the pleasures of the sensuous appetite;
– fortitude (or courage), being a virtue by which one sustains dangers and difficulties (even death) and is able to pursue a good which reason dictates (despite those dangers and difficulties).

The canon of Christian virtues includes, moreover, seven heavenly virtues: humility, chastity, kindness, patience, temperance, diligence and charity. This list of 14 virtues is a synthetic representation of what is appreciated, valued and fostered in the systems of Western ethics – both of religious and secular provenience. In the latter case, of course, the theological virtues are reinterpreted in terms of secular spirituality.

12.1.7 Ethics of natural law

The tradition of thinking in terms of natural law goes back to the times of Aristotle and the stoics. According to this tradition, there are naturally good things (life, health, happiness, pleasure) and naturally bad things (death, illness, suffering, pain). Our main moral duty is to take actions which foster the proliferation of good things and the disappearance of bad things. Saint Thomas Aquinas was one of the most influential protagonists of the theological approach to natural law. According to him, the human intellect learns this law by recognising the world order established by God. Natural law and the ethical norms implied by natural law are accessible to every human being who analyses human nature and listens to the voice of conscience. Because of the assumption of the invariability of human nature, genetic engineering has become a serious challenge for the contemporary theorists of natural law since it may imply a serious violation of the eternal order of nature.19

Natural law enables people to distinguish good from evil, to discover their fundamental right to the protection of life and to the respect for life. It enables them, moreover, to recognise the right to transmit life, the right to intellectual development, the right to learn the truth and the right to participate in culture.

19 K. E. Himma, “Natural Law”, [in] Internet Encyclopedia of Philosophy (Eds. J. Fieser, B. Dowden), https://www.iep.utm.edu/natlaw/ [2018-06-05].


Any positive law (i.e. the law established by people) is legitimate if it is derived from or complements natural law; it does not bind conscience, but it should be respected if it does not contradict natural law.20

12.2 Modern concepts of ethics

12.2.1 Ethics of human rights

The concept of human rights appeared in the seventeenth century as a generalisation of the idea of property rights; its first philosophical justification was proposed by the English philosopher John Locke (1632–1704). According to this concept, all people have certain fundamental rights, such as the right to life, the right to personal freedom, the right to possession of material things, as well as the right to freedom of speech, thought and confession. These rights are “natural” in the sense that they are unconditional. Society, however, can limit the scope of the rights of an individual to protect other individuals from the negative consequences of his actions; it is controversial, though, to limit the individual’s rights for his own good.21

Two categories of human rights can be distinguished: negative rights and positive rights. The first of them relate to acting, thinking and speaking in a way that is free from interference by others, in particular from interference by state authorities. This category, in addition to the right to freedom already mentioned, includes the right to privacy, which may be the basis for refusing to provide certain information to other people or authorities. Positive rights entitle an individual to require specific goods or actions from state organs or other people. This category includes, for example, the right of children to obtain shelter, support and education from their parents.22

The concept of human rights seems to dominate the moral discourse underway in modern democracies, which is largely a consequence of the political and social movements in favour of those rights – the movements which started to develop after 1945. This situation does not mean, however, that the philosophical foundations of this ethical concept are indisputable.23

20 J. Baggini, P. S. Fosl, The Ethics Toolkit: A Compendium of Ethical Concepts and Methods, 2007, Section 1.15.
21 A. Fagan, “Human rights”, [in] Internet Encyclopedia of Philosophy (Eds. J. Fieser, B. Dowden), https://www.iep.utm.edu/hum-rts/ [2018-06-05].
22 J. Baggini, P. S. Fosl, The Ethics Toolkit: A Compendium of Ethical Concepts and Methods, 2007, Section 1.19.
23 T. Campbell, “Rights”, 2010, p. 669.


12.2.2 Moral sentimentalism of David Hume

The eighteenth-century Scottish philosophers of morality – Francis Hutcheson (1694–1746), David Hume and Adam Smith – rejected the dominant view that morality is founded exclusively on reason. They claimed that reason may contribute to our moral life only to the extent that it affects feelings, determines the means for acquiring the objects of our desires, and indicates the likely consequences of our actions24. The greatest influence on the further development of ethics as a philosophical discipline was exerted by David Hume’s approach, called moral sentimentalism. It can be summarised as follows:
– Human actions are, as a rule, motivated not only by self-interest but also by empathy25.
– In general, the moral judgment of those actions is a result of a subtle interplay of rational reasoning and emotions.
– The moral approval for an action is most often due to its social utility.

David Hume noted that from a piece of knowledge about facts (knowledge of “how things are”) one cannot derive, in a logically correct manner, ethical recommendations (i.e. statements of “how those things should be”). This becomes possible only when at least one obligation-type premise is added to the set of premises used for inference. In everyday moral reflection, we attach such a premise tacitly, guided by feelings, preferences or superstitions.

12.2.3 Formal ethics of Immanuel Kant

The eighteenth-century Prussian philosopher Immanuel Kant is considered, at least by some German authors, to be the most influential ethicist in the Western philosophical tradition. He proposed a system of deontological ethics in its purest form – a system based on the following assumptions: there exists a universal moral law; human beings can learn it by means of practical reason; and, once they know it, they should follow it unconditionally.

24 J. Baggini, P. S. Fosl, The Ethics Toolkit: A Compendium of Ethical Concepts and Methods, 2007, Section 1.20.
25 D. Hume used the word “sympathy” in its eighteenth-century meaning being close to the today’s meaning of the word “empathy”.
26 the original title: Grundlegung zur Metaphysik der Sitten.


In his 1785 treatise Grounding of the Metaphysics of Morals26, Immanuel Kant attempted to grasp and express the moral law using the concept of the categorical imperative27, understood as a commandment to be followed unconditionally, i.e. in any situation, under any circumstances. He proposed several formulations of it; here are two of them, most frequently referred to in the philosophical literature:
– “Act only according to that maxim whereby you can at the same time will that it should become a universal law.”
– “Act in such a way that you treat humanity, whether in your own person or in the person of any other, never merely as a means to an end, but always at the same time as an end.”

According to Immanuel Kant, the categorical imperative plays in ethics the same role as the laws of logic do in science; it must be, therefore, universal and unavoidable. It is the principle of universalisation, which states that if something is morally permitted for a single person in certain circumstances, it must be permitted for any person in analogous circumstances. This is an unconditional obligation and a gold standard for testing all other moral obligations.

According to Immanuel Kant, only an act which is dictated by an absolutely selfless free will deserves a positive moral appraisal; an act which is motivated by fear of punishment or condemnation by people or by God does not, and neither does an act motivated by a desire for pleasure or happiness, nor an act implied by mercy or compassion. To act under the pressure of desires means to be a slave of desires. Only behaviour compliant with reason itself makes us autonomous and self-governing creatures which establish a moral law for themselves. Only a person of good will – a person guided by the will to act in accordance with duty, i.e. with the categorical imperative – deserves happiness.28

Example 12.1: According to Immanuel Kant, the only thing that is good without reservation is the good will. Neither material assets nor intellectual qualities are unconditionally good; even health may be bad or undesirable under some circumstances. It is not difficult to imagine that the deterioration of Stalin’s health in 1938 could have decisively influenced the course of the beginning of World War II, just as the deterioration of Franklin D. Roosevelt’s health in 1945 influenced the political developments at the end of that war.

The formal ethics of Immanuel Kant is an ethics of duty: the unconditionality of morality, and its unselfishness, follow from the fulfilment of duty. The very matter of an act and its consequences have no moral significance.

27 from Greek kategorikos = “affirmative” + Latin imperativus = “pertaining to a command”.
28 J. Baggini, P. S. Fosl, The Ethics Toolkit: A Compendium of Ethical Concepts and Methods, 2007, Section 1.2.


Example 12.2: The above statement may be well illustrated with Immanuel Kant’s attitude towards lying. In his 1797 short essay “On a supposed right to lie because of philanthropic concerns”29, he considers what we should do when a murderer, pursuing our friend, is asking about his whereabouts: should we tell the truth, or should we rather lie to save our friend’s life? Kant’s position leaves no doubt: truthfulness is our formal duty with respect to everyone, and we cannot avoid it even in such a situation. His conclusion is clear: “[. . .] a lie always harms another; if not some other human being, then it nevertheless does harm to humanity in general, inasmuch as it vitiates the very source of right.”

12.2.4 Utilitarian ethics

The English lawyer and philosopher Jeremy Bentham (1748–1831) associated David Hume’s remark that the moral approval of an action is closely related to its social utility with the observation that “Nature has placed mankind under the governance of two sovereign masters, pain and pleasure. They alone point out what we ought to do and determine what we shall do; the standard of right and wrong, and the chain of causes and effects, are both fastened to their throne. They govern us in all we do, all we say, all we think [. . .]”30.

Jeremy Bentham argued that the value of things or actions is determined by the amount of pleasure they provide, regardless of its type. He distinguished seven aspects or dimensions of pleasure: intensity, duration, certainty of achieving it, nearness, ability to induce new pleasures, purity (understood as the lack of admixture of unpleasant sensations) and its extent. He proposed to base moral decisions on a “calculus of pleasure”31, consisting in the summation of pleasures according to those dimensions. One should choose, he postulated, such things or actions that maximise pleasure and minimise pain, or, more precisely, maximise the balance of pleasure and pain. When applied to society, he suggested, the calculus should follow the principle of equal consideration of interests – “everybody to count for one, nobody for more than one”32 – in order to maintain a balance between egoistic and altruistic behaviours. According to this principle, we must neither favour our family or friends, nor count ourselves for more than others: we should give up our own pleasure if the outcome of the pleasure calculation speaks in favour of someone else33.
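The additive structure of such a calculus can be illustrated with a short computational sketch. The following Python fragment is only an illustration of the summation idea described above, not a reconstruction of Bentham’s own procedure; the dimension labels, numerical scores and the simple additive aggregation are assumptions made solely for this example.

```python
# Illustrative sketch of the additive "calculus of pleasure" (felicific calculus).
# The seven labels paraphrase the dimensions listed above; scores and actions
# below are hypothetical.

DIMENSIONS = ("intensity", "duration", "certainty", "nearness",
              "fecundity", "purity", "extent")

def hedonic_value(scores):
    """Sum the pleasure (positive) and pain (negative) scores of one action
    for one affected person; 'scores' is a dict keyed by dimension name."""
    return sum(scores.get(d, 0.0) for d in DIMENSIONS)

def social_balance(action):
    """Aggregate over all affected persons, each counted exactly once
    ('everybody to count for one, nobody for more than one')."""
    return sum(hedonic_value(person_scores) for person_scores in action)

# Two hypothetical actions, each described by per-person score dictionaries.
action_a = [{"intensity": 3, "duration": 2, "extent": 1},
            {"intensity": -1, "duration": -1}]            # someone is harmed
action_b = [{"intensity": 2, "duration": 2, "certainty": 1},
            {"intensity": 1, "purity": 1}]

# Choose the action that maximises the balance of pleasure and pain.
chosen = max([action_a, action_b], key=social_balance)
print(social_balance(action_a), social_balance(action_b))
```

Even such a toy model makes visible the difficulties discussed at the end of this subsection: the scores of different persons and dimensions must be treated as commensurable, and the interests of an individual may be outweighed by the aggregate.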

29 the original title: “Über ein vermeintes Recht, aus Menschenliebe zu lügen”.
30 J. Bentham, An Introduction to the Principles of Morals and Legislation, Jonathan Bennett 2017 (first published in 1789), http://www.earlymoderntexts.com/assets/pdfs/bentham1780.pdf [2017-12-09], Section 1.1.
31 also called hedonistic calculus or felicific calculus.
32 the statement attributed to Jeremy Bentham by John S. Mill.
33 J. Baggini, P. S. Fosl, The Ethics Toolkit: A Compendium of Ethical Concepts and Methods, 2007, Section 4.7.


The ethical concept of Jeremy Bentham is today called hedonistic utilitarianism34, although the term utilitarianism was introduced only by the English philosopher and economist John S. Mill – the author of the principle of utility, according to which one should strive to maximise the happiness of as many people as possible through the reduction of pain and the augmentation of pleasure. When developing the ethical approach of Jeremy Bentham, he distinguished lower forms of pleasure (culinary, sexual, etc.) and higher forms of pleasure (intellectual, aesthetic, etc.). He argued that the latter are superior because they are always preferred by those who have experienced both types of pleasure. He could not, however, avoid confrontation with the simple fact of life that many people are more willing to enjoy the lower forms of pleasure, which means that the higher forms do not necessarily provide more pleasure. So, there must be some reasons other than pleasure itself – e.g. the beneficial influence of higher forms of pleasure on personal development – for which people choose them. This would, however, contradict the fundamental thesis of utilitarianism about the primacy of pleasure.35

The term utilitarianism refers today to many consequentialist approaches to ethics developed over the last 200 years. At the beginning of the twentieth century, the English philosopher George E. Moore initiated pluralistic utilitarianism, defining utility in terms of whatever has intrinsic value (knowledge, love, friendship, health, beauty, etc.), not just in terms of pleasure and pain or happiness. Numerous variants of hedonistic utilitarianism have also been developed, differing in the definition of the target group of people (family, nation, humanity) whose happiness is subject to maximisation, as well as variants of the utilitarianism of preferences, recognising – as the main criterion for the evaluation of human acts – the number of the members of that group who can live according to their preferences36. Apart from classical act utilitarianism – which in every situation makes us re-evaluate the optional actions and choose the action that brings the best balance of good and evil – rule utilitarianism has been developed, according to which moral decisions should be made based on a set of rules that have been validated by means of the utility criterion37. Rule utilitarianism, being a hybrid of consequentialist and deontological approaches to ethics, may justify decisions other than those justifiable, ceteris paribus, on the grounds of classical utilitarianism.

34 from Greek hedone = “pleasure” + Latin utilitas = “usefulness”.
35 J. Baggini, P. S. Fosl, The Philosopher’s Toolkit: A Compendium of Philosophical Concepts and Methods, 2010, Section 3.14.
36 J. Baggini, P. S. Fosl, The Ethics Toolkit: A Compendium of Ethical Concepts and Methods, 2007, Section 2.1.
37 S. Nathanson, “Act and Rule Utilitarianism”, [in] Internet Encyclopedia of Philosophy (Eds. J. Fieser, B. Dowden), https://www.iep.utm.edu/util-a-r/ [2018-06-06].


Example 12.3: Let us assume that A has no money and is hungry, while his colleague at work, B, has 245 EUR in his wallet. If A steals 5 EUR from B’s wallet to buy lunch, this is justifiable on the grounds of act utilitarianism, because the loss of 5 EUR is negligible for B, while the gain of 5 EUR is very important for A. It would be, however, questionable from the point of view of rule utilitarianism, because the general prohibition of theft increases long-term utility or happiness in society.

All versions of utilitarianism refer to the empirical evaluation of the likely effects of actions concerning a group of people – effects such as the total amount of happiness, well-being, preferences or contentment. The fundamental difficulty of such evaluation is related to the incommensurability of those effects and to the risk of violating the rights of individuals in the name of the collective good38.

12.2.5 Material ethics of value

The nineteenth-century critique of Immanuel Kant’s formal ethics39 and of utilitarianism gave rise to the material ethics of value. It was initiated by the German philosopher Max F. Scheler, who believed that moral values exist independently of us and timelessly. Despite the fact that they can be known only through intuition, our knowledge about them can be quite precise and objective. According to Max F. Scheler, the values may be ranked as follows:
– values of the holy (i.e. religious values);
– values of the mind (e.g. truth, beauty, justice);
– values of vitality (e.g. health, energy, fitness);
– values of pleasure (e.g. pleasure of sex, pleasure of eating, pleasure of reading);
– values of utility (e.g. money, real estate, information)40.

The values of lower rank serve those of higher rank: values of utility make possible the realisation of values of pleasure – the latter the realisation of values of vitality, etc.; thus, the values of higher rank give meaning to the values of lower rank. Human behaviour should be guided by this hierarchy of values: when confronted with a situation of moral choice, one should choose an act that embodies a value of higher rank rather than an act that embodies a value of lower rank.

38 J. Baggini, P. S. Fosl, The Ethics Toolkit: A Compendium of Ethical Concepts and Methods, 2007, Section 2.1.
39 cf. for example, R. Perrin, Max Scheler’s Concept of the Person: An Ethics of Humanism, Palgrave Macmillan, New York 1991, Chapters 1–3.
40 ibid., Chapter 4.


A “disorder of the heart” occurs whenever a person prefers a value of lower rank to a value of higher rank, or a disvalue to a value. According to Max F. Scheler, human life should be guided by the experience of values rather than by ethical imperatives. Therefore, the main task of ethics is to study the realm of values rather than to formulate moral interdictions and obligations for people. By experiencing values, we undertake the realisation of values in accordance with their hierarchy. However, on our way towards the objective cognition of values, we are threatened by many delusions, the most dangerous resulting from resentment; the opposite of resentment is the attitude of love and magnanimity, which allows us to reach the truth of the realm of values. The key concepts of the material ethics of value were also referred to by several Catholic philosophers of the twentieth century, among others Dietrich R. A. von Hildebrand (1889–1977), Karol J. Wojtyła41 (1920–2005) and Józef Tischner (1931–2000).

12.2.6 Ethics of discourse

The ethics of discourse was prefigured by the Jewish philosopher of religion Martin Buber (1878–1965), who focused ethical considerations on a meeting of two partners entering into a good-will dialogue, and stated that the conditions of an honest dialogue are equivalent to those of the harmonious coexistence of people of good will42. Two contemporary German philosophers, Karl-Otto Apel (1922–2017) and Jürgen Habermas (*1929), taking Immanuel Kant’s formal ethics as a starting point, made an attempt to justify moral norms using the conceptual basis of communication theory. While agreeing with Immanuel Kant that reason is a sine qua non condition of impartiality, they pointed out that his approach does not provide sufficient guidance to demonstrate how impartiality should be implemented in the process of rational communication with others43. Only those moral norms should be considered valid, they postulated, which are approved (or could be approved) by all participants of the discourse; each participant, however, should be able to conceive rational argumentation, to recognise the rational argumentation of other participants, and to renounce violence as a means for imposing preferred conclusions44.

41 better known as Pope John Paul II.
42 S. Scott, “Martin Buber (1878–1965)”, [in] Internet Encyclopedia of Philosophy (Eds. J. Fieser, B. Dowden), https://www.iep.utm.edu/buber/.
43 J. Baggini, P. S. Fosl, The Ethics Toolkit: A Compendium of Ethical Concepts and Methods, 2007, Section 2.5.
44 ibid.


According to Jürgen Habermas, every person able to speak and act is allowed to take part in a discourse, to question any assertion whatever, to introduce any assertion whatever into the discourse, and to express his attitudes, desires and needs; moreover, no participant of the discourse may be prevented, by internal or external coercion, from exercising those rights45.

According to the advocates of the ethics of discourse, the discourse has the potential to overcome the conflict between deontological and consequentialist approaches to ethics. We are responsible, they state, both for keeping moral standards and for the effects of our actions, including those undertaken with such a noble purpose as the implementation of moral norms.

12.2.7 Ethics of justice

The American political philosopher John B. Rawls, starting from the criticism of utilitarianism as a concept that does not correspond to the aspirations of a democratic society, and referring to the formal ethics of Immanuel Kant, proposed two principles of justice which could contribute to the design and development of a political system reconciling the claims implied by two fundamental values: freedom and equality. The readers of his 1971 book A Theory of Justice, however, quickly realised that those principles could be used as a basis for solving many non-political, but morally significant, interpersonal problems. According to John B. Rawls, those principles should be adopted and implemented by free and equal persons behind the “veil of ignorance”, i.e. in a situation where they do not know what sex, age, etc. they would have, and what position they would take in a future society46.

Example 12.4: The prototypes of the idea of the veil of ignorance may be found in the everyday practice of the just distribution of goods. One of them is a rule according to which – when sharing a cake – two persons, A and B, should proceed as follows: A cuts the cake into two pieces, and B chooses one of them. Under the assumption that both A and B are interested in maximising their shares, this procedure tends to guarantee equal shares to both of them.
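The cut-and-choose rule of Example 12.4 can also be expressed as a small procedure. The following Python sketch is only an illustration under simplifying assumptions (the cake is modelled as a sequence of indivisible slices, and each person has his or her own numerical valuation of every slice); it is not part of Rawls’ argument, and all names and values in it are hypothetical.

```python
# A toy model of the cut-and-choose rule from Example 12.4.

def cut(cake, value_a):
    """A cuts the cake at the point that makes the two pieces as nearly equal
    as possible according to A's own valuation, so that A is indifferent as to
    which piece will be left for him."""
    total = sum(value_a(s) for s in cake)
    running, best_k, best_gap = 0.0, 1, float("inf")
    for k in range(1, len(cake)):        # cut after slice k-1; both pieces non-empty
        running += value_a(cake[k - 1])
        gap = abs(2 * running - total)
        if gap < best_gap:
            best_k, best_gap = k, gap
    return cake[:best_k], cake[best_k:]

def choose(pieces, value_b):
    """B picks the piece that is worth more according to B's own valuation."""
    return max(pieces, key=lambda piece: sum(value_b(s) for s in piece))

cake = ["sponge", "cream", "cherry", "icing"]
value_a = {"sponge": 1, "cream": 2, "cherry": 1, "icing": 2}.get   # A's valuation
value_b = {"sponge": 2, "cream": 1, "cherry": 3, "icing": 1}.get   # B's valuation

piece_1, piece_2 = cut(cake, value_a)
piece_for_b = choose((piece_1, piece_2), value_b)
piece_for_a = piece_1 if piece_for_b is piece_2 else piece_2
print(piece_for_a, piece_for_b)
```

Because A divides the cake without knowing which piece he will receive, he has an incentive to divide it evenly; in this respect the procedure is a miniature of the veil-of-ignorance argument.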

The first principle of justice, postulated by John B. Rawls, is the principle of liberty, which requires equal basic liberties for all citizens: freedoms of conscience, association and expression, democratic rights, the right to personal property, etc. According to this principle, every person should have an equal right to those liberties in so far as it does not collide with the analogous right of others. Thus, this principle – an example of applying the categorical imperative in the political sphere47 – implies the necessity of imposing some restrictions on basic liberties so that everyone can exercise them equally.

45 J. Habermas, Moralbewußtsein und kommunikatives Handeln, Suhrkamp, Frankfurt am Main 1983.
46 cf. N. Barber (Ed.), Encyclopedia of Ethics in Science and Technology, 2002, pp. 268–269.
47 ibid.


The second principle of justice, postulated by John B. Rawls, is the principle of fair equality of opportunities, which requires that persons with comparable talents and motivation have similar life chances. According to this principle, social and economic inequalities are justified if and only if they at the same time bring the greatest benefit to the least advantaged members of the society, and support equal opportunities of access to offices and positions48. When justifying the latter conditions, John B. Rawls argued that factors which are independent of us (e.g. the low economic status of our parents) should not constrain our life chances, and that – because our innate talents are not down to our efforts – the profits they will probably bring to us will be undeserved. Authentic equality of opportunities means that offices and positions are granted exclusively on the basis of the substantial qualifications of candidates, while each of them has equal opportunities of acquiring such qualifications.49

John B. Rawls introduced the concept of primary goods, defined as those goods which are useful and therefore desirable for every human being. Their existence is the common base for the spontaneous acceptance of the principles of justice. Primary goods may be subdivided into two categories: natural primary goods, such as intelligence, imagination or health; and social primary goods, such as civil and political rights, liberties, income or wealth. According to John B. Rawls, a primary good is referred to by every rational concept of life; therefore, it is rational to strive to have a primary good, and to prefer its larger rather than smaller amount. These are the distinctive features of primary goods which enable us to differentiate them from secondary goods50.

Ethics of justice is most frequently associated with jurisprudence, sociology and political sciences; it may seem at first glance to be of secondary significance for research methodology. This can be a false impression, since it turns out to be important for the organisation and financing of research, as well as for research practice involving human subjects.

Example 12.5: Chapter 6 of the European Textbook on Ethics in Research51, entitled “Justice in research”, is devoted to two broad categories of justice-related issues which appear in biomedical studies:
– concerns about researchers unfairly taking advantage of research subjects and imposing unfair burdens on them for the sake of benefits to themselves or others;
– concerns about unfair exclusion of particular groups from participation in research and the benefits that may be attached to research participation.

48 J. van den Hoven, E. Rooksby, “Distributive Justice and the Value of Information – A (Broadly) Rawlsian Approach”, [in] Information Technology and Moral Philosophy (Eds. J. van den Hoven, J. Weckert), Cambridge University Press, Cambridge 2008.
49 cf. N. Barber (Ed.), Encyclopedia of Ethics in Science and Technology, 2002, pp. 268–269.
50 J. van den Hoven, E. Rooksby, “Distributive Justice and the Value of Information – A (Broadly) Rawlsian Approach”, 2008, pp. 379–383.
51 European Textbook on Ethics in Research – EUR 24452 EN, EC Directorate-General for Research Science, Economy and Society, Brussels 2010, https://ec.europa.eu/research/science-society/document_library/pdf_06/textbook-on-ethics-report_en.pdf [2017-12-19].


12.2.8 Weltethos or universal ethics

There is a group of moral norms which appear in ethical systems associated with various religious and cultural traditions, e.g. the obligation to respect human life, the duty to care for offspring, the obligation to help people affected by a cataclysm, and the obligation to care for ill or disabled persons.

Example 12.6: Christianity considers human life sacred because it is a gift of God; Buddhism sees taking someone’s life as inflicting the greatest suffering; the Enlightenment doctrine of human rights, assuming that everyone is an equal member of the human community, implies the conclusion that human life is a fundamental value. From these three ethical justifications, referring to various world-view paradigms, one can derive a common moral proscription: “Do not kill.”

Example 12.7: The golden rule of conduct – expressed in everyday language in the form of the negative recommendation “Do not do to others what you would not have them do to you” – is known to many religious traditions, including Buddhism, Hinduism, Confucianism and Judaism. Its status is particularly high in the Judaic tradition: “What is hateful to you, do not to your neighbour: that is the whole Torah, while the rest is the commentary thereof; go and learn it”52. In the New Testament, it takes on the form of a positive recommendation: “Therefore, all things whatsoever that you wish that men would do to you, do so also to them”53, which can be safely applied in everyday life only in conjunction with the commandment of love.

The fact that people, despite their cultural differences, recognise the same basic moral beliefs to such an extent that this contributes to a state of relative social equilibrium seems to indicate that:
– there might be some biological determinants of moral behaviour, formed in the process of the evolution of species;
– the same solutions are found for similar social problems under different historical and cultural circumstances.

Both hypotheses are currently under scientific scrutiny. The observation that various methods of ethical reasoning in many situations lead to the formulation of the same moral principles may indicate the possibility of defining a certain ethical minimum common to the whole world. Such an attempt has been made by the Swiss Catholic theologian Hans Küng (*1928), and is known under the German name Weltethos54.

52 English Babylonian Talmud – Mas. Shabbath 31a: 13–14.
53 The Holy Bible, Catholic Public Domain Version 2010, Matthew 7: 12.
54 H. Küng, Projekt Weltethos, Piper Verlag, München 1990.

13 System of values associated with science

13.1 Preliminary considerations

The language of axiology seems to be an adequate tool for discussing both the methodological and the ethical aspects of technoscience. Therefore, it has been used in this chapter to establish a logical bridge between philosophy of science and research ethics. The founder of the Italian school of philosophy of science, Evandro Agazzi (*1934), in his 2004 book, indicated rigour and objectivity as the most fundamental values of science1; the French philosopher Anastasios Brenner (*1959), in his 2011 book, listed precision, coherence, completeness, simplicity and fruitfulness as the values of science most frequently referred to by researchers2.

Research integrity seems to be an umbrella concept which encompasses a much richer set of values than the sets indicated by Evandro Agazzi and Anastasios Brenner. It is, however, a concept lacking a generally accepted definition and usage, mainly due to the plurality of the meanings of the concept of integrity3, which may be understood as: (1) the integration of self, (2) the maintenance of identity, (3) standing for something, (4) a moral purpose and (5) a virtue4. According to the 2012 declaration of UK universities, the constitutive values of research integrity are:
– “Honesty in all aspects of research, including in the presentation of research goals, intentions and findings; in reporting on research methods and procedures; in gathering data; in using and acknowledging the work of other researchers; and in conveying valid interpretations and making justifiable claims based on research findings.
– Rigour, in line with prevailing disciplinary norms and standards: in performing research and using appropriate methods; in adhering to an agreed protocol where appropriate; in drawing interpretations and conclusions from the research; and in communicating the results.
– Transparency and open communication in declaring conflicts of interest; in the reporting of research data collection methods; in the analysis and interpretation of data; in making research findings widely available, which includes sharing negative results as appropriate; and in presenting the work to other researchers and to the general public.

1 E. Agazzi, Right, Wrong and Science: The Ethical Dimensions of the Techno-scientific Enterprise, 2004, Chapter 1.
2 A. Brenner, Raison scientifique et valeurs humaines – Essai sur les critères du choix objectif, Presses Universitaires de France, Paris 2011, p. 4.
3 from Latin integritas = “soundness” or “wholeness” or “completeness”.
4 D. Cox, M. La Caze, M. Levine, “Integrity”, [in] The Stanford Encyclopedia of Philosophy (Ed. E. N. Zalta), Spring 2017 Edition, https://plato.stanford.edu/cgi-bin/encyclopedia/archinfo.cgi?entry=integrity [2018-06-08].
https://doi.org/10.1515/9783110584066-013


– Care and respect for all participants in and subjects of research, including humans, animals, the environment and cultural objects. Those engaged with research must also show care and respect for the stewardship of research and scholarship for future generations”.5

A similar, but not identical, list of values may be found in the definition of research integrity provided in the document of the US Office of Research Integrity6. The Belgian philosopher Jan De Winter, in his 2016 book, proposed to distinguish four kinds of integrity in science: the moral integrity of scientists and their institutions, the moral integrity of the research process, the epistemic integrity of scientists and their institutions, and the epistemic integrity of the research process7. This line of thinking will be followed in the overview of science-related values provided in this chapter. In Section 13.1, a tentative classification of values related to science and their general characteristics are presented, while in Section 13.2, the issues of possible conflicts among these values are discussed.

13.2 Typology of values

13.2.1 General characteristics of values

Today’s technoscience is still under the influence of two traditions: the tradition of science oriented towards cognition, which may be traced back to Plato and Aristotle, and the tradition of science oriented towards practical goals, initiated by Francis Bacon. Since technoscience serves both cognitive and practical purposes8, it is guided both by epistemic values, associated with its cognitive goals, and by utilitarian values, associated with its practical goals. These are two sets of values that, in the historical development of science, have been recognised as particularly important for its long-term productivity. There is, moreover, a certain set of ethical values, cultivated for the same reason, and a set of social values, important for the functioning of science in society. The identification of those four sets of values does not mean a strictly disjoint classification of science-related values, because many of these values may belong to more than one of those sets.

5 The concordat to support research integrity, Universities UK, 2012, http://www.universitiesuk.ac.uk/highereducation/Documents/2012/TheConcordatToSupportResearchIntegrity.pdf [2018-06-08], p. 11.
6 S. G. Korenman, Teaching the Responsible Conduct of Research in Humans, Office of Research Integrity, USA 2006, https://ori.hhs.gov/education/products/ucla/default.htm [2018-06-08], Chapter 1.
7 J. De Winter, Interests and Epistemic Integrity in Science: A New Framework to Assess Interest Influences in Scientific Research Processes, Lexington Books, London 2016, p. 84.
8 D. Elgesem, “Information Technology Research Ethics”, [in] Information Technology and Moral Philosophy (Eds. J. van den Hoven, J. Weckert), Cambridge University Press, Cambridge (UK) 2008.


There is no general agreement as to the criteria for the selection and classification of science-related values. The advocates of the “axiological neutrality” of both basic and applied sciences are, for example, inclined to ignore social values, while those thinkers who reject that postulate stress the importance of the social, political and economic values which differentiate the realms of basic and applied sciences.

Example 13.1: Already in the 1920s, France, Japan, the UK, the USA and the Soviet Union started major research programmes devoted to biological weapons. This would have been impossible without the active leadership and cooperation of scientists who were experts in various aspects of such research. The answer to the question “How do scientists, who are educated to help humanity, justify the use of their knowledge for the explicit goal of killing civilians?” depends on their philosophical orientation. The postulate of the axiological neutrality of science may be a convenient justification9.

The identification of four sets of values associated with technoscience is necessary for describing the fundamental axiological problems faced by the knowledge society, in particular:
– the problems generated by the tension between the pursuit of cognitive and practical goals, and by the difficulty of choosing priorities in the pursuit of the latter;
– the issues related to individual and collective responsibilities for the future of humanity, whose fate depends today on the development of technoscience to a greater extent than ever before.

The differentiation between the cognitive goals of science (knowledge, explanation and understanding) and its practical goals (purposeful processing of mass, energy and information) is important for methodological reasons: research programmes and projects primarily oriented towards cognitive goals should be designed and organised differently than those primarily oriented towards practical goals. Moreover, this differentiation may be helpful in resolving delicate demarcation problems10.

The distinction between cognitive and practical goals of science translates, in particular, into the distinction between the system of values associated with traditional academic science and the system of values associated with traditional industrial science. The academic staff of a university usually has considerable freedom to choose both research topics and collaborators. Traditionally, it is supposed to be motivated by the need to acquire knowledge and serve society – rather than by the benefits of possible commercialisation of this knowledge.

9 J. Guillemin, “Scientists and the History of Biological Weapons”, EMBO reports, 2006, Vol. 7, pp. S45–S49.
10 D. B. Resnik, The Price of Truth – How Money Affects the Norms of Science, Oxford University Press, Oxford (UK) 2007, pp. 38–40.


An employee of the research and development unit of an industrial enterprise must perform tasks resulting from the development plans of that enterprise, using only the research infrastructure and information resources of that enterprise, and establishing internal and external cooperation in accordance with the formal procedures defined in that enterprise. His motivation is closely related to the business goals of the enterprise, not to the need of acquiring knowledge for its own sake. These differences have been somewhat blurred in the times of technoscience: on the one hand, the model of the university has evolved towards the model of the enterprise; on the other, the margin of free creative activity in industrial research-and-development units has significantly widened, especially in large concerns which are interested not only in short-term market success but also in long-term development. There is also a mutual interpenetration of both sectors of science, because – more and more frequently – members of academic staff are in various forms employed in industry, and industrial researchers are increasingly involved in projects coordinated by universities.

Supporters of the axiological neutrality of science oppose the mixing of social and (especially) political values into science. The development of technoscience over the last century has demonstrated, however, that the consistent implementation of this postulate would be dangerous, and that these values must, therefore, be present also in the research process itself. A researcher should take into account the possible consequences of a cognitive error, which can affect society at the stage of the practical application of research results; the more severe the consequences may be, the greater the level of certainty of research results he should strive for. The British philosopher Philip S. Kitcher (*1947) even claims that there is no significant difference between basic and applied research, between science and technology: both basic research and applied research aim at the truth – not any truth, but essential truth – and the question about relevance is settled at the stage of selecting the research problem, where social and political values must be taken into account; there are, of course, non-negligible quantitative differences between scientific disciplines and even between individual research projects11.

The values associated with technoscience can be discussed directly in the language of axiology, or indirectly in the language of the virtues expected of researchers, or in the language of the norms or principles of operation that they should follow. The choice of the narrative method is decided by linguistic convenience rather than by any substantive considerations; all these three methods will be used in further parts of this subsection. Thus, the classification of values entails a corresponding, not always strict, classification of the norms of scientific activity, which are necessary for the harmonious cooperation of various scientific milieus, aimed at disciplinary and interdisciplinary objectives, as well as at making science serve humanity12.

11 D. Elgesem, “Information Technology Research Ethics”, 2008, pp. 360–362. 12 cf. D. B. Resnik, The Price of Truth – How Money Affects the Norms of Science, 2007, pp. 36–38.


The classification of values entails also a corresponding, not always strict, classification of the virtues whose development and proliferation we expect of scientists. These are both intellectual and ethical virtues (in the sense of the typology introduced by Aristotle). Among them, the intellectual virtue of wisdom has played a special role for centuries: a luminary of science has always been expected to be able to wisely discuss the status and role of science in culture. Today, it seems to be losing its historically privileged significance in favour of such anti-virtues as cynicism and cunning, or even pride in being ignorant beyond one's own field of expertise. We are more and more frequently confronted with scientists who – being renowned experts in their narrow, often very hermetic areas – become intellectually helpless outside of those areas. The status of a scholar obliges one to wisdom because one cannot do science honestly and reliably without wisdom.

13.2.2 Epistemic and utilitarian values

The truth, attained using the scientific method as described in Chapters 4–10, is the core epistemic value of science. The American bioethicist David B. Resnik (*1964) has formulated several recommendations whose implementation in research practice should help in approaching the scientific truth. These are, in fact, the criteria of abduction, according to which, when choosing hypotheses for testing, we should prefer those which can be verified or falsified by means of an experiment, are internally consistent, are consistent with well-established scientific theories, are consistent with available observations and measurement data, can be formulated precisely, are simple or intellectually elegant, are more general – in the sense of the scope of applicability – than alternative hypotheses, and contain original ideas13. Not surprisingly, these recommendations are expressed in the language of such peripheral values as precision, accuracy, originality, simplicity, elegance or generality – the values which back up the scientific truth. Objectivity – of science or of a scientist – is another core value of science, closely related to the truth. An objective scientist is expected to be free from the influence of emotions, superstitions and non-scientific beliefs distorting his thinking14. This state can be attained by an individual scientist only to some limited extent. Objectivisation of the result of cognition – i.e. the elimination of the influence of emotions, superstitions and false beliefs – is possible only in the process of its intersubjective investigation. Therefore, it seems more reasonable to speak about "cognitive objectivity of science" or "scientific objectivism" rather than about the "scientist's objectivity". David B. Resnik has distinguished four different types of scientific objectivity, viz.:

13 cf. ibid., pp. 49–53. 14 ibid., p. 50.


– descriptive realism, which holds that science accurately describes a mind-independent reality,
– normative realism, which holds that science ought to describe a mind-independent reality,
– descriptive rationalism, which holds that science is unbiased,
– normative rationalism, which holds that science ought to be unbiased.15
This distinction is worth remembering because the terms objectivity and objective are used in books on the scientific method and research methodologies in various meanings, sometimes without any definition or explanation. For at least half a century, many philosophers, sociologists and historians of science have questioned the objectivity of science, claiming that science cannot be objective because of the imperfection of human nature, especially of human cognitive capabilities. The core counterargument of realists is that the achievements of technoscience – such as flights to the Moon or heart transplants – cannot be accidental: the results of scientific inquiry must contain some truth about reality, which makes those achievements possible.16 The objectivity of science depends on many factors, such as the state of knowledge, the acknowledged research methodology, the available research tools, the epistemological and practical norms in force, or the research skills of scientists. The shortage of objectivity among the latter may result from an intentional departure from research standards, or it may be conditioned by some subconscious factors17. Insufficient resistance of a scientist to the temptations of money, power and prestige – as well as to institutional, political or social pressures – is the most important threat to the objectivity of science. The best norms and methods of research will not help if those temptations and pressures push scientists to become involved in conflicts of interest, and to depart from the path of integrity, e.g. by falsifying data or committing plagiarism. With the growing incidence of such pathological phenomena, science will lose public confidence and – in the longer run – financial support.18 The objectivity of science has also been endangered by certain sociological phenomena related to the recent evolution of science, which may be characterised by the diminishing importance of curiosity as its driving force, and the waning significance of its cognitive outcomes in favour of its directly applicable results. Under such circumstances, the recognition of the academic milieu is getting more and more important – as a social substitute for the cognitive value of the obtained research result.

15 ibid., p. 55. 16 ibid., pp. 59–67. 17 ibid., p. 78. 18 cf. ibid., pp. 74–75.


Since prominent representatives of this milieu decide about reviews, awards, allocation of funds and academic promotions, a conviction is growing that professional success is determined by sociological factors rather than by any substance-related achievements, i.e. new theories, ideas, discoveries, inventions, solutions, syntheses of knowledge, etc. This is not a marginal but quite a common phenomenon in the twenty-first century: modern science is getting filled with vain and sterile knowledge.

13.2.3 Ethical and social values

In 1942, the American sociologist Robert K. Merton (1910–2003) proposed a set of four values which are, in his opinion, of fundamental normative importance for the functioning of science, viz.: communalism, universalism, disinterestedness and organised scepticism. This set of values, labelled with the acronym CUDOS, generates the following standards for the functioning of science:
– Communalism obliges a scientific community to make scientific knowledge available to all members of that community, and to remove all obstacles which might hinder the exchange of information among them.
– Universalism obliges a scientific community to use exclusively substance-related criteria for the evaluation of any scientific achievement, and to refrain from its discrimination based on non-scientific beliefs or prejudices against the race, religion or social background of its author(s).
– Disinterestedness obliges the members of a scientific community to refrain from adapting research results to their financial, political or ideological interests.
– Organised scepticism obliges the members of a scientific community to expose all scientific achievements to the criticism of that community, and to refrain from using authority or tradition as a criterion for the final acceptance of those achievements.19
The above norms are predominantly of an ethical nature because they refer primarily to social interactions within the scientific community, not to the research process itself. However, their epistemic aspect, most perceptible in the case of the last norm, is also quite important. The communalism of scientific achievements, postulated by Robert K. Merton, is, in principle, not limited to the scientific community since published results of research are available to everyone. The openness of science, understood in this way, is a relatively new phenomenon since it started to be considered a value only in modern times. It was promoted by Francis Bacon already in the early seventeenth century20, but a large-scale programme of broadening social access to the resources of scientific knowledge was initiated by the French encyclopaedists only in the eighteenth century.

19 R. K. Merton, “The Normative Structure of Science”, [in] The Sociology of Science: Theoretical and Empirical Investigations (Ed. R. K. Merton), University of Chicago Press, Chicago 1973 (first published in 1942). 20 N. Barber (Ed.), Encyclopedia of Ethics in Science and Technology, 2002, p. 232.


This programme has been implemented up to now, increasingly on a global scale, and includes such powerful projects as Wikipedia, Google Scholar or Open Access. The generality of the norms postulated by Robert K. Merton, being their unquestionable advantage from a logical point of view, is a significant limitation of their direct applicability. That is why documents concerning the ethics of scientific research – including codes of research ethics and codes of good academic practice – often contain more extensive catalogues of science-specific values. David B. Resnik, for example, has formulated the following recommendations which indirectly define such a catalogue:
– Be honest in all scientific communications; do not fabricate, falsify or misrepresent data and research results; do not plagiarise.
– Avoid negligence errors; carefully and critically scrutinise your own work; keep good records of all of your research activities; use research methods and tools appropriate to the topic under study.
– Eliminate personal, social, economic and political biases from experimental design, testing, data analysis and interpretation, review and publication.
– Share ideas, theories, tools, methods, data and research results; be open to criticism, advice and inspiration.
– Support freedom of thought and discussion in the research community; do not interfere with scientists' liberty to pursue new avenues of research or to challenge generally accepted theories and assumptions.
– Give credit where credit is due.
– Honour intellectual property and collaboration agreements; do not use unpublished data, research results or ideas without permission.
– Respect the rights and dignity of your colleagues and students; treat them fairly; neither discriminate against them nor exploit them.
– Treat human and animal research subjects with respect and protect their welfare; do not violate the rights or dignity of human subjects.
– Enhance your professional competence and expertise through life-long education; promote such an attitude and report incompetence.
– Protect the confidentiality of research-related communication.
– Obey relevant laws and regulations.
– Strive to benefit society and to protect it against possible harm that could be entailed by research.
– Make fair and effective use of scientific resources; do not destroy, abuse or waste them.21
It seems that the above recommendations do not require any justification: they mostly mirror ethical principles we are expected to follow in everyday life.

21 cf. D. B. Resnik, The Price of Truth – How Money Affects the Norms of Science, 2007, pp. 45–48.


All of them, to a greater or lesser extent, aim at fostering the trust that bonds the scientific community and is a sine qua non condition of its survival and productivity. Due to the crucial importance of trust in the functioning of science, its role is discussed in a separate subsection (Subsection 13.2.4). An important ethical value, not explicitly mentioned by Robert K. Merton, is the freedom of science in all its aspects. In his famous article on the subject – whose first version appeared in 1946, and the second in 1957 – the renowned Polish philosopher and logician Kazimierz Ajdukiewicz (1890–1963) distinguished four kinds of freedom important for science: freedom of speech, freedom of thought, freedom to choose research methods and freedom to choose research topics. He pointed out two types of restrictions of freedom of speech:
– depriving a person of the possibility to say what he wants to say;
– forcing a person to say what he does not want to say, for example, by arranging a situation in which it is known that any answer to a question, other than the expected one, can trigger repression.
He explained, moreover, that freedom of thought means that one has the right to believe in everything that is supported by rational arguments, and one is not obliged to believe in anything that is not supported by such arguments22. Social argumentation in favour of freedom of speech indicates that it serves the pursuit of truth and the development of democracy23. Freedom of speech, however, must often be subject to ethical constraints, sometimes supported by law, because of the time, place and form of a message or because of its content. The latter constraints may be controversial, especially if they are motivated by a worldview (the prohibition of racist utterances could be an example)24; such constraints in science are known from the history of totalitarian regimes. An example of a worldview-neutral constraint may be the prohibition of political advertisements on Norwegian television, justified by a conviction that the language of politicians is inherently manipulative25. Trade secrets, military secrets and state secrets are other examples of constraints imposed on freedom of speech for the sake of industrial productivity or state security; they may affect technoscientific research to an extent which is growing with the advancement of knowledge-intensive technologies. Freedom of speech and freedom of thought belong to fundamental human rights26, while freedom to choose research methods and freedom to choose research topics are specific to science.

22 K. Ajdukiewicz, “Co to jest wolność nauki?”, Życie Nauki, 1946, No. 6, pp. 417–426; K. Ajdukiewicz, “O wolności nauki,” Nauka Polska, 1957, Vol. 19, No. 3, pp. 1–20. 23 D. Elgesem, “Information Technology Research Ethics,” 2008, p. 362. 24 ibid., p. 366. 25 ibid., p. 368. 26 affirmed, for example, in The Universal Declaration of Human Rights adopted by the United Nations General Assembly on December 10, 1948.


Already a century ago, the German sociologist K. E. Max Weber (1864–1920) expressed an influential opinion that while the interference of social and political factors (or values) at the stage of research itself is not acceptable, it is justified both at the stage of choosing research topics and at the stage of choosing applications of research results27. The increasing share of public funds in overall research spending has been a serious argument strengthening this position. Today, the interference of socio-political factors is not limited to technoscience-related reflection, but it translates, more and more often, into legal regulations concerning the structure of financing various types of research from public resources, the research priorities, or even specific research topics. This interference may also consist in a ban on conducting a certain type of research. In many Western countries, for example, such bans relate to research that requires experiments on humans which may expose the latter to a high risk of harm. Some authors believe that this is the only justified legal restriction of the freedom to choose research topics28.

Example 13.2: Both in the European Union and in the USA, research on the cloning of human organisms, tissues and embryos for medical purposes is subject to various legal constraints. The Charter of Fundamental Rights of the European Union29 prohibits cloning for reproductive purposes. A full ban on cloning is in force in some states of the USA; there is no nationwide ban, but there is a ban on financing from federal funds any research on cloning human beings, as well as a ban on carrying out such research in research centres supervised by federal authorities. The most liberal regulations in this respect are in force in the State of Nevada; that is why most of the companies interested in cloning-related research register their activity there. The Catholic Church opposes any research concerning the cloning of human organisms, tissues and embryos30.

Example 13.3: In January 2013, a total ban on the use of great apes (gorillas, chimpanzees and orangutans) in scientific research, as well as a restriction of the use of smaller apes to emergency situations, went into effect in the European Union on the basis of the 2010 EU Directive31. The need to introduce such a ban is justified, on the one hand, by ethical considerations (analogous to those that apply to experimentation on humans) and, on the other, by the need to protect endangered species and preserve natural diversity.

27 D. Elgesem, "Information Technology Research Ethics", 2008, pp. 358–359. 28 e.g.: ibid., p. 363. 29 The Charter of Fundamental Rights of the European Union, European Communities, 2000, http://www.europarl.europa.eu/charter/default_en.htm [2018-02-14], Article 3. 30 Instruction ‘Dignitas personae’ on certain bioethical questions, Congregation for the Doctrine of the Faith, Vatican 2008, http://www.vatican.va/roman_curia/congregations/cfaith/documents/rc_con_cfaith_doc_20081208_dignitas-personae_en.html [2018-02-15]. 31 "Directive 2010/63/EU of the European Parliament and of the Council, of 22 September 2010, on the protection of animals used for scientific purposes", Official Journal of the European Communities, 2010, No. L276, pp. 33–79.


13.2.4 Trust in science

Trust is an autonomous topic of specialised philosophical works – to mention only the books authored by the Danish philosopher and theologian Knud E. C. Løgstrup (1905–1981)32 and the British philosopher Onora S. O'Neill (*1941)33. It seems, however, that the extraordinary importance of trust in science may be better articulated by a developed example of a "spectacular" abuse of trust in social psychology than by any abstract philosophical deliberation over this issue.

Example 13.4: Until April 2018, the vast majority of social psychologists had believed that the famous Stanford Prison Experiment (SPE) demonstrated that "normal" people could behave in a hostile and sadistic way under the influence of circumstances. The SPE was carried out at Stanford University (California, USA) in 1971 by the psychology professor Philip G. Zimbardo (*1933). According to the protocol of this experiment, a fake jail was arranged in a basement and stocked with nine "prisoners" and nine "guards" – college-age respondents to a newspaper advertisement, who were assigned their roles at random; Philip G. Zimbardo with a group of assistants played the role of the senior prison "staff". According to the descriptions of the SPE, repeated over almost 50 years by Philip G. Zimbardo himself and the authors of numerous psychology textbooks, the guards acted uninstructed, and the results showed that the SPE participants spontaneously "learned" their assigned roles, with some guards enforcing authoritarian measures and ultimately subjecting some prisoners to psychological torture, while many of the prisoners passively accepted psychological abuse and, at the guards' request, actively harassed other prisoners who tried to stop it. The SPE was subject to methodological criticism in the years after it was conducted, expressed, in particular, by the famous American psychologists and philosophers Erich S. Fromm (1900–1980) and Leon Festinger (1919–1989), as well as by the Australian psychologist S. Alexander Haslam (*1962) and the British psychologist Stephen D. Reicher, who co-directed an attempted replication of the SPE in 2001. Despite this criticism, Philip G. Zimbardo became the most prominent living American psychologist, and his narrative of the SPE appeared in almost every introductory psychology textbook published all over the world. Undoubtedly, his self-promotion skills contributed to the unusual fame of the SPE. To avoid peer review, he published the first article about the experiment not in an academic journal of psychology but in The New York Times magazine. To avoid confrontation with experts in social psychology, he published his first academic article about the SPE in the International Journal of Criminology and Penology, not in a psychology journal. When S. Alexander Haslam and Stephen D. Reicher attempted to publish their findings, which did not confirm the SPE conclusions, he intervened, and the British Journal of Social Psychology published their article with his commentary accusing them of "fraudulent" behaviour. During almost 50 years, Philip G. Zimbardo invested a lot of effort in promoting his research findings. He became the primary author of one of the field's most popular and long-running textbooks, Psychology: Core Concepts34, and the host of a 1990 PBS video series, Discovering Psychology, which gained wide usage in high schools and colleges; both featured the SPE.

32 e.g. K. E. Løgstrup, The Ethical Demand, University of Notre Dame Press, Notre Dame (USA) 1997 (translated from Danish by H. Fink). 33 e.g. O. O'Neill, A Question of Trust: The BBC Reith Lectures 2002, Cambridge University Press, Cambridge (UK) 2002. 34 P. G. Zimbardo, R. L. Johnson, V. McCann, Psychology: Core Concepts, Books a la Carte Edition, Prentice Hall PTR, USA 2016 (8th edition).


In 2007, he published a book entitled The Lucifer Effect35, offering more details of the experiment than had ever been disclosed before, though framed in such a way as to avoid calling his basic findings into question. The social impact of the incorporation of the SPE findings into the body of psychological knowledge has been multifaceted. As far as the ethical aspect is concerned, they enabled "good" people, who had turned "evil" under the pressure of certain circumstances, to get rid of responsibility for the harm they had done. This is a probable reason for the popularity of Philip G. Zimbardo's narrative in Germany and Italy, on the one hand, and in Poland and Hungary, on the other – i.e. in countries which, for historical reasons, needed a rational explanation for why so many people had treated others with the utmost cruelty. In April 2018, the French filmmaker Thibault Le Texier published a book in French whose title may be rendered in English as History of a Lie: Debunking the Stanford Prison Experiment36. Based on the results of a thorough investigation in the archives of the experiment and on interviews with several of its participants, he concluded that the SPE was manipulated, viz.:
– the guards knew what results were expected from them, were supervised by the staff and followed a set of rules defined by the staff;
– the staff made the guards believe they were not subjects of the experiment;
– the prisoners were conditioned by the staff, and not allowed to leave the experiment at will;
– the experimental data were not recorded properly, and the experiment was reported inaccurately;
– the conclusions were formulated a priori according to non-academic aims.
The scandal surrounding the SPE will probably have long-term negative consequences for public trust in science and its values. The highest price will be paid by young people who have bound their future with the profession of psychologist. They will meet with distrust on a daily basis, and many of them will suffer an undeserved social punishment37.

To meet professional standards, any scientist should fulfil diverse obligations with respect to himself, with respect to the technoscientific community, and with respect to society38. The obligations of the first category result from the need to ensure the productivity of his own research efforts; the obligations of the second category – from the need to ensure the productivity of the research endeavours of the whole community; and the latter – from the key importance of technoscience for the well-being and future development of society.

35 P. G. Zimbardo, The Lucifer Effect: Understanding How Good People Turn Evil, Random House, New York 2007. 36 T. Le Texier, Histoire d’un mensonge: Enquête sur l’expérience de Stanford, La Découverte, Paris 2018. 37 The content of this example has been based on: B. Blum, "The Lifespan of a Lie", Medium, June 7, 2018, https://medium.com/s/trustissues/the-lifespan-of-a-lie-d869212b1f62 [2018-07-06]; T. Le Texier, (interviewé par G. Salle), "Anatomie d’une fraude scientifique: l’expérience de Stanford", Contretemps, 28 avril 2018, https://www.contretemps.eu/entretien-texier-experience-stanford/ [2018-07-05]; T. Witkowski, "Lucyfer, którego wymyślił Zimbardo", W obronie rozumu, 2 lipca 2018 r., https://wp.me/p48IO4-qj [2018-07-06]. 38 On Being a Scientist: Responsible Conduct in Research, 2009, p. 2.


The commitments to the technoscientific community are implied by the collective nature of the technoscientific enterprise, and they are associated with the need to cultivate and nurture trust in this community. The productivity of any non-trivial collective effort in science – both on the micro- and the macro-scale – depends on trust: the reader of an article must trust its author; the coordinator of a research project must trust his collaborators – these are just two simple examples. Since technoscience cannot properly function without trust, any act undermining trust is considered an infringement of research ethics. The edifice of science will collapse when trust is abused39. What is trust? To trust a person means more than to rely on that person, and to rely on a person means to have a rationally justified conviction that this person will behave in certain situations according to our expectations; e.g., when saying "We rely on Peter as a car mechanic", we express a rationally justified conviction that Peter is able to repair our car. To trust a person means more because trusting is, as a rule, justified not only by rational but also by some emotional premises. We refer to those premises, e.g. when saying: "I trust my wife". It is usually about truthfulness, word-keeping or help when we are inclined to use trust-related wording.40 Trust in science is, therefore, a conviction – supported by both rational and emotional premises – that other scientists act in accordance with a generally accepted canon of the scientific method, with the research standards specific to a given scientific discipline and with the principles of research ethics. Can a scientist who deviates from those normative settings trust other scientists?

Example 13.5: According to an old tradition, each inhabitant of an African village, coming to the birthday party of the village leader, was supposed to bring a little vat of coconut wine and pour it into a large barrel in front of the leader's house. One of the inhabitants mused: "If I pour in a vat of water, nobody will notice"; his neighbour came to a similar conclusion... When the day of celebration came, the inhabitants of the village were raising birthday toasts with water...
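The logic of this parable can be restated quantitatively. The following minimal sketch – not part of the original argument, with an arbitrarily chosen number of villagers and hypothetical defection probabilities – estimates the expected fraction of wine in the barrel when every contributor independently decides whether to free-ride:

import random

def expected_wine_fraction(n_villagers, p_defect, n_trials=10_000):
    # Monte-Carlo estimate of the fraction of wine in the barrel when each
    # villager independently pours water with probability p_defect.
    total = 0.0
    for _ in range(n_trials):
        wine_vats = sum(1 for _ in range(n_villagers) if random.random() > p_defect)
        total += wine_vats / n_villagers
    return total / n_trials

# Hypothetical village of 50 inhabitants and a rising temptation to defect.
for p in (0.0, 0.1, 0.5, 0.9, 1.0):
    print("defection probability", p, "-> wine fraction about", round(expected_wine_fraction(50, p), 2))

As one would expect, the share of wine is simply one minus the defection probability; the point of the parable is that, once free-riding is perceived as undetectable and costless, nothing prevents that probability from drifting towards one – and the same mechanism erodes trust in a scientific community.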

Trust in science is rational because it is justified and supported by accumulated past experience of using and mastering methodologies for the selection of scientific personnel, for the intersubjective testing of scientific knowledge and for the management of scientific information. However, all three of these "props" of trust may fail, and sometimes do fail – perhaps today more often than a hundred years ago – due to increasingly intense attempts to replace them with bureaucratic and business-type forms of management, on the one hand, and due to the evolution of technoscience dictated by the development of information technologies, on the other. There is no doubt that the problem of trust in science requires iterative reconsideration in the context of those new circumstances.

39 cf. ibid., p. IX. 40 cf. P. Pettit, “Trust, Reliance, and the Internet”, [in] Information Technology and Moral Philosophy (Eds. J. van den Hoven, J. Weckert), Cambridge University Press, Cambridge (UK) 2008.


Example 13.6: The internet offers many new opportunities for establishing and maintaining interpersonal relationships, e.g. the possibility of an intensive exchange of newly generated scientific information, in particular – among members of virtual research teams working on scientific projects. At the same time, however, it does not necessarily support trust 41. The lack of personal contact impairs our ability to get a rational, but emotionally supported, conviction about the loyalty or other qualities of people with whom we communicate on the internet 42.

13.3 Conflicts of values

Moral values may enter into conflict even on the grounds of a consistent ethical system. The risk of conflicts is, however, higher if they belong to different ethical systems; this is not unusual in a pluralistic society or in a multi-cultural scientific community. Moral values can also enter into conflict with non-moral values, such as economic or aesthetic values. Can ethical theories be helpful in resolving such conflicts? The problems usually begin when morality requires us to sacrifice non-moral values, e.g. by giving priority to social over individual interests or benefits. Utilitarianism can require us to sacrifice something in the name of the principle that the good of one person is as valuable as the good of any other; paradoxically, Kantian formal ethics may support a similar conclusion! This is the motivation behind various attempts to look for a less demanding morality that would make the realisation of certain non-moral values easier.43 Conflicts of values related to science, as already demonstrated in their overview, are inevitable. The remedy would be a strict hierarchy of those values, but its construction is impossible, even if the set of values to be ordered is limited to homogeneous, e.g. only epistemic or only ethical, values. The Italian philosopher of technoscience Evandro Agazzi is sceptical about the possibility of deciding "once and for all" whether truth (the central value of science) should be located in this hierarchy below or above such values as utility, social progress or political freedom. In his opinion, these and other values are important, and the real problem is not to put them on a linear scale, but to optimise their mutual relations44.

41 cf. ibid., p. 161. 42 cf. ibid., p. 173. 43 cf. S. Robertson, “Reasons, Values, and Morality”, 2010, pp. 441–442. 44 E. Agazzi, Right, Wrong and Science: The Ethical Dimensions of the Techno-scientific Enterprise, 2004, p. 177.


Example 13.7: The Polish ethicist Tadeusz Styczeń (1931–2010) has confronted the dignity of a person (an ethical value) with scientific truth (an epistemic value). He has concluded that in the name of affirmation of the dignity of a person, we must not reduce that person to any other good, even as important as truth, but at the same time we should affirm goods, such as truth, without which it is impossible to fully affirm that person 45.

The clash of epistemic and utilitarian values entails conflicts of values which did not seem to exist a hundred years ago. In research practice, they translate into various tensions between representatives of basic and applied sciences, between academic and industrial institutions of science, between theoreticians and practitioners. A simple recommendation formulated by Willard V. O. Quine – to give higher priority to epistemic values (and to the norms and criteria of evaluation they generate) in basic sciences, and to utilitarian values in applied sciences46 – can hardly be implemented in technoscientific research due to the merging of basic and applied contents. The cooperation between academic and non-academic institutions may be traced back to the nineteenth century, when German chemical enterprises began to employ scholars to develop synthetic dyes. Today, such cooperation includes not only industrial enterprises but also institutions of other sectors of the economy and culture. Despite more than a century of development, it generates numerous tensions related to the discrepancy between the system of values specific to academe and the system of values specific to business and industry where research results are applied. Here are the essential aspects of this discrepancy:
– The primary aim of an academic institution is to educate students, and to generate and disseminate scientific knowledge, while the aim of an industrial enterprise is to produce material goods and provide services in a way maximising its profit.
– An academic institution is open to the free exchange of ideas and data, while an enterprise is interested in keeping business and technology secret.
– An academic institution promotes the freedom of research, speech and thought, while an enterprise is rather interested in subordinating research-and-development activities and related information policies to its business priorities.
– An academic institution focuses on the objectivity of scientific research, while an enterprise, even if it declares some interest in this value, is rather guided by publicity reasons or requirements of law.
– Scientific knowledge still has some intrinsic value for an academic institution, while an enterprise is essentially interested only in its direct usability.

45 T. Styczeń, “Czy istnieje etyka dla naukowca?”, Ethos, 1998, No. 44, pp. 75–83. 46 cf. D. B. Resnik, The Price of Truth – How Money Affects the Norms of Science, 2007, pp. 71–72.


– Many academic institutions, especially those financed from public funds, still do not have to fight, like enterprises, for a position on the free market, and are not obliged to account for their activities to anyone but the financing agency.47
The above-outlined tensions and conflicts of values are in the background of numerous disturbing phenomena, such as:
– individual and institutional conflicts of interest which may jeopardise the striving for objectivity;
– expansion of the sphere of secrecy in research, which is threatening the implementation of the principle of openness;
– distortion of the proportions between basic and applied research, which has a negative impact, first of all, on the quality of doctoral studies;
– pursuit of financial support from industry, quite often at the cost of striving for objectivity.
The conflicts of interest mentioned above may be expressed in terms of conflicts of science-specific values with extra-scientific values, such as wealth or fame. They do not violate, by themselves, any epistemic or ethical norms, but they may increase the risk of violating those norms. The level of this risk depends, of course, on the specificity of the conflict and on the moral and intellectual skills of the person involved in it. According to the author of the book The Price of Truth, a conflict of interest is getting dangerous when the probability of the corrupting impact of personal, financial, social or political factors on the scholar's judgment exceeds 5%; the choice of this threshold has been justified by the practice of statistical inference, where it is most commonly accepted as the level of statistical significance48.
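The statistical convention invoked here can be recalled with a minimal numerical sketch in Python. It does not operationalise Resnik's criterion itself – estimating the "probability of corrupting impact" is precisely the hard part – and the reviewing record used below is hypothetical; the sketch only shows how the 5% level functions as a decision threshold in standard hypothesis testing:

from math import comb

ALPHA = 0.05  # the conventional 5% significance level referred to above

def binomial_tail(k, n, p=0.5):
    # P(X >= k) for X ~ Binomial(n, p): the one-sided p-value.
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical record: 14 of 16 editorial decisions favoured the sponsor.
# Null hypothesis: the decisions are unrelated to the sponsor's interest (p = 0.5).
p_value = binomial_tail(14, 16)
print("one-sided p-value =", round(p_value, 4))
print("significant at the 5% level" if p_value < ALPHA else "not significant at the 5% level")

A record this lopsided would be very unlikely if judgements were unrelated to the sponsor's interest – the kind of reasoning that also underlies the meta-analyses of industry-funded research discussed below.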

47 A. E. Shamoo, D. B. Resnik, Responsible Conduct of Research, 2015 (3rd edition), pp. 107–108. 48 D. B. Resnik, The Price of Truth – How Money Affects the Norms of Science, 2007, pp. 110–113.


Example 13.8: Here are some situations from the everyday life of technoscientific milieus where a conflict of interest is highly probable:
– The rector of a university has some shares of a company that is sponsoring major research projects carried out at that university.
– The professor of a medical academy has a stake in a pharmaceutical company that is financing his research on one of the company's products.
– The contractor of a research project, financed by a biomedical company, has obtained results contrary to the interest of the company; after their disclosure in the company, he is persuaded to give up their publication.
– A scientific medical journal is advertising new medicines to increase its income.
– A medical association is promoting the product of a company which has supported it with significant donations.
– A reviewer, employed in an industrial research institution developing new methods of monitoring telecommunications channels, is evaluating a grant application, concerning a research project on the same subject, submitted by a university professor.
– The university administration is harassing a technician who is trying to reveal the unethical practices of a research team bringing considerable income in the form of research grants.

These examples illustrate the diversity of conflicts of interest encountered in the world of science, also in the sense of the "topography" of their occurrence: the related problems are most frequently reported in the domains of biomedicine and pharmacy. It is estimated that ca. 75% of the leading scientists working in medicine-related fields are to some extent paid by the pharmaceutical industry49. At the same time, many published meta-analyses indicate that research funded by pharmaceutical companies is much more likely to produce results that are in line with the commercial interest of those companies than research financed from other sources50. This is a clear indication of the systemic involvement of the employees of research-and-development units of pharmaceutical companies in a conflict of interest: the professional and financial dependence of such employees on the company management makes them more prone to ethically doubtful behaviours which serve the market success of the company. The situation of a university researcher, studying drugs within a project sponsored by their producer, is only slightly better. It should be noted, however, that there is nothing wrong in such an arrangement as long as the sponsor does not interfere in the research and publication process, and as long as the sponsor does not make the amount and structure of financing dependent on the research results51. Conflicts of interest threaten not only the objectivity of research outcomes, but they can also be the cause of many ethical abuses to be analysed in the next chapters of this book, such as: fabrication and falsification of data, lack of due diligence in research and publication activities, plagiarism and other violations of intellectual property, as well as lack of respect for research subjects. On the one hand, conflicts of interest jeopardise the integrity of scholars; on the other, they undermine public confidence in the institutions of science and in research outcomes. Conflicts of interest may also affect, through the subconscious, the way researchers think and act in the research process. The identification of a conflict of interest is not always evident because not all relationships between the process of scientific thinking and political, business or personal factors are obvious.

49 J. Virapen, Side Effects: Confessions of a Pharma-Insider: Death, Virtualbookworm.com Pub., College Station (USA) 2010, p. VIII. 50 A. E. Shamoo, D. B. Resnik, Responsible Conduct of Research, 2015 (3rd edition), p. 113. 51 cf. D. B. Resnik, The Price of Truth – How Money Affects the Norms of Science, 2007, p. 107.


Scientific institutions try to resolve the issues related to conflicts of interest by requesting their employees to make declarations that facilitate the identification of such conflicts; similar declarations are required by the editorial offices of many scientific journals from the authors of manuscripts submitted to those offices. This practice increases the awareness of threats arising from conflicts of interest, and therefore it can reduce the incidence of troubles caused by ignorance in this respect. It seems, however, that such cases are neither the most frequent nor the most dangerous ones... Surveys conducted in the 1990s showed that ca. 30% of the leading authors of publications in the field of applied sciences are financially interested in the research results presented in those publications. A typical case is a researcher cooperating with an industrial enterprise on a more or less permanent basis, holding shares of that enterprise or deriving income from patents belonging to that enterprise, interested in further research cooperation or in starting a spin-off company based on research results accumulated during that cooperation. Not surprisingly, many such researchers acknowledged in the surveys their involvement in delaying or discontinuing the publication of research results to the benefit of a pending patent or negotiations related to patent licensing.52 The increasing involvement of both American and European universities in the commercialisation of research results is primarily motivated by the need to seek additional sources of funding, due to declining public expenditures (per capita) on research and higher education and the growing costs of increasingly sophisticated research equipment. This phenomenon entails a systematic increase in the importance of applied research (often at the expense of basic research), the penetration of the business spirit into academic milieus, the intensification of competition in the pursuit of research funding, and the gradual replacement of intellectual competition with economic strife. The pursuit of research funding is a major factor in the neglect of educational activities by academic teachers. The shortage of time, implied by bureaucratic duties associated with research projects (especially those financed from public money), can also lead to negligence in research work and even to doubtful practices concerning data processing. There are also reported cases of fraud committed under the pressure of deadlines related to grant applications or project closure53. Moreover, problematic practices in spending research money and cases of unfair treatment of co-workers are observed more and more frequently.

Example 13.9: Here are three examples of morally doubtful behaviours which have been increasingly approved by some academic milieus:
– Professor A, the vice-rector of a technical university, allocates overheads, resulting from a research project for industry, to the educational process.

52 ibid., p. 6. 53 cf. ibid., pp. 182–183.


– Doctor B, the owner of a patent for manufacturing insulation nanolayers, issues a negative opinion about a manuscript in which a competitive method for producing such nanolayers is proposed – to prevent the loss of income from his patent licensing.
– Professor C, the advisor of a Ph.D. student D, does not allow the doctoral thesis to evolve in line with the interests of the latter because he does not want to lose cheap workforce in a research project funded by industry.54

According to the best academic tradition, a university professor should protect academic freedom and other academic values, generate and disseminate new knowledge, synthesise existing knowledge and critically analyse its current status, as well as demonstrate its applicability in solving theoretical and practical problems. Today, a university professor is, first of all, expected to be involved in the process of raising funds by:
– organising paid trainings and workshops addressed to the employees of industry and business, including a mandatory element of marketing the research and educational services of the university;
– participating in the work of advisory and supervisory boards of companies, and promoting in this way the research and education services of the university;
– undertaking business activities, e.g. in the form of spin-off companies, in order to increase the revenues of researchers.
The members of academic staff are more and more often under pressure to act against the traditional openness of academic science, e.g. when completing research projects for industrial or military institutions which require secrecy. Opinions about the impact of such projects on scientific progress are ambivalent: on the one hand, their participants usually publish less and do not disclose research results in other ways; on the other, they materialise these results in the form of industrial products which may reach a much larger number of recipients than if they were only described in publications55. The members of academic staff are confronted not only with conflicts of interest but also with conflicts of obligation56. They must deal with such conflicts almost every day when deciding: how much time they should devote to research work, and how much to teaching; how much time they should devote to research sensu stricto, and how much to related bureaucratic tasks; how much time they should spend in the lab, and how much at the desk, describing the results of lab experiments.

54 inspired by: S. G. Bradley, “Managing Competing Interests”, [in] Scientific Integrity (Ed. F. L. Macrina), ASM Press, Washington D.C. 2005 (3rd edition), pp. 159–185. 55 cf. ibid., pp. 160–161. 56 On Being a Scientist: Responsible Conduct in Research, 2009, p. 44.


The intensity of such conflicts is undoubtedly higher today than in the past because the level of complexity of the research problems and of the methodologies for solving them has increased enormously, as has the complexity of the formal procedures for raising research funds, publishing, patenting and implementing research results57. Among the conflicts the members of academic staff are confronted with, the conflicts of conscience seem to be the most difficult. They appear when a researcher is forced, by financial or institutional circumstances, to undertake research activities inconsistent with his moral convictions, for example, to evaluate a grant application for funding research on medical uses of stem cells58. Finally, it is worth emphasising that the analysis of a conflict of values – to have some practical significance – must take into account not only its objective axiological aspect but also its subjective psychological background: the values do not only belong to our cognitive system, but also play the role of a psychological motivator of our actions.

57 cf. S. G. Bradley, “Managing Competing Interests”, 2005, p. 159. 58 cf. ibid., p. 164.

14 Principles of moral decision-making

The practical goal of ethics is to develop intellectual tools that are useful for making decisions in matters of moral significance. This chapter is an attempt to provide a general outline of those tools. Section 14.1 is devoted to the analysis of the most difficult decision-making situations, called moral dilemmas; Section 14.2 – to an outline of the most frequently used patterns of ethical thinking in everyday life; and Section 14.3 – to a tentative assessment of the usefulness of formal decision-support tools applied in engineering, production management or warfare management.
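As a purely illustrative preview of the formal tools mentioned above – with hypothetical options, criteria, ratings and weights – the following sketch shows the simplest of them, a weighted-sum scoring of decision options, as routinely used in engineering design; Section 14.3 discusses to what extent such tools can help in moral decision-making:

# Minimal weighted-sum scoring of decision options (all numbers hypothetical).
# Each option is rated on a 0-10 scale against each criterion; the weights sum to 1,
# and every rating is oriented so that "higher is better" (e.g. low risk -> high rating).
weights = {"benefit to society": 0.4, "low risk of harm": 0.3, "low cost": 0.3}
options = {
    "option A": {"benefit to society": 8, "low risk of harm": 4, "low cost": 6},
    "option B": {"benefit to society": 6, "low risk of harm": 9, "low cost": 7},
}

def weighted_score(ratings):
    # Sum of weight * rating over all criteria.
    return sum(weights[criterion] * ratings[criterion] for criterion in weights)

for name, ratings in options.items():
    print(name, "scores", round(weighted_score(ratings), 1))

The single linear scale that such a score presupposes is exactly what becomes problematic when the values at stake are incommensurable, as discussed in Section 13.3 and in Section 14.1 below.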

14.1 Moral dilemma

It is easy to be honest when being honest brings benefits, and much harder when it entails the loss of property, privileges or social admiration. The situation which implies the necessity to choose between two equally important values, when the choice of one of them excludes the realisation of the other, is called a dilemma1. In everyday language, a dilemma is understood as an undesirable or unpleasant choice, or a situation involving such a choice2; as a consequence, various difficult situations, without an explicit clash of values, are frequently called dilemmas, as in the following sentence: "The president is clearly in a dilemma about how to tackle the crisis"3. In this book, however, the term dilemma will be used in its more restrictive sense, referring to an explicitly identified conflict of values, and the term moral dilemma – if at least some of those values are of a moral nature; so, in the sense in which it is understood in the academic literature on metaethics and ethics4. The necessity to choose between right and wrong is, therefore, considered to be a false dilemma. The concept of moral dilemma is a subject-matter of philosophical controversies which concern not only the methods of its resolution but also its definition and typology. It seems that only a problem without a good solution (i.e. a solution that would be unquestionable from an ethical point of view) deserves the name of a moral dilemma. Such problems are logically impossible within a coherent and complete ethical theory.

1 from Greek di = "two" + lemma = "premise" or "anything received or taken". 2 cf., for example, the definitions provided in the online dictionaries available at http://www.dictionary.com/browse/dilemma [2018-07-21]. 3 from the online Cambridge Dictionary article "Dilemma" available at https://dictionary.cambridge.org/dictionary/english/dilemma [2018-07-21]. 4 e.g. Ø. Kvalnes, Moral Reasoning at Work: Rethinking Ethics in Organizations, Palgrave Macmillan, New York 2015, Chapter 2.


In decision-making practice, however, we do not – as a rule – refer to such a theory, but rather take into account various theories, objective facts and subjective feelings of people confronted with moral dilemmas.

Example 14.1: The role of subjective factors in the analysis and resolution of moral dilemmas may be illustrated with sociological studies referring to two thought experiments known under the names Trolley Dilemma (TD) and Fat-man Dilemma (FD). Their descriptions are as follows5:
(TD) A speeding trolley is rolling on the track towards the switch, near which an accidental observer is watching the scene. If he does not take any action, the trolley will hit a group of five persons working on the left branch of the track, and kill them; if he turns the switch, the trolley will kill one person working on the right branch of the track.
(FD) An accidental observer and an extremely obese man are watching the same scene while standing on a bridge over the switch. A speeding trolley is rolling towards that switch, which is set to the left branch of the track where five persons are working. If the observer does not take any action, then the trolley will kill all of them; if he pushes the obese man off the bridge at the right moment, then the body of that man will stop the trolley, and only that man will die.
Large-scale surveys, aimed at examining the reactions of "ordinary people" to the situations described, were carried out – first in the USA and next in Germany. When asked: "Would you turn the switch in the TD situation?", 70–90% of the respondents answered: "Yes". When asked: "Would you push the obese man off the bridge?", 70–90% of them responded: "No". Quick opinion surveys – made by the author of this book in several groups of students of various nationalities – have ended up with similar outcomes. The only factor differentiating the TD and FD situations is the direct perpetration of the death of a human being in the second case.

The ethics of virtues seems to be a natural platform for the integration of objective and subjective components of thinking about moral dilemmas. On the basis of such ethics, the problems which cannot be solved by means of logical tools – so, in the realm of intellectual virtues – may find a solution in the realm of moral virtues, such as courage and determination. This is because truly complex and significant moral problems are almost never solved by calculation, but by cutting the Gordian knot, as has been aptly noted by Evandro Agazzi6. A moral dilemma is a situation in which one should choose between alternative actions, but cannot undertake or avoid both; the characteristic elements of such a situation are: a collision of duties, the experience of powerlessness and uncertainty about the right decision, and the sense of guilt that one experiences after choosing one of the options. The philosophical dispute concerning the unresolvability of moral dilemmas, which implies the necessity of choosing morally evil acts, refers in fact to dilemmas formulated under the assumption of the availability of complete and certain knowledge about the potential consequences of alternative decisions. In many real situations, however, the doubts about the rightness of choice arise from the difficulties in foreseeing those consequences7. Moral dilemmas may emerge even if complete and certain knowledge about the potential consequences of alternative decisions is available – due to the identity, incommensurability or incompatibility of the conflicting values. In the first case, a dilemma – which consists in the simultaneous obligation to respect and not to respect a certain value – is called symmetrical8.

Example 14.2: A fisherman, sitting in a small boat on a lake, has noticed two swimmers drowning at the opposite (equidistant) ends of the lake. He has realised that he can rescue only one of them. It seems reasonable to assume that he has some sort of obligation to attempt rescuing both swimmers, but when fulfilling his obligation with respect to one of them, he will inevitably fail to fulfil his obligation with respect to the other. How should he choose?9

5 cf., for example: J. Baggini, P. S. Fosl, The Ethics Toolkit: A Compendium of Ethical Concepts and Methods, 2007, Section 4.17. 6 E. Agazzi, Right, Wrong and Science: The Ethical Dimensions of the Techno-scientific Enterprise, 2004, p. 160.


and certain knowledge about the potential consequences of alternative decisions. In many real situations, however, the doubts about the rightness of choice arise from the difficulties in foreseeing those consequences7. Moral dilemmas may emerge even if complete and certain knowledge about the potential consequences of alternative decisions is available due to the identity, incommensurability or incompatibility of the conflicting values. In the first case, a dilemma – which consists in the simultaneous obligation to respect and not to respect a certain value – is called symmetrical8. Example 14.2: A fisherman, sitting in a small boat on a lake, has noted two swimmers drowning at the opposite (equidistant) ends of the lake. He has realised that he can rescue only one of them. It seems reasonable to assume that he has some sort of obligation to attempt rescuing both swimmers, but when fulfilling his obligation with respect to one of them, he will inevitably fail to fulfil his obligation with respect to the other. How should he choose?9

The answer to the question, concluding the above example of symmetrical dilemma, depends on the evaluation of the conflicting obligations. If each of the obligations is an actual, in-force obligation, then by acting against any of them the fisherman becomes a wrongdoer, worthy of blame and censure, and subject to feelings of guilt. Since in the considered case he has inescapably to act against such an obligation, he is inescapably going to become a wrongdoer, tragically guilty and blameable. If, on the other hand, the obligations in this conflict are not actual, in-force obligations, but merely apparent or potential obligations of some sort, then (assuming that acting against merely potential (apparent) obligations is not wrongdoing) he may escape the tragic fate of becoming a wrongdoer, blameworthy and racked by guilt. Thus, the essence of moral dilemmas is the question of what the things in conflict are: actual in-force obligations or potential (apparent) obligation. The resolution of this theoretical issue implies the answer to a practical question: how should we regard a person who acts for the best in a moral conflict, and how should such a person regard herself?10 Serious moral problems, including symmetrical dilemmas, may disappear when the lacking knowledge is acquired – when, consequently, the problem under consideration is losing the status of moral dilemma because new alternative solutions appear, e.g. due to the development of science and technological progress.

7 B. Chyrowicz, O sytuacjach bez wyjścia, 2008, p. 159. 8 ibid., p. 180. 9 inspired by: T. B. Weber, “The Moral Dilemmas Debate, Deontic Logic, and the Impotence of Argument”, Argumentation, 2002, Vol. 16, No. 4, pp. 459–472. 10 ibid.

286

14 Principles of moral decision-making

Example 14.3: A symmetrical dilemma is faced by a physician who can save only one of the twins whose lives are endangered (assuming they have equal opportunities). An identical situation may be less dramatical due to the availability of new biomedical methods and techniques.

Two conflicting values are called incommensurable if they cannot be compared, i.e. none of them is better (more important, higher) than the other, but they cannot be considered equal. The conflicting values are called incompatible if they cannot be realised simultaneously11. Example 14.4: A plane with two hundred innocent passengers on board, kidnapped by a terrorist, is heading towards a skyscraper on Manhattan. If nothing stops it, not only the passengers but also the inhabitants of the skyscraper will die. For the commander of an anti-aircraft unit, who is to decide on the possible shooting down the aircraft, this situation may be or may not be a moral dilemma. – On the basis of purely consequentialist ethics, the main criterion for moral evaluation of the situation is the number of casualties; so, the only logically justified decision is to shoot. – On the basis of purely deontological ethics, advocating the unconditional prohibition of killing, the only logically justified decision is not to shoot. In both above cases, the dilemma does not arise; it would, however, have arisen if the unconditional obligation to save human lives were included in the catalogue of duties, postulated by deontological ethics.

The above example shows that, perhaps, the most morally significant dilemma we face is the choice of a system of ethical beliefs. This example is a thought experiment referring to historical events, but its logical structure is quite similar to the logical structure of a widely discussed problem of providing assistance to poor countries by rich countries. In the latter case, a moral dilemma does not arise if, according to the principles of justice, the right of any country to dispose of its national wealth is considered unconditional, and the postulate to help poor countries is considered supererogatory12, i.e. justified, but requiring actions which do not result from duty or obligation. The dilemma, however, will appear if the help is considered to be an unconditional moral obligation, colliding with the obligations implied by the principles of justice13.

Regardless of whether we are dealing with a dilemma in the metaethical or colloquial sense, and whether its source is a conflict of moral or non-moral values, it is inevitable that we are confronted with uncertainty in the process of its resolution. James Clerk Maxwell has been credited with the observation that the ability to

11 B. Chyrowicz, O sytuacjach bez wyjścia, 2008, p. 200. 12 Supererogatory (from Latin: super = “beyond” + erogare = “to pay out”) means “exceeding what is due or asked”. 13 B. Chyrowicz, O sytuacjach bez wyjścia, 2008, pp. 110–112.


endure uncertainty is a good indicator of life maturity. From this point of view, ethical reflection gives us a unique opportunity for maturation because uncertainty is omnipresent in it. Uncertainty is a natural feature of modern life because we are attracted and motivated by many kinds of values which cannot be attained at the same time, and – when trying to attain at least some of them – we are not always able to confidently decide what to do14. The source of difficulty is the necessity to confront moral and non-moral values, and to choose between incommensurable moral values (such as, for example, life and freedom) or between incompatible values (such as freedom and security). If the values “in play” are incommensurable, we can only rely on moral reflection, discussion and sometimes a random search for the best solution in a given situation. Each time we must take full responsibility for the decision made, never getting confident that it was indeed the best decision: a wise man should trust his judgment and his decision, but – at the same time – be aware of the weakness of the reasons supporting that trust15.

An important parameter of the decision-making process is the number of premises that should be included in this process. On the whole, the difficulty of decision-making increases with their number, although even two premises can generate a dilemma. It is generally easier to make decisions on the basis of an ethical system referring to only one moral value or principle, or to a set of values or principles which are arranged in a strictly hierarchical manner. The utilitarianism of Jeremy Bentham and the deontological ethics of Immanuel Kant are historical examples of such systems. Two-centuries-long critical debates over those systems, as well as everyday observation of current ethical discourse, have demonstrated insurmountable difficulties making us unable to agree, even on a small social scale, on a selection and hierarchy of fundamental moral values. Hence the attractiveness of pluralist ethical approaches, referring to subsets of partially ordered values, based on the conviction that at least some values cannot be hierarchically ordered16. Already in 1976, the Polish historian of philosophy Władysław Tatarkiewicz (1886–1980), when outlining the pursuit of moral perfection in the Western tradition, noted that ethics had become more pluralistic by the mid-1970s than it had been before, and that inducing someone to strive for perfection seems just as inappropriate as reprimanding him for not doing so. There are some reasons why striving for moral perfection does not attract social sympathy: the pursuit of perfection quite frequently implies a sense of

14 cf. Z. Szawarski, “Moral Uncertainty and Teaching Ethics”, Proc. COMEST Third Session (Rio de Janeiro, Brazil, December 1–4, 2003), pp. 80–84. 15 ibid. 16 J. Baggini, P. S. Fosl, The Ethics Toolkit: A Compendium of Ethical Concepts and Methods, 2007, Section 5.10.


superiority and self-satisfaction – characteristic of an attitude which is sometimes called Phariseeism. Everyday experience shows that this egocentric attitude gives, overall, worse moral and social results than an allocentric attitude based on kindness towards others rather than on self-perfection.17

Ethical pluralism does not necessarily lead to moral relativism. However, in its radical version, denying the possibility of introducing any order among moral values, it is equivalent to the view that all values (and consequently – all moral principles) are relative, i.e. dependent on place, time and culture, and even on personal tastes. The latter view undermines morality because it means that there are many sets of moral principles, and none of them is superior to any other; ergo, in a morally problematic situation it is not possible to judge which of the possible actions is better or worse.18

14.2 Selected patterns of ethical thinking

14.2.1 General scheme of moral decision-making

Each of our decisions, having a non-trivial impact on other people and the environment, is morally significant. Therefore, it usually requires reflection referring to certain moral premises: moral norms, past experience related to decisions of a similar nature, and sometimes also to ethical theories. Almost one hundred years ago, the American philosopher and educator John Dewey pointed to a similarity between a designer’s reasoning in engineering and reasoning aimed at making morally significant decisions19 – a similarity which is today quite frequently referred to by the authors of books on engineering ethics20. Neither the design of technical objects nor the design of ethical decisions can be fully algorithmised: the heuristic (creative) element of the design process plays an important role in both cases. The decision-making procedure resulting from this analogy may be outlined as follows:
– formulate the problem;
– collect factual information relevant to this problem;
– identify options for solving this problem;
– identify ethical values and principles that may be relevant to solving this problem;

17 W. Tatarkiewicz, O doskonałości, Państwowe Wydawnictwo Naukowe, Warszawa 1976, p. 41. 18 J. Baggini, P. S. Fosl, The Ethics Toolkit: A Compendium of Ethical Concepts and Methods, 2007, Section 2.14. 19 B. A. Fuchs, F. L. Macrina, “Ethics and the Scientist”, 2005 (3rd edition), pp. 53–68. 20 e.g.: C. Whitbeck, Ethics in Engineering Practice and Research, Cambridge University Press, New York 2011 (2nd edition), Chapter 3.


– evaluate the options for solving the problem from the point of view of these values and principles;
– choose the best option and, if that is impossible, an acceptable one.21

The factual information is important for the identification of options for solving the problem, but it plays only an auxiliary role in the moral assessment of these options; its overvaluation at this stage could lead to situational relativisation of the decision, depriving it of any moral sense. As far as the practical implementation of the described scheme is concerned, there are two main difficulties:
– the uncertainty of the identified options for solving the problem, implied by the incompleteness and uncertainty of factual information;
– the lack of a well-defined (especially quantitative) criterion for evaluation of these options, which would allow the best (or an acceptable) option to be chosen unambiguously.

In the process of making morally significant decisions, one can refer to an ethical theory (the theory-based approach), to ethical principles and rules (the principle-based approach), or to the results of the analysis of specific cases (the case-based approach). For centuries, no consensus has been reached as to which method of ethical argumentation is the most appropriate from a methodological point of view, and which one implies the best decisions in terms of their a posteriori long-term evaluation. The practice of everyday life, also in the sphere of scientific research, shows that all these methods are applied, with better or worse results, both separately and in various combinations. There is nothing wrong in trying to look at a planned or completed action from the point of view of the categorical imperative, from the point of view of the obligations towards the family, or from the point of view of its possible consequences. There is nothing wrong in trying, at the same time, to recall how similar problems were solved years ago by an uncle considered to be a moral authority among the family members. The problem begins when the real intention of such an analysis is not optimisation of a decision under consideration, but manipulation of one’s own or a neighbour’s consciousness, aimed at justification of a morally wrong decision. Philosophers most willingly refer to ethical theories which allow the fullest and deepest justification of the decision under consideration. The use of those theories in everyday life or research practice can be, however, problematic because, after 25 centuries of debates, no agreement has been reached as to which of them is the best. Moreover, their deeper understanding remains, on the whole, beyond the intellectual reach of Mr. Average or even Dr. Average. The generality

21 M. W. Martin, R. Schinzinger, Introduction to Engineering Ethics, McGraw Hill, New York 2010 (2nd edition), pp. 30–32.


of a theory is its unquestionable formal advantage, but it can also be its practical disadvantage because the generality of concepts is, as a rule, a source of great interpretational variability and uncertainty. It is enough to recall the spectrum of interpretations of such basic metaethical concepts as “good” or “value” (cf. Chapter 12). If we had and accepted an ethical theory comprising a complete, consistent and hierarchically ordered set of general moral principles, then making a decision in any specific case could be reduced to carrying out logically correct reasoning. Unfortunately, no such theory is available, and therefore we have to supplement the general principles with detailed norms pertaining to some special situations. This is a source of difficulties which grow with the number of those detailed norms, as it becomes increasingly difficult to ensure their coherence and the logical consistency of reasoning; this is also the source of frequently expressed doubts and scepticism about the usefulness of ethical theories.

Example 14.5: The code of ethics, issued in 1992 by the Association for Computing Machinery22, contains the following two injunctions: “Avoid harm to others” (Section 1.2) and “Respect the privacy of others” (Section 1.7). It does not, however, provide any guidance for decision-making in the case when doing harm to a person A can only be avoided by disrespecting the privacy of a person B.23

Doubts and discouragement may also be provoked by the observation that – even in a set of principles and rules derived from a single coherent ethical theory – it is very difficult to indicate any significant moral principle or rule which could be applied without any exceptions24. We kill a terrorist who threatens hundreds of pupils imprisoned in a school building; we consider lying under torture, inflicted by an enemy, to be an act of heroism; we steal medicine from a pharmacy when it is the only way to save our child. The methodology of making decisions by reference to a chosen ethical theory and the corresponding moral principles usually works in standard situations, but it is seldom sufficient for making decisions in unprecedented and unforeseen situations. Just as the discovery of a new elementary particle may reveal the inadequacy of the established and generally accepted theory of elementary particles, a new life situation may reveal the weakness of a relevant ethical theory, and an attempt to use this theory, despite doubts, may cause harm to a particular person. Mr. Average or even Dr. Average expects, therefore, clear and simple guidelines for decision-making rather than a sophisticated theory; hence the popularity, especially in

22 ACM Code of Ethics and Professional Conduct, Association for Computing Machinery, 1992, http://www.acm.org/about/code-of-ethics [2018-01-16]. 23 cf. J. van den Hoven, “Moral Methodology and Information Technology”, [in] The Handbook of Information and Computer Ethics (Eds. K. E. Himma, H. T. Tavani), Wiley & Sons, Hoboken (USA) 2008, pp. 49–67. 24 J. Baggini, P. S. Fosl, The Ethics Toolkit: A Compendium of Ethical Concepts and Methods, 2007, Section 3.2.


the USA and other Anglo-Saxon countries, of attempts to solve new problems by studying similar cases resolved in the past, i.e. to act in a similar way to English judges, who refer to the precedent rulings of higher courts when deciding new cases. Advocates of such an approach, which is sometimes called casuistry25, consider any attempt to search for universal ethical principles as doomed to failure; they argue that good moral decisions may be made only by referring to the specific circumstances of a problem under consideration and to the precedents known from the past26. Casuistry is considered to be particularly useful in resolving disputes whose participants do not share common ethical conceptions. The only objective to be attained then is to agree on the rightness of a particular action under given circumstances; such consent is often possible even among people of different theoretical convictions and worldviews27. If two cases are identical, they should be treated in the same way; if they are analogous – in an analogous way. The obvious weakness of basing decisions on conclusions resulting from the analysis of analogous cases, described in the literature, is the lack of clear-cut criteria for recognising cases as analogous, especially when many premises of a subtle nature are to be taken into account in the decision-making process, as often happens in the practice of scientific research. Moreover, recourse to a precedent is generally perceived as a very weak justification for a decision, especially in continental Europe.

The methodology of decision-making that refers to the elements of various ethical theories and related moral norms seems to be free from the above-discussed disadvantages of the two alternative approaches. It is, however, also criticised: on the one hand – as an eclectic solution, combining elements of various ethical theories without proper justification; on the other – as an intellectual tool too abstract for practitioners. In the case of research ethics, the first objection is softened by the argument that the core ethical principles that guide science, such as objectivity and honesty, cannot be questioned on the grounds of any major ethical theory, and the differences of opinion concerning peripheral principles, even those related to the protection of intellectual property, do not threaten the identity and survival of science institutions. The second objection, in turn, is mitigated by supplementing abstract principles with a set of appropriate rules and instructions for dealing with certain classes of situations or problems.

25 from Latin casus = “case”. 26 J. van den Hoven, “Moral Methodology and Information Technology”, 2008 p. 54. 27 J. Baggini, P. S. Fosl, The Ethics Toolkit: A Compendium of Ethical Concepts and Methods, 2007, Section 5.4.


Example 14.6: On the website of the Markkula Center for Applied Ethics, one can find a guide to making ethical decisions which refers to various ethical theories28. It recommends analysing and evaluating the available options for solving a moral problem by using the following ethical systems: utilitarianism (Utility Test), ethics of rights (Rights Test, Exceptions Test and Choices Test), ethics of justice (Justice or Fairness Test), ethics of the common good (Common Good Test) and virtue ethics (Virtue and Character Test). The weakest point of that methodology of decision-making is the scheme for aggregation of the results of the individual tests (How to Compare Conclusions from the Different Approaches).

14.2.2 Selected tools of ethical thinking

Conscience. According to many ethical orientations, especially of religious provenance, conscience is the basic tool for ethical decision-making. It is understood as an inner voice that encourages us to do good and to avoid evil, and enables us to assess our actions and conduct. The sanction for not following the recommendations of this voice is the guilt that arises at the moment of realising the discrepancy between our behaviour and the accepted norms. So, conscience is the ability of the human intellect to learn about moral good and evil, as well as about moral norms. Although the theoreticians of conscience are divided in explaining its origin (God, biological evolution, upbringing), they agree that it works in a somewhat automatic way, translating general norms into concrete conclusions regarding the evaluation of a given unique situation. Conscience is thus a kind of ethical intuition29 which makes it possible to decide without referring to such methods of discursive cognition as conceptualisation, reasoning and judging.

Golden rule. Many moral problems can be avoided by sticking to the golden rule: “Do not do to others what you would not have them do to you” (cf. Subsection 12.2.7). Certainly, we would not like to read scientific articles that contain false data; we would not like to listen to conference presentations infected with “propaganda of success”; we would not like to find, in someone else’s article, our own drawing without information about its origin; we would not like to be the author of a manuscript which has not been published because of the reviewer’s ignorance; we would not like to discover that a research project which we described in a grant proposal is being implemented by a laboratory where the reviewer of this proposal is employed. Such a list could be the shortest textbook of research ethics – the shortest and at the same time sufficient for dealing with many morally demanding situations a scientist is confronted with.

28 Ethical Decision Making, The Markkula Center for Applied Ethics, Santa Clara University, http:// www.scu.edu/ethics/practicing/decision/ [2018-01-15]. 29 from Latin intuitio = “insight”.


Primum non nocere maxim. The Latin maxim primum non nocere, which means “first, do no harm”, is attributed to the Greek physician Hippocrates of Kos. It can be considered one of the guiding principles of all ethics, as avoiding harmful behaviour is a moral duty not only for medical doctors but also for engineers, teachers and politicians; in fact, it is everyone’s duty – both in public and in private life. It is only after fulfilling the requirement of non-maleficence that we may think about how to contribute to the proliferation of good. Let’s note that this ethical minimalism, resulting from the priority of proscriptions over prescriptions, is characteristic of many ethical systems of a deontological nature; the Decalogue, for example, consists of eight prohibitions and two precepts, one of which concerns respect for parents, and the other, celebration of holidays. On the other hand, the systems of consequentialist ethics recommend taking actions that lead to the best effects – such as an increase in the prosperity, happiness or well-being of a community – which may be achieved not only by avoiding harm but also by reducing suffering and limiting harm30.

Metaphor of slippery slope. Arguments referring to the metaphor of a slippery slope carry a warning against taking and accepting acts (the top of a slippery slope) which can lead – through a sequence of similar acts – to the approval of such acts that we would never have given consent to (the bottom of a slippery slope). This argument is based on the observation that any, even very insignificant, departure from the rules creates a precedent that can be referred to in the future as a kind of justification for the withdrawal from them. Even an incidental withdrawal from a moral rule attenuates resistance against the next withdrawal from it, and sometimes opens the way to questioning a more general principle from which this rule follows.

Example 14.7: The author of the book 101 Ethical Dilemmas illustrates the slippery-slope argumentation with a story about how dodo birds died out in the sixteenth century on the islands of the Indian Ocean. Starving Dutch sailors decided to capture a single bird to satiate their hunger. It turned out to be easy, and there were many birds on the islands. Their population, however, started to diminish quickly: if hunger had justified the capture of one of them, why would it not have justified the capture of the second, of the third. . .31

Scientific research contributes to the progress of our civilisation, understood as the appearance and dissemination of new everyday objects and large technical infrastructures, new methods of healthcare and other ways to improve the quality of life. The barrier of initial mistrust and caution is today overcome very quickly, quite often – too quickly. There are no good reasons to forbid a company C from producing a face cream containing nanoparticles when such creams are already

30 J. Baggini, P. S. Fosl, The Ethics Toolkit: A Compendium of Ethical Concepts and Methods, 2007, Section 3.4. 31 M. Cohen, 101 Ethical Dilemmas, 2007 (2nd edition), p. 146.


offered by the companies A and B. Already forty years ago, the German-American philosopher Hans Jonas (1903–1993), in his 1979 book Das Prinzip Verantwortung32, pointed out that individual actions – seemingly “innocent” because they passed, for example, the test of the categorical imperative – can cause, after massification, a long-lasting accumulation of negative effects up to the level of endangering human existence on Earth33.

14.3 Decision theory versus ethical thinking

The processes of decision-making in engineering design, in setting up scientific experiments, in production management, as well as in planning military operations, are increasingly supported by computational tools referring to the outcomes of decision theory, game theory, optimisation theory and machine learning. The corresponding methodologies of decision-making can help in rationally choosing actions aimed at attaining an assumed goal, but they cannot answer the question whether this goal has been chosen correctly from a moral or pragmatic point of view. Their effectiveness significantly depends on the adequacy of the semantic and mathematical models describing the problem to be solved, on the measurability of the quantities referred to by those models and on the accuracy of measurement of these quantities. Even if those methodologies of decision-making are applied to physical objects, whose mathematical modelling and measurement are most advanced today (cf. Chapter 5 and Chapter 6), the outcomes may be very uncertain; their applicability in solving moral problems is, therefore, rather problematic. It seems, however, that they can be at least a source of inspiration for making decisions related to such problems. A series of examples, illustrating an elementary approach to decision support in engineering, is provided in this subsection to corroborate this supposition34. It demonstrates, in a comprehensible manner, the usefulness of multi-criterial optimisation (also known as multi-objective programming or Pareto optimisation) for this purpose.

32 an English translation: H. Jonas, The Imperative of Responsibility: In Search of an Ethics for the Technological Age, University of Chicago Press, Chicago 1985. 33 T. Morris, Hans Jonas’s Ethic of Responsibility: From Ontology to Ecology, State University of New York Press, Albany 2013, pp. 18, 33, 125. 34 These examples are referring to the author’s M.Sc. thesis: R. Z. Morawski, Badanie zależności górnej częstotliwości granicznej pierścieniowej piątki liczącej od pobieranej mocy zasilania, Instytut Radioelektroniki PW, Warszawa 1972.


Example 14.8: The most important performance indicators characterising an electronic pulse counter are the following: its limit frequency f, i.e. the maximum pulse frequency at which counting is still correct, and the power P consumed by the counter. They both depend on the values of the parameters of the elements the counter is composed of; let’s assume that the most important of them are R and C. In Figures 14.1 and 14.2, the dependences of f and P on R and C – determined empirically for a certain structure of the counter – are shown; in Figures 14.3 and 14.4, the cross-sections facilitating the analysis of these relationships near the maxima of the function f(R, C) are provided. Those figures may be used for designing a counter with a maximum limit frequency, under the assumption that its power consumption cannot exceed a predefined value P0. If we allow P > 1.72 mW, the optimal solution will be R = 4.0 kΩ and C = 2.0 pF; however, when we require that P < 1.6 mW, a better solution will be R = 2.0 kΩ and C = 4.0 pF.

Figure 14.1: Dependence of the limit frequency of the counter (f [MHz]) on the parameters of its elements (R [kΩ], C [pF]).

Figure 14.2: Dependence of the power consumed by the counter (P [mW]) on the parameters of its elements (R [kΩ], C [pF]).


Figure 14.3: Cross-sections of the function from Figure 14.1 (f [MHz] versus R [kΩ] for C = 2 pF and C = 4 pF).

Figure 14.4: Cross-sections of the function from Figure 14.2 (P [mW] versus R [kΩ] for C = 2 pF and C = 4 pF).

In the above example, a simple version of mono-criterial optimisation has been used for selecting the best design: it refers to a scalar criterion and a scalar constraint, both defined by real-valued functions of two real-valued variables. In more general versions of mono-criterial optimisation, the criterion is a scalar function of an N-dimensional vector v ≡ [v1 ... vN]^T, and the constraint is defined by means of a vector of scalar functions of v; each decision variable vn (n = 1, ..., N) may be real, integer or complex. The choice of the criterion and constraint determines the practical effectiveness of optimisation. The decision variables represent the most important factors affecting the quality of functioning of the object under design; so, they should reflect our notion of what would be the most desirable outcome of the design process, and – at the same time – should be closely related to a mathematical model of that object – adequate to the needs and identifiable on the basis of available measurement data. On the other hand, the optimisation constraints are equations and inequalities which contain information on the limitations of resources (time, money, materials, etc.) that can be used for implementation of the design, and on the limitations implied by the laws of nature that exclude certain operations (such as production of a negative mass). On the whole, the numerical implementation of mono-criterial optimisation itself is no longer a practical problem today because high-speed computers and rich libraries of optimisation algorithms are available, and these algorithms work reliably if the number of decision variables and the number of maxima or minima of the criterion are moderate; difficulties may arise when the dimensionality of the optimisation problem increases beyond certain limits. Most important are, therefore, the logical difficulties related to the formulation of that problem, i.e. the difficulties related to the definition of the criterion and of the constraint.
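Such a formulation can be made more tangible with a minimal numerical sketch in Python; the criterion J(v) and the constraint g(v) below are purely hypothetical stand-ins (they are not the measured characteristics of the counter), and the SciPy library is used only as one possible solver:

import numpy as np
from scipy.optimize import minimize

def J(v):                                    # scalar criterion to be minimised
    return (v[0] - 1.0) ** 2 + (v[1] - 2.0) ** 2

def g(v):                                    # inequality constraint, g(v) >= 0
    return 2.5 - (v[0] + v[1])               # i.e. v1 + v2 <= 2.5

result = minimize(J, x0=np.array([0.0, 0.0]), method="SLSQP",
                  bounds=[(0.0, 5.0), (0.0, 5.0)],
                  constraints=[{"type": "ineq", "fun": g}])
print(result.x, result.fun)                  # constrained optimum of v and J(v)

In this toy problem the unconstrained minimum of J violates the constraint, so the reported optimum lies on the constraint boundary, which is a simple illustration of how the choice of constraints co-determines the final design.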

Example 14.9: The cost of production of the counter from the previous example is influenced by the price of the elements used for its construction, and the latter depends on the required precision of their parameters R and C, i.e. on their scattering due to the imperfection of the manufacturing process (greater scattering implies lower precision and lower price). If, for economic reasons, we decide to use elements whose parameters may differ by up to 10% from their nominal values, then:
– the choice of R = 4.0 kΩ and C = 2.0 pF will be risky, even if P > 1.72 mW is allowed, because for the extreme values of R, 3.6 kΩ and 4.4 kΩ, the limit frequency will be only 125 MHz;
– the choice of R = 2.0 kΩ and C = 4.0 pF will be recommended because the limit frequency will then always exceed 165 MHz (the values read from Figure 14.3).
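The reasoning of Example 14.9 may be sketched in the same spirit; f_model and p_model below stand for some interpolants of the measured surfaces of Figures 14.1 and 14.2, so they, like the 10% tolerance, are hypothetical placeholders rather than elements of the original study:

from itertools import product

def worst_case(f_model, p_model, R, C, tol=0.10):
    # evaluate the design at the extreme combinations of R and C;
    # checking only the corners presupposes monotonic behaviour inside the box
    corners = product((R * (1 - tol), R * (1 + tol)),
                      (C * (1 - tol), C * (1 + tol)))
    f_min, p_max = float("inf"), 0.0
    for r, c in corners:
        f_min = min(f_min, f_model(r, c))    # guaranteed limit frequency
        p_max = max(p_max, p_model(r, c))    # worst-case power consumption
    return f_min, p_max

A candidate pair (R, C) would then be accepted only if the returned f_min stays above the required limit frequency and p_max does not exceed the admissible power consumption P0.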

Another difficulty related to the use of optimisation results in practice is illustrated in Figure 14.5. The dashed lines indicate the dispersion of the frequency cross-sections caused by the scattering (or instability) of the parameters of other elements of the counter, which were included neither in the criterion nor in the constraints. In this case, the choice of R = 2.0 kΩ and C = 4.0 pF will be a better solution from the manufacturing point of view than the choice of R = 4.0 kΩ and C = 2.0 pF.

Figure 14.5: Dispersion of the cross-sections of the dependence of the limit frequency on the parameters of the elements of the counter (f [MHz] versus R [kΩ] for C = 2 pF and C = 4 pF).

It should be noted that there is some arbitrariness in the inclusion of a given portion of information about the problem in the criterion or in the constraints. In particular, it is sometimes possible and expedient to swap an element between the constraint and the criterion, or to transform a problem of constrained optimisation into a problem of unconstrained optimisation.

Example 14.10: The problem of mono-criterial optimisation, analysed in the previous examples, can be reformulated as a problem of unconstrained optimisation, e.g. by defining a new criterion s(R, C) as the ratio of the limit frequency f(R, C), which we would like to maximise, and the consumed power P(R, C), which we would like to limit. The graph of s(R, C) is shown in Figure 14.6.

Figure 14.6: Dependence of the ratio of the limit frequency and the power consumed (s [MHz/mW]) on the parameters of the elements of the counter (R [kΩ], C [pF]).
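A compact Python sketch of this reformulation may look as follows; f_model and p_model are, as before, hypothetical stand-ins for the measured dependences, and R_grid and C_grid denote hypothetical lists of admissible parameter values:

def best_by_ratio(f_model, p_model, R_grid, C_grid):
    # maximise s(R, C) = f(R, C) / P(R, C) over a grid of candidate designs
    return max(((r, c) for r in R_grid for c in C_grid),
               key=lambda rc: f_model(*rc) / p_model(*rc))

Different ways of folding the power consumption into the criterion (a ratio, a weighted sum, a penalty term) may, of course, point to different "optimal" designs, which already hints at the multi-criterial perspective discussed below.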

The above-outlined methods of mono-criterial optimisation, quite effective if applied for solving decision-making problems related to physical phenomena, have turned out to be insufficient for solving real-world economic, social or psychological problems – mainly because mathematical modelling of economic, social and psychological phenomena is much more uncertain, both quantitatively and qualitatively, than modelling of phenomena described by classical physics. For a century, those difficulties have been a driving force of the development of new methods of multi-criterial optimisation, referring to uncertain and incomplete information on the problem to be solved. Roughly speaking, these methods consist in ranking the candidate solutions rather than in finding a single optimum optimorum solution (a sketch of such a ranking is given below).
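A minimal sketch of such a ranking for two objectives of the counter example (maximise the limit frequency f [MHz] and minimise the consumed power P [mW]) is given here in Python; the three candidate designs and their numerical values are hypothetical illustrations only:

def pareto_front(candidates):
    # keep the non-dominated candidates: no other candidate is at least as good
    # in both objectives and strictly better in at least one of them
    front = []
    for name, f, p in candidates:
        dominated = any(f2 >= f and p2 <= p and (f2 > f or p2 < p)
                        for _, f2, p2 in candidates)
        if not dominated:
            front.append((name, f, p))
    return front

designs = [("A", 190.0, 1.75), ("B", 170.0, 1.55), ("C", 150.0, 1.60)]
print(pareto_front(designs))    # design C is dominated by design B and drops out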


They enable, therefore, the decision-maker to choose the final solution after taking into account some additional criteria which, for formal reasons or technical difficulties, could not be directly included in the optimisation process. These methods are used in decision support systems dedicated to decision-making under uncertainty, when even the probability of the effects of the optional decisions is unknown. In such cases, the classic statistical methods fail, and the use of a Bayesian approach turns out to be advantageous, also when applied to moral decision-making. More insightful reflection shows, however, that the classic scheme of mono-criterial optimisation with constraints can be at least inspiring in this area – especially if it is enhanced with some mechanisms for the accommodation of uncertainty. It seems that this classic scheme can be a platform for the integration of three basic approaches to ethics, i.e. consequentialist ethics, deontological ethics and ethics of virtue. The most general scheme of this integration results from the analogy linking:
– the effects of decisions with optimisation criteria;
– moral duties with optimisation constraints;
– the virtues of decision-makers with the numerical advantages of optimisation algorithms.
The above analogy is an intuitive hint behind the following pattern for making morally significant decisions: choose a solution which is the best in the sense of its anticipated consequences, but search only in a subset of admissible solutions, determined by moral duties. This analogy should, moreover, encourage decision-makers to keep improving their skills by developing and cultivating the moral and intellectual virtues that support those skills.

Let's take a closer look at the components of this pattern of thinking. The most obvious criteria for optimising morally significant decisions were proposed by the founders of ethical utilitarianism: the calculus of pleasure or happiness evaluated over a certain human population and integrated over an interval of time. Despite the tendency to extend these criteria to the global population without any time limit, it would still be difficult to indicate examples of actions oriented towards all people belonging to both living and future generations. The fundamental limitation of such criteria results from the lack of measures that could be used for comparison of various sensations experienced by a single person, not to mention sensations experienced by different persons at different times. The resulting helplessness makes decision-makers and analysts refer to substitute criteria, such as the income per capita, the number of publications per scholar or the percentage of miners satisfied with their professional privileges. The criteria referring to such abstract values as justice or freedom are even more problematic because, on the whole, it is very difficult to quantify the influence of specific decisions on their realisation. Even if we agree about certain measures of happiness, justice or freedom, a question arises as to how to use them properly in the definition of a criterion to be applied for decision optimisation: the value of a certain decision effect, expressed by means of such measures, should be somehow weighted by the probability of this effect, while estimation of that probability is,


as a rule, very uncertain. The latter problem is subject to a comprehensive analysis in the book Moral Uncertainty and Its Consequences by the American philosopher Ted W. Lockhart (*1952)35.

Example 14.11: Let's assume that two possible decisions, D1 and D2, satisfy the requirement of moral righteousness to a comparable degree, but the risk of failure to achieve the intended effects is greater for the decision D2. The choice of D1 is obvious in this case36. It will not be evident, however, if D1 and D2 differ in the degree of satisfying the requirement of moral righteousness. The question then arises: “Is it better to choose D1, which may bring a less positive outcome with higher probability, or D2, which may bring a more positive outcome with lower probability?” The answer depends, of course, on the quantitative characteristics of the effects of the decisions D1 and D2, on the probability of their appearance and on the way of weighing them that is considered most appropriate37.
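The dilemma of Example 14.11, together with the pattern "moral duties as constraints, anticipated consequences as the criterion" sketched earlier in this section, may be illustrated by a short Python fragment; all names, values and probabilities below are hypothetical illustrations, not data taken from the cited literature:

options = [
    # (name, value of the intended effect, probability of achieving it,
    #  does the option respect all recognised moral duties?)
    ("D1", 60.0, 0.90, True),
    ("D2", 100.0, 0.45, True),
    ("D3", 120.0, 0.80, False),              # inadmissible: violates a duty
]

admissible = [o for o in options if o[3]]            # duties as constraints
best = max(admissible, key=lambda o: o[1] * o[2])    # probability-weighted value
for name, value, prob, _ in admissible:
    print(name, value * prob)                        # D1: 54.0, D2: 45.0
print("chosen:", best[0])                            # D1

The fragility of such a calculation is evident: lowering the assumed probability for D1 from 0.90 to 0.70 already reverses the ranking (42.0 versus 45.0), which is precisely the point made above about the uncertainty of the estimated probabilities.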

Ted W. Lockhart's book is rather controversial – mainly because, on the whole, it is impossible to estimate the probabilities of events with an accuracy of two significant digits, while such an accuracy is referred to in the examples provided therein. On the other hand, however, these examples show how much the outcome of the decision-making process depends on the values of those probabilities, and thus how fragile are the foundations of a rationality which ignores them. It seems, therefore, that the thinking patterns suggested by Ted W. Lockhart can play some inspiring role in our attempts to deal with the uncertainty of moral decisions.

The quantitative uncertainty of measurement may be reduced by averaging redundant results of measurements (cf. Section 10.5). Per analogiam, collective decision-making is a method for reducing the qualitative uncertainty of significant decisions at the state level, at the institutional level or even at the family level. It turns out, however, that the efficacy of this method may be quite limited when morally significant decisions are made by boards of experts in a relevant field.

Example 14.12: A proposal of a fake experiment into the neurobiology of social behaviour was sent for approval to 43 Canadian research ethics boards. The protocol of the planned experiment included the following assumptions:
– some of the participants have a history of disturbed social behaviour;
– their brains are scanned while they perform a series of tasks;
– one of their tasks involves viewing violent images.
On the basis of similar ethical norms, the boards asked for approval responded differently: 30 rejected the proposal, 10 approved it with qualifications, and 3 accepted it unconditionally. It would be unrealistic to expect all boards to reach the same decision, but the dispersion of answers in this case shows that ethical norms are being applied in worryingly different ways. It should be,

35 T. W. Lockhart, Moral Uncertainty and its Consequences, Oxford University Press, New York – Oxford (UK) 2000. 36 ibid., p. 4, 12. 37 ibid., pp. 33, 41, 45, 145–147.


however, noted that a more common problem with research ethics boards is not a willingness to allow dubious studies, but rather an excessively cautious approach resulting from ignorance of the experimental methods involved.38

The uncertainty of morally significant decisions depends on many factors related both to the subject-matter and to the decision-maker. These are, in particular:
– the complexity of the subject-matter, and the availability and uncertainty of relevant knowledge;
– the ethical orientation, life experience and intellectual skills of the decision-maker.
There are also less “rational” factors, such as the emotional profile of the decision-maker or his health condition. Both the intellectual performance and the emotional states of that person may be modulated not only by genetic predispositions or somatic ailments but also by the composition of the intestinal flora and by the presence of manipulative parasites in the nervous system.

Example 14.13: It has been known for decades that variations and changes in the composition of the intestinal flora influence the normal physiology of a human organism and contribute to diseases ranging from inflammation to obesity. Recent studies indicate, moreover, that they also influence the central nervous system – possibly through neural, endocrine and immune pathways – and thereby brain function and behaviour.39

Example 14.14: Toxoplasma gondii represents perhaps one of the most convincing examples of a manipulative parasite of vertebrates. It is a kind of protozoan capable of infecting all warm-blooded animals and human beings. Members of the cat family are the only definitive hosts, within which this parasite undergoes full gametogenesis and mating within the intestinal epithelium. It appears to cause a range of behavioural alterations across host species. The authors of a 2013 article40 hypothesise that infection with Toxoplasma gondii may be a causal factor contributing to the appearance of psychiatric disorders such as schizophrenia. The author of another 2013 article41 explains that various parasites can affect host behaviour by interfering with normal neural communication, by secreting substances that directly alter neuronal activity via nongenomic mechanisms or by inducing genomic- or proteomic-based changes in the brain of the host. Parasites typically

38 J. de Champlain, J. Patenaude, “Review of a Mock Research Protocol in Functional Neuroimaging by Canadian Research Ethics Boards”, Journal of Medical Ethics, 2006, Vol. 32, No. 9, pp. 530–534. 39 J. F. Cryan, T. G. Dinan, “Mind-Altering Microorganisms: The Impact of the Gut Microbiota on Brain and Behaviour”, Nature Reviews ‘Neuroscience’, October 2012, Vol. 13, pp. 701–712. 40 J. P. Webster, M. Kaushik, G. C. Bristow, G. A. McConkey, “Toxoplasma Gondii Infection, from Predation to Schizophrenia: Can Animal Behaviour Help Us Understand Human Behaviour?”, Journal of Experimental Biology, 2013, Vol. 216, No. 1, pp. 99–112. 41 S. A. Adamo, “Parasites: Evolution’s Neurobiologists”, Journal of Experimental Biology, 2013, Vol. 216, No. 1, pp. 3–10.


induce a variety of effects in several parts of the brain, which, however, cause only very selective changes in the host behaviour. Several studies have indicated that Toxoplasma gondii can affect people's personality by slowing down their reactions or making them more likely to take risks42. A 2017 article43 reports studies that have shown an association between the prevalence of that parasite and the incidence of neurodegenerative disorders such as Parkinson's disease and Alzheimer's disease.

As a rule, morally significant decisions are undertaken in a social context since they refer to at least one member of society other than the decision-maker. The decision-making process may, therefore, be influenced by that social context in various ways: by persuasion, advice, suggestion, manipulation, etc. The influence of moral and non-moral authorities has played a significant role from the very appearance of morality. . .

Example 14.15: In the period 1961–1963, the American psychologist Stanley Milgram (1933–1984) carried out a series of experiments aimed at the evaluation of the willingness of people to obey an authority figure who instructed them to perform acts conflicting with their personal conscience. Participants were led to believe that they were assisting in an unrelated experiment as “teachers”, in which they had to administer electric shocks to “learners”; those fake electric shocks, whose magnitude was gradually increased, would have been fatal had they been real. The experiment demonstrated that a majority of participants would fully obey the instructions, regardless of the consequences44. Various versions of Stanley Milgram's experiments have been repeated many times around the globe, recently at the University of Wrocław in Poland45, with fairly consistent results.

Let’s now consider constraints the optimisation of moral decisions is subject to. They are imposed mainly by the recognised and accepted moral precepts and prohibitions, especially by our moral obligations with respect to others, including those implied by their rights. They may also result from particularly treasured values, such as freedom and justice, which have not been taken into account in the definition of the optimisation criterion. After considering all constraints, it may turn out that the set of admissible decisions is empty; it is

42 cf. A. Parlog, D. Schlüter, I. R. Dunay, “Toxoplasma Gondii-Induced Neuronal Alterations”, Parasite Immunology, 2015, Vol. 37, pp. 159–170. 43 H. M. Ngô, Y. Zhou, H. Lorenzi, K. Wang, T.-K. Kim, Y. Zhou, K. E. Bissati, E. Mui, L. Fraczek, S. V. Rajagopala, C. W. Roberts, F. L. Henriquez, A. Montpetit, J. M. Blackwell, S. E. Jamieson, K. Wheeler, I. J. Begeman, C. Naranjo-Galvis, N. Alliey-Rodriguez, R. G. Davis, L. Soroceanu, C. Cobbs, D. A. Steindler, K. Boyer, A. G. Noble, C. N. Swisher, P. T. Heydemann, P. Rabiah, S. Withers, P. Soteropoulos, L. Hood, R. McLeod, “Toxoplasma Modulates Signature Pathways of Human Epilepsy, Neurodegeneration & Cancer”, Scientific Reports, 2017, Vol. 7, No. 1, pp. 1–32 of Art #11496, https://doi.org/10.1038/s41598-017-10675-6 [2018-01-14]. 44 S. Milgram, Obedience to authority: an experimental view, Harper & Row, New York 1974. 45 D. Doliński, T. Grzyb, Posłuszni do bólu, Wyd. “Smak Słowa”, Sopot 2017.

14.3 Decision theory versus ethical thinking

303

a moral dilemma understood as an impasse situation. It may turn out that this set contains only a single solution, and no optimisation is needed. However, if it contains at least two solutions, then the best of them must be chosen on the basis of the adopted criterion. At this point, the question arises: “What is the role of the decision-maker's virtues in the presented decision-making scheme?”. It would be a question of secondary importance if moral values could be measured using the methodologies applied for measuring basic physical quantities, and if our predictive ability were unlimited; but this is not the case. The intellectual abilities of the decision-maker, his accuracy in gathering the information necessary for making a decision, his readiness to sacrifice his own material benefits for the public good, etc. – these are the factors which in a substantial way influence the identification of the moral problem, the formulation of a decision-making task in the language of constrained optimisation, and finally, the mitigation of all the uncertainties that arise in the process of solving it.

The ideas regarding the use of the elements of decision theory in ethics, presented in this subsection, are neither new nor singular. The analysis of the risk accompanying the implementation of new technoscientific achievements makes us aware of the necessity to include the issue of risk assessment in ethics46. In fact, the literature on research ethics has for several decades insisted on linking risk to technoscientific development. Nowadays, risk is considered to be a morally relevant factor because the concept of risk is almost synonymous with the concept of danger; so, fighting various kinds of danger is becoming a major function of technology47. The development of decision-making models taking into account risk and uncertainty should become, according to the Swedish philosopher Sven O. Hansson, a fundamental task for moral philosophy48. Until now, the role of moral philosophy has consisted mainly in dealing with human behaviour in well-defined situations, and the role of decision theory – in dealing with rational behaviour in an uncertain environment. This traditional division of roles seems to be anachronistic because it leaves ethical risk-related issues beyond the scope of interest of both disciplines. On the one hand, we have the right to security, but on the other, the functioning of modern societies is not possible without taking risks; hence the key importance of risk issues for ethics49.

46 S. O. Hansson, “An Agenda for the Ethics of Risk”, [in] The Ethics of Technological Risk (Eds. L. Asveled, S. Roeser), Earthscan-Sterling, London 2009; H. Zandvoort, “Requirements for the Social Acceptability of Risk-Generating Technological Activities”, [in] The Ethics of Technological Risk (Eds. L. Asveled, S. Roeser), Earthscan-Sterling, London 2009. 47 E. Agazzi, Right, Wrong and Science: The Ethical Dimensions of the Techno-scientific Enterprise, 2004, p. 146. 48 S. O. Hansson, “An Agenda for the Ethics of Risk”, 2009, p. 12. 49 ibid., p. 21.


At least since the very beginning of the twenty-first century, numerous attempts to use the elements of decision theory in ethics have been motivated by the needs of robotics. The application of robots in healthcare and warfare is the source of important ethical concerns. The necessity to answer difficult questions – whether robots pose a threat to humans, whether some uses of robots are morally problematic or how robots should be designed to act “morally” – has contributed to the development of a new sub-field of ethics of technology, called robot ethics50.

50 An introduction may be found in: F. Amigoni, V. Schiaffonati, “Ethics for Robots as Experimental Technologies: Pairing Anticipation with Exploration to Evaluate the Social Impact of Robotics”, IEEE Robotics and Automation Magazine, 2018, Vol. 25, No. 1, pp. 30–36.

15 General issues of research ethics

This chapter contains an overview of ethical issues related to a typical cycle of empirical research, comprising:
– the definition of research goals and research methodology;
– the formulation of hypotheses and exploration of sources of background information;
– the design of experiments and their technical and logistic preparation;
– the implementation of experiments and acquisition of raw experimental data;
– the analysis, processing and interpretation of those data;
– the publication of research results.
This overview begins with a general characterisation of ethical violations in technoscience and their tentative aetiology (Section 15.2). Next, an outline of the evolution of research ethics over the last century is provided (Section 15.3) and ethical issues concerning the choice of research problems and research methodology are specified (Sections 15.4 and 15.5). The next two chapters cover, in more detail, the ethical aspects of experimenting, handling data and publishing research results.

15.1 Metaethical assumptions

The main task of research ethics is to establish and justify rules (or principles) of morally good research practice – to be observed by each participant of any research process. These are, roughly speaking, the rules of research reliability, the rules of loyalty with respect to other participants of a research process and the rules of research productiveness. Research ethics does not apply directly to science, but to research activity, i.e. moral judgments refer, in principle, to research operations, not to scientific knowledge, because the generation of scientific knowledge is not only morally acceptable but also morally desirable. The qualifier “in principle” is to signal that this assumption raises understandable controversies in the applied sciences, whose results are quickly transferred to socio-economic practice. In the age of technoscience, when systemic pressure is exerted by political and administrative bodies on the institutions and people of science to increase the practical effectiveness of research, it is difficult to defend the claim that there is no such truth whose pursuit would be morally unacceptable.


Example 15.1: A research project aimed at finding a cheap and effective method of causing a global epidemic of an infectious disease1 in order to kill as many people as possible should be considered morally unacceptable.

A scientist is responsible for fully conscious, free and intended acts – for both good and bad acts, if he knows that they are good or bad; he is responsible not only for performing an act but also for its planning and its future consequences. The interpretation of this statement depends primarily on the answer to the question about the scope of our freedom (as discussed in Section 11.2). This interpretation was subject to significant evolution during the twentieth century, mainly due to the traumatic experiences associated with the accelerated transfer of research outcomes to socio-economic practice, and especially with their military applications. When the Nuremberg trials2 made the world aware of the atrocity of “scientific research” carried out by Nazi “scientists”, even the physicists involved in the development of atomic weapons during the war, Albert Einstein and Julius R. Oppenheimer, stood up for the purely peaceful use of nuclear energy. This was not the only premise motivating researchers to revise the positivist postulate of the axiological neutrality of science. In the second half of the twentieth century, the progressing globalisation of science and the integration of research institutions with economic and political institutions abolished, step by step, the traditional safeguards against the negative consequences of science in social life3. On the other hand, the massification of science, i.e. the unprecedented increase in the number of researchers, inevitably lowered the standards concerning their qualifications: the door was opened for people with mediocre intellectual and moral qualifications, motivated by the prospects of easy social advancement rather than by cognitive curiosity. Both those factors, when combined with a general decline of traditional ethical standards, contributed to a noticeable increase in the incidence and extent of research-related misconduct; the community of scientific argumentation – traditionally referring to the respect for truth and responsibility for information – started to disintegrate.

The “optimists” are inclined to contest this statement, claiming that it is not the incidence of abuses that is growing, but our sensitivity to those abuses and the

1 A number of realistic options may be found in the Wikipedia article “List of Infectious Diseases” available at https://en.wikipedia.org/wiki/List_of_infectious_diseases [2018-06-11]. 2 for details, cf. the Wikipedia article “Nuremberg Trials”, available at https://en.wikipedia.org/ wiki/Nuremberg_trials [2018-06-11]. 3 cf. J. Ziman, “Why Must Scientists Become More Ethically Sensitive than They Used to Be?”, Science, 1998, Vol. 282, No. 5395, pp. 1813–1814.


availability of information about them. The results of fragmentary and local sociological studies on research misconduct, carried out with increasing intensity since the 1970s, mainly in the USA, are still not sufficient proof in this matter. It seems, however, that we are about to see representative meta-analyses of partial results, and even the initiation of appropriate investigations on a global scale. In the meantime, it is safer to accept the hypothesis that we have to deal with a serious ethical problem and to work on its solution. To realise the alarming scale of abuse in science, it is enough to make a simple internet search guided by key terms like research misconduct or research fraud4. There are good reasons to believe that this is not a random perturbation in the development of technoscience but a regularity causally determined by bureaucratic mechanisms embedded in the management of research on a global scale. Those mechanisms drive the negative evolution of technoscience since they give a better chance of “survival” to bad research practices than to good ones5. It is symptomatic that the instances of misconduct most frequently occur in research directly concerning human life.

Example 15.2: The Scientist, a professional magazine intended for life scientists, has been systematically publishing short messages about cases of misconduct. In May 2018 alone, the following messages appeared there:
– “Research Scandal Involving Popular Heart Drug Engulfs Three More Papers” in the article “The scientists involved have hired lawyers to fight the conclusions of a recent investigation into some studies of Diovan in Japan” (May 4, 2018);
– “Chief Academic Officer Accused in Ongoing Research Scandal at UCL” in the article “New allegations of fraud committed under the watch of geneticist David Latchman were made last year” (May 17, 2018);
– “Scientist Who Received Millions from NIH Leaves Alabama Posts” in the article “An investigation finds 20 papers by Santosh Katiyar, who studied alternative treatments for cancer, include image manipulation” (May 24, 2018);
– “Another Retraction for Discredited Researcher” in the article “Robert Ryan was forced to resign from the University of Dundee in 2016 following an investigation of misconduct” (May 25, 2018);
– “USC President Steps Down in Wake of Gynecologist Scandal” in the article “An uproar over the university’s handling of sexual misconduct accusations led to C.L. Max Nikias’s resignation” (May 29, 2018).6

4 or Scientific Misconduct and Scientific Fraud. 5 P. E. Smaldino, R. McElreath, “The Natural Selection of Bad Science”, Royal Society Open Science, September 21, 2016, http://rsos.royalsocietypublishing.org/content/3/9/160384 [2018-04-16]. 6 cf. the results of search on the website of The Scientist, available at https://www.the-scientist. com/?articles.search/searchTerm/misconduct [2018-06-12].

Example 15.3: An anonymous computer-based survey, addressed to the community of biomedical researchers and research managers, active within Belgian universities and industrial enterprises, was conducted in 2017. It was focused on 22 forms of research misconduct classified into six groups, viz.: data misconduct, methods misconduct, credit misconduct, policy misconduct, cutting corners misconduct and outside influence misconduct. The following general conclusions were drawn on the basis of 617 responses received from academia and 100 responses received from industry: – The prevalence of reporting admitted research misconduct (at least one of its 22 forms) was higher within universities (71%) than within industrial enterprises (61%). – The prevalence of reporting observed research misconduct (at least one of its 22 forms) was higher within universities (93%) than within industrial enterprises (84%). – The only form of research misconduct, which appeared more frequently at industrial enterprises, was plagiarism of ideas and of the form of their expression. – The estimated incidence of admitted and observed research misconduct proved to be comparable with that estimated in other surveys on research integrity.7

15.2 Typology and aetiology of research misconduct

15.2.1 Typology of research misconduct

Increased interest in research misconduct appeared in the USA towards the end of the 1970s; in the early 1990s, it entailed the formalisation of institutional procedures aimed at preventing its occurrence, and the introduction of courses on research ethics into the graduate curricula offered by major academic institutions8. The perpetrators of gross acts of abuse, which may be qualified as instances of fabrication, falsification or plagiarism (labelled with the acronym FFP), are legally prosecuted in the USA on the basis of the relevant federal regulations, which define those acts in a narrower but more precise way than the textbooks of research ethics9. In 2000, the US federal government adopted a uniform definition of research misconduct as “fabrication, falsification or plagiarism in proposing, performing or reviewing research, or in reporting research results”. According to the document “Federal Research Misconduct Policy”:
– Fabrication should be understood as “making up data or results and recording or reporting them”.

7 S. Godecharle, S. Fieuws, B. Nemery, K. Dierickx, “Scientists Still Behaving Badly? A Survey Within Industry and Universities”, Science and Engineering Ethics, October 2, 2017, pp. 1–21. 8 cf. F. L. Macrina, “Methods, Manners, and the Responsible Conduct of Research”, [in] Scientific Integrity (Ed. F. L. Macrina), ASM Press, Washington D.C. 2005 (3rd edition), pp. 1–18. 9 ibid., p. 11.

– Falsification should be understood as “manipulating research materials, equipment, or processes, or changing or omitting data or results such that the research is not accurately represented in the research record”.
– Plagiarism should be understood as “the appropriation of another person’s ideas, processes, results, or words without giving appropriate credit”.10
The research record in the definition of falsification should be understood as “the record of data or results that embody the facts resulting from scientific inquiry, and includes, but is not limited to, research proposals, laboratory records, both physical and electronic, progress reports, abstracts, theses, oral presentations, internal reports, and journal articles”. Since 2001, American research institutions interested in receiving federal funding for their research projects have been obliged to incorporate the above-defined federal standard in their research misconduct policies; they can, of course, go beyond that standard. According to a 2015 study11, based on the analysis of such policies introduced by 198 institutions, ca. 60% of the institutions added forms of misconduct other than FFP, such as:
– other serious deviations from accepted practices;
– other deception involving manipulation of data or experiments;
– misuse of confidential information obtained in peer review;
– significant violations of legal regulations or policies pertaining to human or animal subjects of research, biosafety or radiation safety;
– misappropriation of funds or property;
– unethical authorship-related practices other than plagiarism;
– misrepresentation of one’s credentials or qualifications;
– failure to disclose significant financial interests;
– inappropriate reaction to misconduct: retaliation for good faith misconduct allegations, interfering with a misconduct investigation, covering up misconduct, failing to report misconduct one knows about or making knowingly false or malicious misconduct allegations.
A researcher is not only expected to refrain from the dishonest behaviours listed above but also to be diligent: deviations from accepted practices resulting from negligence, and therefore called negligence mistakes, are also subject to ethical scrutiny and appraisal, since their consequences may be no less harmful than those of FFP. A researcher may, for example, support a false hypothesis using falsified data, but he can also stick to a false hypothesis because of not making the effort to acquire data that could falsify it. This category of errors also includes unnecessarily solving a problem already solved – thus wasting resources. This happens, most frequently, when a researcher does not complete a careful review of relevant sources of information before approaching a new research problem. 10 “Federal Research Misconduct Policy”, Federal Register, Office of Science and Technology Policy, December 6, 2000, Vol. 65, No. 235, pp. 76260–76264. 11 D. B. Resnik, T. Neal, A. Raymond, G. E. Kissling, “Research Misconduct Definitions Adopted by U.S. Research Institutions: Introduction”, Accountability in Research, 2015, Vol. 22, No. 1, pp. 14–21.

There is also a category of honest mistakes which are neither intentional nor caused by negligence. They result, as a rule, from the limitations of available resources (especially laboratory infrastructure) or from other objective circumstances hindering or preventing the performance of certain research operations in a manner consistent with the methodological standards accepted in a given scientific discipline. Those mistakes are generally subject to the mildest ethical qualification, but their evaluation should be very cautious: one should always make sure that they are not just a consequence or side-effect of a more serious mistake consisting, for example, in undertaking a project despite being aware of the lack of sufficient resources to complete it in accordance with relevant methodological standards.

15.2.2 Aetiology of research misconduct

The intellectual, moral and psychological weakness of academic staff is the primary and most important cause of all forms and instances of research misconduct.

Example 15.4: The authors of the 2018 paper “Nine pitfalls of research misconduct”12 coined the acronym TRAGEDIES for nine psychological and sociological factors that can lead scientists astray from the path of research integrity. Below, each of these factors is illustrated with an example of misleading reasoning quoted from that paper:
– temptation: “Getting my name on this article would look really good on my CV.”
– rationalisation: “It’s only a few data points, and those runs were flawed anyway.”
– ambition: “The better the story we can tell, the better a journal we can go for.”
– group and authority pressure: “The principal investigator’s instructions don’t exactly match the protocol approved by the ethics review board, but she is the senior researcher.”
– entitlement: “I’ve worked so hard on this, and I know this works, and I need to get this publication.”
– deception: “I’m sure it would have turned out this way (if I had done it).”
– incrementalism: “It’s only a single data point I’m excluding, and just this once.”
– embarrassment: “I don’t want to look foolish for not knowing how to do this.”
– stupid systems: “It counts more if we divide this manuscript into three submissions instead of just one.”

12 C. K. Gunsalus, A. D. Robinson, “Nine Pitfalls of Research Misconduct”, Nature, 2018, Vol. 557, pp. 297–299.

However, there are numerous factors related to the institutional conditions of science – in particular, the “stupid systems” mentioned in the above example – which may influence the occurrence and structure of research misbehaviours; many of them are side-effects of the transformation of classical science into technoscience.

First, the integrated system of academic and industrial science is oriented more towards research outcomes, especially those immediately applicable, than towards the research process itself. The “business spirit” hovering over research institutions is systematically strengthening the priority of non-scientific criteria in the assessment of research projects and of non-scientific methods of acting, such as marketing of research results or even social engineering13. In this situation, the number of potential sources of conflicts of interest is increasing dramatically, as is the number of “rational” justifications for deviating from the principle of objectivity in the interpretation of one’s own research results and in the evaluation of those provided by others. There are “good” financial reasons in the background of those phenomena: researchers, persuaded and manipulated by research administration, are more and more interested in the protection of their intellectual property, which adversely affects the implementation of the principle of openness in science.

Secondly, the transformation of the academic vocation into a profession, not very different from that of a clerk or salesman, is progressing; the individual motivation for doing research is evolving accordingly – towards utilitarian and egoistic values. Today, the pursuit of notoriety, money and promotion is increasingly recognised by the scientific community as a core component of a scientific career. Vanity, traditionally considered a serious defect of human character, is increasingly recognised as a virtue in the academic world. Some scholars devote a lot of time to self-promotion, often involving the press, television and internet media. Since journalists are not always able to properly recognise the self-advertisement performed by those scholars, a false image of science is created in the media: the scholars’ contribution to scientific achievements is exaggerated, as is the importance of those achievements. Similar misrepresentation of scientific achievements occurs, on an almost everyday basis, in the proceedings related to academic promotions and the awarding of scientific degrees. Very many, too many, academic careers are built on the skilful use of formal systems of evaluation without making a substantial research contribution to science.

Massification of science – in the absence of demographic changes which would indicate a statistically significant increase in the share of scientifically talented people – must inevitably lead to an increase in the percentage of only moderately gifted employees of science. The latter focus their research on carefully selected secondary issues, corresponding to their moderate capabilities, because they are unable to face the challenge of dealing with the most important ones.

13 See the definition and exemplifications of this phenomenon, provided in the Wikipedia article “Social Engineering (security)” available at https://en.wikipedia.org/wiki/Social_engineering_(security) [2018-06-13].

As a consequence, the share of insignificant research results is growing, and research institutions adapt their plans, infrastructure and policies to the needs of the mediocre majority of their employees. Research curiosity has stopped being the main motivating factor of scientific endeavours. Today’s researchers are predominantly motivated by good income and still relatively high social prestige – rather than by an internal drive to solve challenging cognitive problems.

Example 15.5: The average annual salary of a full professor affiliated with an American university of the first category14 reached the level of USD 132,471 in 2017, and his average compensation (i.e. the annual salary increased by incentives and benefits, such as group health care coverage, short-term disability insurance and contributions to a retirement savings account) – USD 170,20115. At the same time, the average salary for American workers was USD 44,56416.

Already in 1974, the famous American physicist Richard P. Feynman noted that many researchers, whose motivation is of a non-cognitive nature, are inclined to perform feigned scientific activity, i.e. to reproduce research “rituals” without understanding their purpose and meaning17. They have their own journals, publishing unimportant articles which are needed only by their authors to increase the values of bibliometric indicators used for the institutional assessment of research performance.

Thirdly, the bureaucratisation of the systems of research management eliminates the most predisposed individuals, i.e. those most interested in creative work, aware of their creative potential, endowed with a sense of observation and criticism, and averse to opportunism and, in particular, to wasting time and resources on idle or even harmful formal activities which limit the effectiveness of research in the name of its enhancement. The bureaucracy associated with research management is the main cause of the “publish or perish” syndrome, which consists in the overvaluation of publications since one cannot survive in science without extensive publishing. The bureaucracy is also the causal background of the disappearance of sound and decent scientific criticism and, consequently, of the impunity of many “gurus” of science who, thanks to the functions they perform in the management structures of technoscience, have been able to continue their impostor activity without restraint.

14 The American universities are split into five categories: category I (Doctoral), category IIA (Master’s), category IIB (Baccalaureate), category III (associate’s with ranks) and category IV (associate’s without rank). 15 “Visualizing Change – The Annual Report on the Economic Status of the Profession, 2016–17”, Academe, March-April 2017, pp. 4–67, https://www.aaup.org/file/FCS_2016-17.pdf [2018-02-26]. 16 A. Doyle, “Average Salary Information for US Workers”, The Balance, February 14, 2018, https://www.thebalance.com/average-salary-information-for-us-workers-2060808 [2018-02-26]. 17 R. P. Feynman, “Cargo Cult Science”, 1974.

Fourthly, the increasing complexity of research projects engenders qualitatively new problems related to their organisation and financing. Traditional hierarchical organisational structures of universities and non-academic research institutions, mapping the nineteenth-century classification of sciences, turn out to be inadequate to the needs of interdisciplinary research. Increasingly, a need for real (not only ideologically motivated) cooperation appears – the close cooperation among the departments of the same university, as well as among different academic and non-academic institutions. The scale of research projects is growing, both in terms of the number of researchers involved and in terms of their overall costs; for example, the European Union has allocated a billion euros for a 10-year project on studying the human brain18. There is a growing need to finance large projects from many sources at the same time. In the complex, networked and often partially or entirely virtual, structures of the interdisciplinary research, the responsibility for attaining their substantive goals is “blurred”, as well as the responsibility for controlling the quality of empirical data and their flow. In combination with insufficient preparation of scientific staff to deal with methodological and ethical problems of scientific research, this increases the probability of negligence mistakes, and often also creates an opportunity for research misconduct of more serious nature.

15.3 Evolution of research ethics

Along with the accelerated development of science over the past 200 years, views on research methodology and ethics have evolved as well. The traditional nineteenth-century model of science, still cultivated in some research institutions in the 1960s, appealed to individual epistemic authority and to a conviction that science may provide absolutely reliable (certain) research results (scientific knowledge). When, however, advancements in the fields of physics and chemistry revealed significant limitations of theories previously considered irrefutable, both those pillars of traditional research methodology and ethics had to be revised. They have been, step by step, replaced with a collective epistemic authority recognising, without exception, the uncertainty of research outcomes. Those changes in research methodology and ethics have been justified by Karl R. Popper in the following way:
– Even in a narrow field of expertise, the amount of accumulated knowledge exceeds the individual capacity of oversight.
– Errors cannot be avoided; they can appear even in the best-tested theories.

18 The website of this project is located at https://www.humanbrainproject.eu/en/ [2018-02-27].

– Detection of errors, their analysis and correction, as well as drawing conclusions from them, is a fundamental task of any researcher.
– Researchers are morally obliged to be critical with respect to their own research results and to be grateful for external criticism.
– The reliable functioning of efficient mechanisms of organised criticism is a systemic necessity of science.19

In parallel with the above-outlined, ethically important, positive tendency in the development of science in the twentieth century, a negative trend towards the relativisation of values associated with science emerged – a trend which may lead to the disintegration of the whole system of those values. The traditionally recognised values are reinterpreted in such a way that they are considered valid only under certain circumstances or conditions. This opens the way to justifying resignation from some of them and to the affirmation of anti-values. The following are the most dangerous symptoms of the relativisation of the values associated with science:
– blurring the boundaries of science, manifested in the growing indifference of the scientific community to the differences between strictly scientific activities (and their products) and peri-scientific activities (and their products) such as research management and technical, expert or journalistic work;
– deviation from scientific standards and criteria, manifested in the increasingly widespread use of benchmarks and indicators defined outside science, such as short-term practical utility;
– shifting individual priorities from the pursuit of scientific truth to the elimination of competitors in applying for research grants and in fighting for power in scientific and peri-scientific institutions;
– accepting the supremacy of non-scientific authorities, e.g. scientific administration, in resolving scientific issues;
– fuzzification of the status of truth as the central value of science, manifested in the increasingly common treatment of scientific work as an activity aimed at personal benefits or gaining fame or publicity;
– accepting the lack of freedom of science, manifested in the increasingly common belief that one can be a researcher without exercising the freedom to effectively search for scientific truth;
– diminishing interest in cognitively important problems, motivated by a conviction that dealing with them is not only a waste of time and money but also a risk of exposure to trouble and distress.20

19 K. R. Popper, In Search of a Better World: Lectures and Essays from Thirty Years, Routledge, London 1994, Chapter 4. 20 J. Goćkowski, Ethos nauki i role uczonych, Wyd. “Secesja”, Kraków 1996, pp. 244–246.

15.4 Choice of a research problem

The choice of a research problem is considered here from the perspective of an individual researcher, such as the head of a research group. Some other very important and difficult issues related to the choice of research remain beyond the scope of this section, viz. the priorities selected and promoted by scientific institutions, state authorities and international organisations. Those issues are usually considered in specialised philosophical literature devoted to the evolution of our civilisation21, or to the assessment of risk associated with the implementation of new technologies22, such as nanotechnologies23 or biotechnologies24. The following criteria are most often taken into account in the selection of a research problem: the significance and originality or novelty of research objectives or of methods for attaining them, the ethical quality of research objectives, and the probability and costs of attaining them. The first two of those criteria, i.e. significance and originality, play a key role in assessing whether a research problem can be regarded as scientific or not: neither a new but insignificant problem nor a significant but already solved problem can be recognised as scientific. Failure to meet one of these criteria is the most common weakness of the research projects described in grant applications and in publications reporting on their results, recently also in numerous articles published by renowned journals. The criterion of cognitive novelty seems to be undergoing trivialisation. More and more representatives of applied sciences publish mathematical or logical considerations which could be worked out a vista, without special research effort, by a sufficiently proficient mathematician or theoretical physicist. On the other hand, many conference articles – written only to enable their authors to get institutional support for visiting attractive places where scientific conferences are held – are devoid of any significance.

21 e.g. D. Birnbacher, Verantwortung für zukünftige Generationen, Reclam Verlag, Stuttgart 1988; H. Ruh, T. Gröbly, Die Zukunft ist ethisch – oder gar nicht, Waldgut Verlag, Frauenfeld 2008. 22 e.g. L. Asveld, S. Roeser (Eds.), The Ethics of Technological Risk, Earthscan-Sterling, London 2009. 23 e.g. F. Allhoff, P. Lin, J. Moor, J. Weckert (Eds.), Nanoethics – The Ethical and Social Implications of Nanotechnology, Wiley & Sons, Hoboken (USA) 2007. 24 e.g. Biotechnology Risk Assessment Research Grants Program, United States Department of Agriculture, National Institute of Food and Agriculture, https://nifa.usda.gov/funding-opportunity/biotechnology-risk-assessment-research-grants-program-brag [2018-06-13].

Example 15.6: Computers are indispensable in all domains of science. In many of them, the demand for computing power cannot be met despite the growing speed of consecutive generations of computers. Thus, research on faster algorithms for data processing has not become outdated. There is, however, a symptomatically large number of conference articles devoted to a minor reduction in computational effort, e.g. by 10%, presented as a scientific achievement. This is especially problematic in research projects where changing the computer or the basic software can bring an acceleration of calculations, e.g. by a factor of 10.

The novelty of the research objectives or research methods increases the risk of project failure, which is, as a rule, negatively perceived by the majority of research sponsors. We cannot, however, avoid this risk, because where it is absent there is no room for scientific research. We must be prepared for the evolution of the project during its implementation, in particular – for abandoning some initial research objectives or replacing them with new ones. The choice of research methods and tools also involves the risk that certain values will be jeopardised. On the other hand, we should not forget about the “risk of success”, i.e. the risk of the occurrence of unexpected research outcomes, important for technoscience or society. The assessment of the moral admissibility of an applied research project refers, primarily, to the research objectives, i.e. to the expected or planned cognitive results of that project and to the intended practical use of those results (knowledge), often being the main justification for launching and financing the project. The moral evaluation of research objectives should, first of all, refer to the fundamental principle of medical ethics: primum non nocere; quite often, however, this may be impeded by our limited capacity to foresee the consequences of scientific achievements.

Example 15.7: In the period 1962–1970, the American army dropped over 55,000 tons of a herbicide on Vietnam, causing the leaves of plants to fall off. It was a synthetic substance identified by Arthur W. Galston (1920–2008), from the University of Illinois, and described in his 1943 Ph.D. thesis. In low concentrations, this substance accelerates the flowering of plants and therefore enables harvesting in zones of cooler climate. This was the motivation behind his research work, while the observation that in higher concentrations this substance deprives trees of their leaves was its side-effect. The latter property of the herbicide was extensively exploited for the purpose of war. The use of the herbicide was banned only 5 years before the end of the Vietnam War under the pressure of a group of scientists, including Arthur W. Galston. He wrote then, “I used to think that one could avoid involvement in the anti-social consequences of science simply by not working on any project that might be turned to evil or destructive ends. I have learned that things are not that simple. The only recourse is for a scientist to remain involved with it to the end”.25

The implementation of the principle primum non nocere may also be hindered by uncertainty about the moral standards of the potential users of research results. 25 based on: On Being a Scientist: Responsible Conduct in Research, 2009, p. 49.

There is, probably, no discovery or invention which, in the absence of moral constraints, could not be turned against humanity. It seems that the only reasonable option is to assume that some such constraints still work in society and to make every effort to anticipate the negative and inevitable consequences of our research.

Example 15.8: Genetic information allows almost unambiguous identification of a person, as well as probabilistic characterisation of his health condition. It is today used not only by the police, security agencies and judicial authorities, but also – more and more frequently – by civil services and private institutions. The most controversial instances of this practice are the following:
– the use of genetic information in the procedures of recruiting new workers, in order to reduce the risk of employing people who, due to their genetic propensity for some occupational diseases, are less suitable for working in difficult and harmful conditions;
– the use of genetic information by insurance companies to justify a favourable (for them) diversification of insurance rates.
Moral doubts also surround the prospects of human enhancement produced through genetic manipulations aimed at increasing the physical potential of athletes or the intellectual potential of scientists and politicians.26

After decades of intensive research, genome-editing technologies have already reached the stage of clinical trials in humans (e.g. in China)27. Up to now, they have demonstrated two important findings: genetic engineering is possible in human embryos, but the available technologies require essential improvements before they can be applied in medical practice. Those findings intensify the discussion over the ethical aspects of editing the human genome. Those participants of this discussion who suffer from genetic diseases ask why we must not manipulate nature if we can safely prevent severe genetic diseases and create healthy humans (as we have already done in some animal populations). Other participants object by pointing to the fact that, for the time being, the long-term effects of genome editing remain unknown. The most pragmatic participants of the discussion articulate the need for strict international regulations concerning research on editing the human genome. Since such regulations, they argue, are in place for nuclear power, the same should be possible for the genetic engineering of human embryos. By the end of 2017, the US National Academy of Sciences and the US National Academy of Medicine launched a human gene-editing initiative aimed at providing “researchers, clinicians, policymakers, and societies around the world with a comprehensive understanding of human gene editing to help inform decision-making about this research and its application”28.

26 cf. C. L. Munro, “Genetic Technology and Scientific Integrity”, [in] Scientific Integrity (Ed. F. L. Macrina), ASM Press, Washington D.C. 2005 (3rd edition), pp. 247–268. 27 L. Tang, Y. Zeng, H. Du, M. Gong, J. Peng, B. Zhang, M. Lei, F. Zhao, W. Wang, X. Li, J. Liu, “CRISPR/Cas9-mediated Gene Editing in Human Zygotes Using Cas9 Protein”, Molecular Genetics and Genomics, 2017, Vol. 292, No. 3, pp. 525–533.

They have justified this initiative in the following way: “Powerful new gene-editing technologies [. . .] hold great promise for advancing science and treating disease, but they also raise concerns and present complex challenges, particularly because of their potential to be used to make genetic changes that could be passed on to future generations, thereby modifying the human germline”29.

Are we able to predict the effects and side-effects of morally questionable applications of genetic engineering? A strictly negative answer would be a kind of hypocrisy when we can read about them in science-fiction literature. Similarly, it would be hypocritical to say that the ecological effects of the widespread use of the findings of nanotechnological research in the production of cosmetics cannot be foreseen. The fact that we are dealing with undesirable applications of genetic techniques and with undesirable follow-ups of the development of nanotechnology is a consequence of numerous individual and institutional decisions consisting in the selection of the “lesser evil”. The consequences of a managerial decision, being delayed in time and affecting society in general, are usually perceived as a lesser evil than immediate consequences affecting the profits of a company or the level of unemployment in a related sector of the economy. The consequentialist way of thinking, which frequently justifies the choice of the lesser evil, arouses many doubts of a deontological nature30, because the choice may not always be as obvious and morally unmistakable as in the case of suffering a painful, but healing, injection (lesser evil) instead of waiting for certain death (greater evil).

The moral responsibility for the negative consequences of scientific research can be attributed to researchers only when these consequences are both inevitable and predictable. The likely negative effects of a discovery made in basic sciences can only be recognised in its application – so, they are neither predictable nor necessary because they depend on a free and conscious choice of those applications. The situation in applied sciences is quite different: most modern research funding systems favour projects which indicate at least the range of potential applications. It might seem that this situation should encourage researchers to reflect more insightfully on the possible consequences of attaining research goals. International experience demonstrates, however, that this is not always the case: artificially exaggerating the significance of the project described in a grant application, in order to increase the chance of its financing, is a more frequent response to this situation. The same research project may be problematic under certain circumstances, and not incite any moral doubts under others.

28 The website of the initiative is located at http://nationalacademies.org/gene-editing/index.htm [2018-03-03]. 29 ibid. 30 B. Chyrowicz, O sytuacjach bez wyjścia, 2008, pp. 33–334, 377.

Example 15.9: Scientific research, absorbing extensive financial resources and engaging large human resources, is conducted in many economically developed countries while millions of inhabitants of the Third World are suffering hunger. Is it morally acceptable to continue this type of research?

A young holder of a scientific degree, interested in a research career, is frequently confronted with difficult decisions related to scientific self-determination, including the determination of one’s own research area – with the necessity to choose between basic research and more profitable applied research, or the necessity to choose between military and civilian institutions as a workplace31.

Example 15.10: Dr. A, just after receiving his Ph.D. degree, is about to choose the area of his further research activity. He is offered a job by a military research institute, related to an attractive project “A system for guiding missiles towards live targets”. The following arguments speak in favour of undertaking this job:
– novelty and significance of the topic,
– access to well-equipped research laboratories,
– prospects for mastering a technology of the future (and prospects for related career developments),
– possible defensive applications of research results,
– possible civilian applications of research results (after expiration of the non-disclosure agreement),
– prospects for a high salary and for its growth with successful accomplishments of the project,
– prospects for awards, honours, distinctions, etc.
There are, however, also serious arguments speaking against undertaking the job offered by the military research institute:
– possible aggressive applications of research results,
– secrecy of research results, excluding their publication or patenting,
– constraints imposed on international contacts (on research networking),
– constraints imposed on the transfer of research results to the sphere of education,
– possible negative reactions of the research community (ethical doubts related to the research subject, disproportional financing, etc.),
– risk of getting addicted to an easy personal success.
When reflecting on the decision, Dr. A will probably also refer to the following statements, which should be regarded as rationalisations rather than sterling arguments:
– “if not me, then somebody else. . .,”
– “I will have control over the follow-ups of my research,”
– “I will sabotage implementation of research results,”
– “if you want peace, prepare for war,”
– “all peoples of the world arm themselves,”
– “everybody in my place. . .,”
– “the results will never be applied,”
– “researchers are not responsible for applications of their research results.”

31 A Guide to Teaching the Ethical Dimensions of Science, 2016.

Thus, when choosing research problems, we may be confronted with moral doubts when the intended research results can serve both good (e.g. defence) and evil (e.g. aggression) purposes; when the resources allocated for the research in question could be better used (e.g. for projects aimed at aiding poor countries) or when the intention of the sponsor is not unambiguously positive. The two most common schemes of unethical sponsorship in practice are the following:
– The sponsor, motivated by profit, intends to “selectively” use the results of research or the conclusions derived from them (a typical example: the pharmaceutical industry).
– The sponsor wants to subordinate the research programme to political aims or to the struggle for power (a typical example: the arms industry).

Example 15.11: The tobacco industry has a decades-long history of hiding the truth about the harmful effects of its products by casting doubts on the scientific evidence linking tobacco use to disease and death. In 1993, the US Environmental Protection Agency issued a document stating that environmental tobacco smoke should be classified as a Class A carcinogen. The tobacco industry – confronted with the probable ban on smoking in public places and, consequently, a decrease in tobacco sales – established an agency, the Center for Indoor Air Research, which financed hundreds of research projects whose results, presented at many international conferences, questioned the resolution of the Environmental Protection Agency.32

15.5 Choice of research methodology

15.5.1 Ethical background of research methodology

In principle, no ethical constraints should be imposed on the freedom of defining research goals, research planning, formulating hypotheses, studying literature, designing experiments and preparing their infrastructure. However, a mistake made during those actions can be a source of evil which may be identified only during the execution of experiments or the implementation of research results; those actions, therefore, also require some ethical reflection. The choice of research methodology may include, in particular, decisions concerning the possible inclusion of human or animal subjects into experimentation. Such decisions may imply the most serious ethical problems since the fact of seeking the truth does not justify morally problematic experiments. Observation and measurement can be a source of evil if their completion requires the creation of some unnatural, possibly harmful, state in a human or 32 D. Jenson, “Racketeers at the Table: How the Tobacco Industry is Subverting the Public Health Purpose of Tobacco Regulation”, A Law Synopsis by the Tobacco Control Legal Consortium, November 2013, http://www.publichealthlawcenter.org/sites/default/files/resources/tclc-synopsis-racketeers-table-2013.pdf [2018-01-10].

animal organism under study. Such a risk occurs, for example, when a test of a new pharmaceutical includes measuring certain physiological parameters of an organism before and after administration of that pharmaceutical. This also occurs when that organism needs to be exposed to certain radiation to visualise the functioning of one of its organs, e.g. of the brain. The integration of physical sciences with life sciences is a characteristic feature of their development in the twenty-first century. The prefix “bio” in the names of some new scientific fields is a sign of this tendency: we have biochemistry and biophysics, biocybernetics and biomedical engineering. The researchers working in these fields more and more often participate in experiments with the involvement of human beings. Such experiments are also conducted in the field of psychology and sociology, as well as in the research on marketing and ergonomics, where ethical problems are mainly related to the possibility of manipulation of human consciousness. This brief overview shows that a significant percentage of researchers can today participate in experiments with human involvement and, consequently, face moral dilemmas related to such experiments. Finally, it is worth mentioning the non-ethical constraints imposed on the choice of the research methodology by specific conventions and standards, codified or not, which are applicable in a given discipline. These conventions and standards are indispensable for distinguishing grossly unscientific speculations from scientific inquiry, but at the same time they may slow down the progress of research in the areas directly concerned. Example 15.12: The Guide to the Expression of Uncertainty in Measurement33, already cited in Chapters 6 and 10, is a standard that applies to all empirical sciences. It normalises methodologies of numerical evaluation of measurement uncertainty in various experimental situations, differing in the amount of available information that can be used for this purpose. It was introduced in 1993 by the International Organization for Standardization (ISO) in the atmosphere of scepticism expressed by the representatives of various research communities since, for obvious reasons, it did not encompass the whole variety of experimental situations in which measurement uncertainty should be assessed. At the same time, however, it created an opportunity to harmonise the procedures used for assessing the uncertainty of routine measurements, e.g. measurements carried out by engineers in industrial laboratories. It also begot an opportunity to increase the reliability of the uncertainty assessment made by experimenters with limited metrological preparation, especially those active in the areas of research distant from physics and chemistry. Twenty-five years after introduction of the ISO Guide to the Expression of Uncertainty in Measurement, one may note a paradoxical side-effect of the implementation of this standard, viz.: many scientists active in the field of measurement science – instead of developing new methods for assessing the uncertainty of measurement, adequate to the subject of their research – take up equilibristic attempts to adapt to the requirements of this standard.

33 Joint Committee for Guides in Metrology (BIPM+IEC+IFCC+ILAC+ISO+IUPAC+IUPAP+OIML), Evaluation of measurement data – Guide to the expression of uncertainty in measurement.
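To give a concrete flavour of the kind of routine procedure that the Guide discussed in Example 15.12 harmonises – and only as a minimal illustration, restricted to the simplest case it covers – the so-called Type A evaluation of standard uncertainty from n independent repeated observations q_1, . . ., q_n of the same quantity may be summarised as follows:

```latex
% Type A evaluation of standard uncertainty (simplest case covered by the GUM):
% arithmetic mean, experimental standard deviation of a single observation,
% and standard uncertainty of the mean for n independent repeated observations.
\bar{q} = \frac{1}{n}\sum_{k=1}^{n} q_k , \qquad
s(q_k) = \sqrt{\frac{1}{n-1}\sum_{k=1}^{n}\bigl(q_k - \bar{q}\bigr)^{2}} , \qquad
u(\bar{q}) = \frac{s(q_k)}{\sqrt{n}} .
```

The scepticism mentioned in the example concerned, among other things, experimental situations which cannot be reduced to such a simple scheme.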

Much more damaging than rigid standards hindering the development of science can be rigid research paradigms, such as dialectical materialism34 imposed in times of real socialism on the research in sociology and psychology35.

15.5.2 Laboratory notebook

The obligation to keep a primary record of research seems to be uncontroversial regardless of the scientific discipline. It is usually fulfilled by documenting hypotheses and experiments, including preliminary analyses and interpretations of their results, in the form of a laboratory notebook. Its importance is well verbalised by the following statement: “Detailed note-keeping is a prerequisite for, if not a key component of, scientific discovery”36. The laboratory notebook serves as an organisational tool and a memory aid, and can also help in protecting any intellectual property resulting from research works. There are no general requirements concerning the form of a laboratory notebook – practices vary widely among research institutions and among individual laboratories – but there are some common guidelines which can be found both in books – such as the classical handbook Writing the laboratory notebook37, authored by the American chemist Howard M. Kanare and published in 1985 (156 pages) – and in the concise codes of good practices published by various scientific and peri-scientific institutions – such as Guidelines for scientific record keeping. . .38, first published by the National Institutes of Health in 2008 (20 pages). A traditional paper-based laboratory notebook is typically a file of permanently bound and numbered pages. All entries, labelled with dates, should be made with a permanent writing tool as the research work progresses. Those entries should indicate the original place of record of experimental data, as well as its identifier. According to the best tradition, a laboratory notebook should be thought of as a diary of activities that are described in sufficient detail to allow another scientist to replicate them.

34 cf. the definition in the relevant Wikipedia article available at https://en.wikipedia.org/wiki/Dialectical_materialism [2018-03-09]. 35 cf. the definition in the relevant Wikipedia article available at https://en.wikipedia.org/wiki/Real_socialism [2018-03-09]. 36 S. Y. Nussbeck, P. Weil, J. Menzel, B. Marzec, K. Lorberg, B. Schwappach, “The Laboratory Notebook in the 21st Century”, EMBO reports, 2014, Vol. 15, No. 6, pp. 631–634. 37 H. M. Kanare, Writing the Laboratory Notebook, American Chemical Society, Washington D.C. 1985. 38 Guidelines for Scientific Record Keeping in the Intramural Research Program at the NIH, Office of the Director National Institutes of Health, USA 2008, https://oir.nih.gov/sites/default/files/uploads/sourcebook/documents/ethical_conduct/guidelines-scientific_recordkeeping.pdf [2018-03-10].

If a laboratory notebook serves a research group, the entries should be signed by those making them. A laboratory notebook satisfying the above-specified requirements is useful for proving the originality of research findings, in particular – as evidence in legal proceedings. Today, traditional paper-based notebooks are more and more frequently being replaced with electronic laboratory notebooks. The numerous advantages of the latter solution39, although still not fully appreciated by some conservative institutions active in non-technical areas of research, justify the prediction that paper-based record keeping will disappear within a decade. For the time being, multi-access electronic laboratory notebooks are most frequently used for keeping a record of collective work in large research institutions such as research-and-development departments of large pharmaceutical companies. Some research institutions have recently joined the Open Notebook Science movement, which means that they are sharing their laboratory notebooks online in real time, without password protection and without limitations on the use of their data40.

The widespread use of computers in research has produced – in many research institutions, especially academic institutions – a paradoxical effect: the abandonment of the practice of keeping a laboratory notebook, in the past a basic tool for controlling and coordinating the research process, and at the same time a tool for preventing errors and abuses that could be committed by researchers41. The reason for the disappearance of this good practice is a conviction that – since all research documents, including programs and results of computation, are stored in computers – it is unnecessary to keep any extra record in the form of a laboratory notebook. It turns out, however, that this is an illusion. Research reports, especially those published as journal articles or conference papers, very often do not contain important details, and almost never the ideas that have appeared during research works but have not been analysed or developed due to the lack of time. Computer programs often do not contain sufficiently detailed comments, so that – after a year or two – they cannot be fully understood even by their authors. For all these reasons, the results of measurements and observations should be kept in the laboratory notebook in their raw form, so as to make possible the repetition of their statistical or numerical treatment in the future.

39 S. Y. Nussbeck, P. Weil, J. Menzel, B. Marzec, K. Lorberg, B. Schwappach, “The Laboratory Notebook in the 21st Century”, 2014. 40 More information on Open Notebook Science and examples of institutions which joined this movement may be found in Wikipedia at https://en.wikipedia.org/wiki/Open_notebook_science [2018-03-10]. 41 cf. F. L. Macrina, “Scientific Record Keeping”, 2005 (3rd edition), pp. 270–271.

The same applies to pre-processed data, as well as to intermediate and final results of the computations made. In any case, it is important to record in the notebook full information about the methods, algorithms and programs used for data processing. It is also important to provide the obtained results with comments containing the essential elements of the discussion of those results and the conclusions following from this discussion; such comments are already an important element of preparing the results for publication42.

42 cf. ibid., pp. 281–282.
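As a purely illustrative complement to the above recommendations – a minimal sketch assuming a Python-based workflow, with hypothetical file names, field names and values rather than a format prescribed by any of the cited guidelines – raw readings may be stored together with the metadata needed to repeat their statistical treatment, for example as follows:

```python
import json
from datetime import date
from statistics import mean, stdev

# Raw, unprocessed readings (hypothetical values); they are recorded as such,
# so that their statistical treatment can be repeated or revised later.
raw_readings_mV = [12.31, 12.28, 12.35, 12.30, 12.33]

entry = {
    "date": date.today().isoformat(),
    "experimenter": "initials of the person making the entry",
    "instrument": "DMM-07 (hypothetical instrument identifier)",
    "raw_data_file": "raw/2018-06-11_run3.csv (hypothetical location of the original record)",
    "raw_readings_mV": raw_readings_mV,
    "processing": "arithmetic mean and experimental standard deviation",
    "software": "Python 3 standard library, module statistics",
    "mean_mV": mean(raw_readings_mV),
    "standard_deviation_mV": stdev(raw_readings_mV),
    "comments": "preliminary interpretation, open questions, undeveloped ideas",
}

# The entry is appended to a machine-readable notebook file; a date-stamped,
# append-only record plays the role of the permanently bound paper pages.
with open("electronic_notebook.jsonl", "a") as notebook:
    notebook.write(json.dumps(entry) + "\n")
```

Whatever the concrete format, the point illustrated here is the one made above: raw data, the identification of the processing methods and software, and the accompanying comments are kept together, so that the treatment of the data can be repeated in the future.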

16 Ethical aspects of experimentation

The errors or mistakes committed by researchers during the design or the technical and organisational preparation of an experiment may cause damage or harm to the researchers themselves, to the objects of their study (when they are humans or animals) or to the scientific community. In the first case, it is a personal threat to the life or health of the researchers (as discussed in Subsection 16.4.2), or a threat to their further scientific career, which occurs when the errors or mistakes made lead them onto a false research trail or undermine their epistemic authority in the eyes of the scientific community. Experiments with the involvement of humans or animals are subject to strict legal constraints which are addressed in a separate section (Section 16.2).

16.1 Typology of experiment-related misconduct

The requirements of honesty, integrity and diligence obviously apply to experimentation. Failure to meet these requirements may lead to littering the scientific infosphere with false information being a product of deliberately “setting up” an experiment for the expected result, of falsifying the results of an experiment or of its negligent execution. False information is misleading for the members of the scientific community, and it may pose a threat to the health and life of society, e.g. when it concerns medicines or nutrition. Research misconduct associated with experimentation may first of all be attributed to the moral weakness of researchers and to their ignorance concerning the methodology of experimentation or the relevant ethical standards. There are, however, also some systemic conditions, related to the organisation of research at the national and international level, which may influence its incidence. Constant administrative pressure on cutting research costs, on the one hand, and unceasing haste implied by treating science as business, on the other – these are the most common causes of giving up expensive diligence in implementing research procedures; in particular: of shortening the series of experiments at the expense of the statistical significance of their results, and of the unjustified substitution of synthetic data for real-world data. In turn, insufficient theoretical and methodological preparation of the experiment can lead to wasting research resources, mainly time and money. Particularly reprehensible, from an ethical point of view, is the widespread practice of imitating “research rituals”, which consists in performing research operations in a seemingly correct manner, but without understanding their essence, and consequently without any chance for obtaining meaningful results and their productive use1. 1 R. P. Feynman, “Cargo Cult Science”, 1974. https://doi.org/10.1515/9783110584066-016

Among the most frequent methodological deficiencies frustrating research efforts, the following should be mentioned: the lack of criticism with respect to the experimental results, the absence of operations aimed at their verification or validation, and the lack of systematic evaluation of their uncertainty (cf. Section 10.5).

16.2 Experimentation with involvement of humans and animals

16.2.1 Experimentation with involvement of humans

Ethical issues related to conducting experiments on humans belong to the core of research ethics and bioethics, being the subject of many extensive and comprehensive studies, and covered by many handbooks and other specialised sources of academic knowledge; therefore, they will be presented here only in a very general outline. Experiments play a key role in the generation of scientific knowledge, but researchers must not enslave anybody to serve this noble purpose. Conducting experiments with human involvement is, therefore, subject to numerous restrictions resulting from the principles of general ethics, from legal regulations and from international conventions. The Nuremberg Code2 was the first such convention; it was formulated by the Allied Military Tribunal in 1947, just after the Nuremberg trials which made the world aware of the pseudo-medical studies conducted by Nazi “scholars” on tens of thousands of people imprisoned in concentration camps. It contained 10 principles of human experimentation:
1) “The voluntary consent of the human subject is absolutely essential. [. . .]”
2) “The experiment should be such as to yield fruitful results for the good of society, unprocurable by other methods or means of study, and not random and unnecessary in nature.”
3) “The experiment should be so designed and based on the results of animal experimentation and a knowledge of the natural history of the disease [. . .] that the anticipated results will justify the performance of the experiment.”
4) “The experiment should be so conducted as to avoid all unnecessary physical and mental suffering and injury.”
5) “No experiment should be conducted where there is an a priori reason to believe that death or disabling injury will occur; [. . .].”
6) “The degree of risk to be taken should never exceed that determined by the humanitarian importance of the problem to be solved by the experiment.”

2 Its historical background may be found in the Wikipedia article “Nuremberg Code” available at https://en.wikipedia.org/wiki/Nuremberg_Code [2018-03-12].

7) “Proper preparations should be made and adequate facilities provided to protect the experimental subject against even remote possibilities of injury, disability, or death.”
8) “The experiment should be conducted only by scientifically qualified persons. […]”
9) “During the course of the experiment, the human subject should be at liberty to bring the experiment to an end if he has reached the physical or mental state where continuation of the experiment seems to him to be impossible.”
10) “During the course of the experiment the scientist in charge must be prepared to terminate the experiment at any stage, if [. . .] a continuation of the experiment is likely to result in injury, disability, or death to the experimental subject.”3

It is a paradox of twentieth-century history that the Prussian state, playing a key political role in the German Empire before the First World War, was the first in history to introduce (in 1900) a ban on non-medical experiments on humans, as well as the obligation to inform a person participating in a medical experiment about its possible negative consequences and to obtain the consent of that person4. It is also a paradox of history that, during the first three decades of the twentieth century, not Germany but the USA and Sweden were leaders in the implementation of eugenic programmes, including compulsory sterilisation and euthanasia of people with disabilities, especially mentally handicapped persons5. Less known than the criminal experiments of Nazi “scientists” are the experiments with chemical and biological weapons carried out during the Second World War in the countries conquered by Japan. In return for access to the results of those experiments, the US government refrained from prosecuting the Japanese scientists and medical doctors involved in them. In this way, the results of the Japanese experiments became known to the general public at the end of the twentieth century6.

The most important international guidelines for experimentation on humans have been catalogued in the following documents which appeared after The Nuremberg Code:
– Universal Declaration of Human Rights (United Nations General Assembly, 1948)7;
– Declaration of Helsinki (World Medical Association General Assembly, 1964, 1975, 1983, 1989, 1996, 2000, 2002, 2004, 2008, 2013, 2017)8;


– International Covenant on Civil and Political Rights (United Nations General Assembly, 1966)9;
– International Ethical Guidelines for Biomedical Research Involving Human Subjects (Council of International Organizations of Medical Sciences/World Health Association, 1982, 1993, 2009)10;
– International Guidelines for Ethical Review of Epidemiological Studies (Council of International Organizations of Medical Sciences/World Health Association, 1991, 2009)11.
A researcher conducting an experiment with human involvement is to some extent morally and legally responsible for possible harm inflicted on people in the aftermath of their participation in that experiment. The scope of moral responsibility is, as a rule, much broader than the scope of legal responsibility; both depend on the researcher’s role in the project. The compliance of some actions with the corresponding legal regulations does not necessarily mean that they are morally admissible; however, it is a sine qua non condition for conducting experiments with human involvement. The next requirement is related to the compatibility of experimental procedures with the principles and rules of conduct set forth in the ethical codes promulgated by various professional associations12, especially those uniting representatives of medicine- and healthcare-related professions. Neither legal regulations nor codes of professional ethics, however, resolve all complex moral issues that may arise in research with human involvement. Many such issues are related to the concept of informed consent, which means the consent expressed by each participant of an experiment after receiving sufficiently rich information about the potential consequences of his participation in that experiment. More precisely, a participant expressing informed consent:
– should be able to make decisions (not necessarily good ones, but his own) and understand their consequences;
– should be sufficiently informed to decide about participation in the experiment;
– should decide voluntarily (i.e. be free from any kind of coercion).13

9 whose full text is available at https://treaties.un.org/doc/publication/unts/volume%20999/volume-999-i-14668-english.pdf [2018-03-13]. 10 whose full text is available at https://cioms.ch/shop/product/international-ethical-guidelines-for-biomedical-research-involving-human-subjects-2/ [2018-03-13]. 11 whose full text is available at https://cioms.ch/wp-content/uploads/2017/01/International_Ethical_Guidelines_LR.pdf [2018-03-13]. 12 On Being a Scientist: Responsible Conduct in Research, 2009, p. 24. 13 cf. J. Savulescu, T. Hope, “The Ethics of Research”, [in] The Routledge Companion to Ethics (Ed. J. Skorupski), Routledge, London – New York 2010.


It may be completely impossible to meet the above conditions for young children or persons with mental disabilities or severe injuries, while research motivation in such cases is particularly strong. Various attempts have, therefore, been undertaken to modify the rules of giving informed consent so as to make ethically acceptable treatment of such cases possible, as well as of the cases of prisoners and people with insufficient education14. Experiments with human involvement can provoke many other difficult questions and even ethical dilemmas. Here are some of them:
– Is it acceptable to sacrifice the good (dignity or rights) of a person to the good of society (e.g. when developing a new medicine)?
– What is the level of risk that can be accepted in research with human involvement because of the anticipated social benefits of this research? What are the ethically acceptable criteria of weighing risks and benefits?
– To what extent should the structure of social benefits of this research be taken into account in the ethical analysis of experiments with human involvement?
– What are the ethically acceptable criteria for distinguishing scientific research from medical practice?
– To what extent should the principles of experimentation with human involvement be global, and how should they take into account local conditions and traditions?15
The research ethics boards, whose role is to evaluate research projects including experiments with human involvement, usually base their opinions on the answers to questions concerning, first of all, the level of risk and the ways of its mitigation. Here are some examples of such questions:
– What are the risks associated with the planned research? Do the expected benefits of the planned research outcomes justify the level of risk taken?
– Is the selection of research participants fair? How are they to be recruited? Do they have confidentiality guaranteed? Will informed consent be obtained from each of them? How will it be documented?
– Are the research procedures methodologically correct? How will the acquired data be monitored to protect research participants from the risk of the “leakage” of those data?
– Are there any conflicts of interest identified?
The research projects in the domain of medicine, pharmacology and broadly understood healthcare science include, by their nature, experiments with human

14 On Being a Scientist: Responsible Conduct in Research, 2009, p. 25. 15 A. E. Shamoo, D. B. Resnik, Responsible Conduct of Research, 2015 (3rd edition), Chapter 11; On Being a Scientist: Responsible Conduct in Research, 2009, p. 25.


involvement. Besides general ethical problems, which are common to this type of experiment, they engender specific moral issues which require separate consideration.
Example 16.1: Because of their global scope and tremendous social impact, clinical trials – aimed at comparing the efficacy of newly developed drugs with previously used drugs or with a placebo – are subject to ongoing discussion. The moral issues associated with such trials are the following:
– a new drug administered to the experimental group of patients may turn out to be less effective than the reference drug (or placebo) administered to the control group;
– the short-term efficacy of a new drug does not necessarily mean its long-term efficacy;
– the lack of immediate negative side-effects in the test does not exclude their delayed appearance.16

To be statistically significant, the clinical trials aimed at testing new drugs must involve large groups of patients. Since the risk associated with such trials is difficult to manage, there are a number of bioethical issues which are incessantly discussed by the interested research communities; among them are the methods for selecting experimental and control groups, the scope of responsibility of research institutions with respect to those groups, and the criteria for determining the scope and duration of experiments. The authors of a 2010 encyclopaedic article discuss eight such criteria (which they call “strategies”), and do not find among them any that does not arouse ethical or methodological doubts17.

16.2.2 Experimentation with involvement of animals

The use of animals in basic and applied research (mainly in the field of medicine, pharmacology and biomedical engineering) is usually justified by the benefits (such as health or physical and mental comfort) that its results may bring to society and sometimes also to animals. This justification arouses doubts of both an ethical nature (noble goals do not necessarily sanctify the means) and a praxeological nature (this is not necessarily the most effective method to achieve the intended goal). According to the Australian moral philosopher Peter A. D. Singer (*1946), a well-known advocate of animal rights, animals capable of suffering deserve a moral status similar to that of human beings; as a utilitarian ethicist, he believes that experiments with animals are allowed if their results promote the greatest good for the greatest number of beings that deserve moral consideration – humans and animals.18

16 A Guide to Teaching the Ethical Dimensions of Science, 2016; J. Savulescu, T. Hope, “The Ethics of Research”, 2010, p. 785. 17 ibid., pp. 786–788. 18 P. Singer, Animal Liberation: A New Ethics for Our Treatment of Animals, HarperCollins, New York 1975.


Despite the generally shared conviction that inflicting unnecessary suffering on animals is evil, the view prevails that it would be a mistake to completely abandon the use of animals in research aimed at the benefit of humans. According to many philosophers, there are important features of the latter, distinguishing them from animals, which speak against the postulates of Peter A. D. Singer to give a similar moral status to humans and animals, viz.: the ability to think rationally using abstract concepts and to communicate using complex languages; the ability to experience emotions such as compassion and empathy, shame and fear; intellectual and material creativity; as well as the spirituality manifested by the need to search for the meaning of life and by the ability to make morally significant decisions. Since 1975, when the first edition of Peter A. D. Singer’s book Animal Liberation appeared, the activists of the animal rights movement have been fighting for a complete ban on using animals for research purposes. The increase of support for this idea among researchers may, in the long run, slow down the progress of many projects in the field of biomedicine and pharmacology, and consequently endanger the health condition of society19. Despite the diverse views concerning the moral status of animals, the following recommendations regarding the use of animals in research are quite commonly accepted by both practitioners and philosophers of science:
– The use of animals should be discontinued as soon as a research problem can be solved by means of methods that do not require their use; “higher” animals (e.g. mammals) should be replaced with “lower” animals (e.g. reptiles) whenever it is possible to solve the research problem using the latter.
– The number of animals used for research purposes should be reduced when it is possible to solve the research problem using fewer animals.
– Whenever possible, animal suffering should be minimised by improving the research methodology and research tools.
– When formulating the research problem, one should carefully consider the balance of anticipated benefits resulting from expected research outcomes and losses consisting in the suffering of animals used for experimentation.
– The repetition of experiments (involving animals) that have already been reliably carried out by a credible institution should be avoided.
The first three recommendations are referred to in the relevant literature as the “three Rs”, and all five – as the “five Rs”: replacement, reduction, refinement, relevance and redundancy avoidance.20

19 cf. B. A. Fuchs, F. L. Macrina, “Use of Animals in Biomedical Experimentation”, [in] Scientific Integrity (Ed. F. L. Macrina), ASM Press, Washington D.C. 2005 (3rd edition), pp. 127–157. 20 cf. J. Savulescu, T. Hope, “The Ethics of Research”, 2010, p. 782.


16.3 Acquisition and processing of experimental data

16.3.1 Fabrication, falsification and theft of data

Let’s introduce the subject of this subsection with three cases of research misconduct that shook the scientific community and public opinion at the beginning of the twenty-first century.
Example 16.2: In the years 1998–2001, Jan H. Schön, a researcher at Bell Laboratories, published (together with 20 co-authors) a series of 25 articles on a new method of producing superconducting materials having many desirable properties. As established by a special committee appointed by the management of Bell Laboratories, 16 of those articles contained fabricated results of experiments in the form of pictures and graphs. Jan H. Schön was dismissed from work when the committee confirmed his undisputable role in the perpetration of all those misdemeanours.21

Example 16.3: In 2004–2005, a team of researchers from Seoul National University published, in the prestigious American journal Science, two articles on a method of producing stem cells from cloned human embryos (at the blastocyst stage). A special committee, established by the Seoul National University management, proved that fabricated data (including images) were used in both of these articles. The team leader, Hwang Woo-Suk, was accused of embezzling funds for research.22

Example 16.4: Diederik A. Stapel, a Dutch professor of social psychology at the University of Groningen and Tilburg University in the Netherlands, fabricated and falsified data for at least 55 publications, including the article “Coping with Chaos: How Disordered Contexts Promote Stereotyping and Discrimination” which appeared in Science23. In 2011, the Rector of Tilburg University appointed a committee for the investigation of “the extent and nature of the breach of scientific integrity committed by Mr D. A. Stapel”. The report of this committee concluded that Diederik A. Stapel acted according to the following pattern: he designed a complete experiment at the level of hypotheses, methods and questionnaires, and then pretended that he would run the experiments at schools to which only he had access. Instead of doing so, he made up the data and sent them to his collaborators for further processing and interpretation.24

The withdrawal of various industrial products from the market, reported more and more frequently, indicates that instances of fabrication and falsification of data occur not

21 Report of the investigation committee on the possibility of scientific misconduct in the work of Hendrik Schön and coauthors, Bell Labs, September 2002, https://media-bell-labs-com.s3.amazonaws.com/pages/20170403_1709/misconduct-revew-report-lucent.pdf [2018-06-15]. 22 D. Cyranoski, “Verdict: Hwang’s Human Stem Cells Were All Fakes”, Nature, 2006, Vol. 439, p. 122. 23 D. A. Stapel, S. Lindenberg, “Coping with Chaos: How Disordered Contexts Promote Stereotyping and Discrimination”, Science, 2011, Vol. 332, No. 6026, pp. 251–253. 24 Flawed Science. The fraudulent research practices of social psychologist Diederik Stapel, Levelt Committee, Noort Committee & Drenth Committee, Tilburg University, November 2012.


only in research institutions but also in the research-and-development departments of manufacturing companies25. Fabrication and falsification of data, together with plagiarism, are the most serious offences against research ethics. As already mentioned in Chapter 15, the falsification of research results occurs not only when certain data (results of measurements or calculations) are omitted or modified in research reports or scientific publications but also when research materials, equipment or processes are manipulated. What distinguishes an infringement from a negligence mistake is the intention to deceive. Already in 1830, Charles Babbage distinguished four types of reprehensible practices related to data manipulation:
– fabrication of data in a way that ensures confirmation of a research hypothesis (hoaxing data);
– fabrication of data by means of real-data imitation (forging data);
– elimination of uncomfortable data (trimming data);
– designing the experiment in a way that provides the desired result (cooking data).
The latter practice may consist in refraining from measuring important quantities or parameters, or in limiting the acquisition of data to short series which are not statistically representative26. As far as the elimination of uncomfortable data is concerned, the following example, probably the one most frequently referred to in the relevant literature, well illustrates the essence of that problem.
Example 16.5: In 1910, Robert A. Millikan, the American physicist, developed a method for the estimation of the electron charge – a method based on measuring the velocity of oil drops falling in an electric field. Since the speed of a drop depended on the charge riding on it, he was able to show that the charges on the drops were multiples of the same number, viz. the charge of the electron. In an article on this subject, which he published three years later, he wrote: “[. . .] this is not a selected group of drops but [. . .] all of the drops experimented upon during 60 consecutive days”27. The analysis of his laboratory notebooks reveals, however, that he used in the article only selected results of the experiment. Was he a liar or a genius of intuition? This question arouses controversy even after a century, especially in the context of the Nobel Prize he won in 1923.

In any measurement procedure, outliers may occur; the number of their sources grows with the complexity of that procedure. How can they be distinguished from “regular” errors? The answer to this question is important because one should remove

25 cf. D. B. Resnik, The Price of Truth – How money affects the norms of science, 2007, p. 87. 26 ibid., pp. 85–86. 27 D. Goodstein, “In the Case of Robert Andrews Millikan”, American Scientist, January-February 2001, pp. 54–60, www.its.caltech.edu/~dg/MillikanII.pdf [2011-03-04].


defective data and preserve those that cannot reasonably be considered as such. Numerous recipes for the detection of outliers have been developed over the last century28. Here, the simplest among them will be briefly outlined.
Example 16.6: An elementary method for the detection of outliers, dedicated to a series of scalar measurements, consists in the iterative estimation of the expected value and standard deviation of this series and the deletion of data which differ from the expected value by more than three standard deviations. The elimination of data ends when all non-eliminated data differ from the expected value by less than three standard deviations. Besides such general-purpose methods for the elimination of data corrupted with excessive errors, specialised methods have been developed which are dedicated to specific applications29.
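For illustration, the elementary rule of Example 16.6 may be written down as a few lines of Python; the function name and the numerical data below are invented for the purpose of this sketch and do not come from any real experiment:

import statistics

def remove_outliers_3sigma(data):
    # Iteratively discards values lying more than three standard deviations
    # away from the mean of the currently retained data (the rule of Example 16.6).
    retained = list(data)
    while len(retained) > 2:
        mean = statistics.mean(retained)
        sd = statistics.stdev(retained)
        kept = [x for x in retained if abs(x - mean) <= 3 * sd]
        if len(kept) == len(retained):  # no value exceeds the three-sigma limit
            break
        retained = kept
    return retained

# 19 regular readings close to 10.0 and one gross error (20.0);
# only the gross error is eliminated, the regular readings are preserved.
readings = [10.0 + 0.01 * k for k in range(-9, 10)] + [20.0]
print(remove_outliers_3sigma(readings))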

The theft of data is different from the theft of a material object in that the owner does not cease to dispose of the data, even though they have been stolen; thus, he may not even notice that it happened. Storing data in an electronic form not only facilitates their theft, but also enables the use of new sophisticated methods for protecting them against theft. Data theft can sometimes take very subtle (covert or ambiguous) forms. This is illustrated with another historical example, still vividly discussed by historians of science despite the passage of time.
Example 16.7: In 1962, the researchers James D. Watson (*1928), Francis H. C. Crick (1916–2004) and Maurice H. F. Wilkins (1916–2004) received the Nobel Prize in Physiology or Medicine for discovering, in 1953, the structure of DNA. The key role in their research, which led to this discovery, was played by the X-ray images of DNA crystals, recorded by the British crystallographer Rosalind E. Franklin (1920–1958). Her colleague Maurice H. F. Wilkins shared them with James D. Watson and Francis H. C. Crick without her consent. The question whether he stole the data is still the subject of controversy30.

28 e.g. J. D. Jobson, Applied Multivariate Data Analysis, Vol. I: Regression and Experimental Design, Springer, New York 1991, Section 2.3 and Subsection 4.4.2; G. E. D’Errico, “Testing for Outliers based on Bayes Rule”, Proc. XIX IMEKO World Congress Fundamental and Applied Metrology, pp. 2368–2370. 29 e.g. N. R. Draper, J. A. John, “Influential Observations and Outliers in Regression”, Technometrics, 1981, Vol. 23, No. 1, pp. 21–26; R. Lleti, E. Meléndez, M. C. Ortiz, L. A. Sarabia, M. S. Sánchez, “Outliers in Partial Least Squares Regression – Application to Calibration of Wine Grade with Mean Infrared Data”, Analytica Chimica Acta, 2005, No. 544, pp. 60–70; P. Filzmoser, K. Hron, C. Reimann, “Principal Component Analysis for Compositional Data with Outliers”, Environmetrics, 2009, Vol. 20, No. 6, pp. 621–632. 30 M. Cobb, “Sexism in Science: Did Watson and Crick Really Steal Rosalind Franklin’s Data?”, The Guardian, June 23, 2015, https://www.theguardian.com/science/2015/jun/23/sexism-in-sciencedid-watson-and-crick-really-steal-rosalind-franklins-data [2018-03-16].


16.3.2 Intentional misinterpretation of data

Fabrication, falsification and theft of data are indeed the most serious, but not the most common, offences against research ethics. They are dangerous, but relatively easily detectable and actually often detected. Sociological studies, based on extensive surveys in scientific communities31, have shown that the incidence of some ethically dubious behaviours of lesser importance is much higher; these are, for example, the following malpractices: the use of inadequate statistical methods for data processing, neglecting the procedures for data verification, neglecting the assessment of data uncertainty, and concealing information about this uncertainty. It should be noted that the researcher is responsible for the procedures used for acquisition and processing of measurement data, even if he has used procedures borrowed from software libraries such as MATLAB toolboxes. The key to correct interpretation of measurement data is the choice of the statistical parameters by means of which those data are presented in a synthetic way.
Example 16.8: The median will be more informative than the expected value in a comparison of the affluence of the inhabitants of two countries, since the latter can be very high due to the income of a small group of billionaires.
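The point of Example 16.8 can be checked with a few lines of Python; the income figures below are invented solely for illustration:

import statistics

# Hypothetical annual incomes (in thousands): nine typical residents and one billionaire.
incomes = [22, 25, 28, 30, 31, 33, 35, 38, 40, 1_000_000]

print(statistics.mean(incomes))    # 100028.2 – dominated by the single extreme income
print(statistics.median(incomes))  # 32.0 – a much better description of the typical resident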

Many misunderstandings and misinterpretations of research results have their origin in the pluralism of statistical methods and the multiplicity of degrees of freedom in their application. “There are three kinds of lies: lies, damned lies, and statistics.” This sentence, attributed to the British Prime Minister Benjamin Disraeli (1804–1881), grandiloquently refers to the misuse and abuse of statistical methods and statistical data, aimed at bolstering weak argumentation in public debates. Unfortunately, this problem also occurs in scientific discussions. Paradoxically, its importance seems to grow in the times of broad accessibility of abundant libraries of statistical software. The use of statistical methods in empirical sciences may be traced back to the second half of the nineteenth century, when researchers started to realise that these sciences are not able to provide certain outcomes, and that therefore they should focus on mitigation and evaluation of the uncertainty of research results rather than on futile efforts to get rid of it. The diversity of the sources of uncertainty and the multiplicity of the methods for its evaluation (cf. Chapter 10) are the main reasons for some freedom in the application of statistical methods. Contrary to popular belief, these methods are not tools for proving the veracity of research results, but rather tools for disqualifying them when their uncertainty is too high for a
31 e.g. D. Fanelli, “How Many Scientists Fabricate and Falsify Research? A Systematic Review and Meta-Analysis of Survey Data”, PLoS ONE, 2009, Vol. 4, No. 5, p. e5738, www.plosone.org [2011-03-07]; S. Godecharle, S. Fieuws, B. Nemery, K. Dierickx, “Scientists Still Behaving Badly? A Survey Within Industry and Universities”, October 2, 2017, pp. 1–21.


given cognitive or practical purpose. However, these methods are quite often used in such a way as to suggest the credibility of certain technoscientific findings.
Example 16.9: The authority of scholars, especially experts in the field of biomedicine, pharmacy and chemistry, is sometimes used in advertising, most often in TV spots. A trustworthy commentator in a lab coat refers to the results of “scientific” research on a product – such as toothpaste, washing powder or a painkiller – using precise and convincing numbers characterising the effectiveness of the product, e.g. 89.7%. These numbers usually come from statistical surveys carried out by a research laboratory at the request of the manufacturer of the advertised product. They are presented without any definitions, without any comment on the methodology of obtaining them, without estimates of their uncertainty and without any explanation of their practical meaning. The behaviour of scholars who for profit agree to participate in such practices is very problematic from an ethical point of view, not only if they appear in person on the TV screen, but also if they sell the results of their studies to an advertiser or manufacturer of the advertised product, knowing the intention of the latter.

The significance of conclusions, derived from statistical analysis of experimental data, depends on the number of those data and on their uncertainty. Shortage of data or their high uncertainty can lead to very uncertain or invalid conclusions, while acquisition of very precise data in abundance can unnecessarily increase the costs of experimentation. A certain rule of thumb, quite commonly accepted, says that the probability of false conclusions resulting from the statistical analysis of raw research results should not exceed 5%, and the number and precision of acquired data should be adjusted to this requirement32. New data do not increase our understanding of the phenomenon or problem under investigation if they are not reliable enough; hence the need to thoroughly check them using both general-purpose methods and discipline-specific methods. The first category of those methods includes the repetition of the experiment being a source of data; the measurement of the same quantity by means of independently calibrated instruments of the same type; as well as the analysis of data quality by means of such indicators as the checksum for binary data33. This is one more reason for the careful collection and long-term storing of experimental data, enabling their reliable comparison with new data. This is a researcher’s duty of the same nature as taking care of the equipment and research material used for the experiment (software, measuring instruments, chemicals, bacterial and virus cultures, tissue samples, etc.), justified by the necessity to repeat, whenever necessary, the whole experimental cycle.
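For illustration, one of the general-purpose indicators mentioned above – a checksum of a binary data file – may be computed, stored together with the raw data and recomputed whenever the data are reused; the following Python sketch (with an invented file name) shows one possible way of doing it:

import hashlib

def file_checksum(path):
    # Returns the SHA-256 checksum of a binary file; a mismatch between the checksum
    # recorded at acquisition time and one computed later reveals accidental corruption
    # or undocumented modification of the stored data.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical usage:
# print(file_checksum("raw_measurements_2018-03-17.dat"))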

32 A Guide to Teaching the Ethical Dimensions of Science, 2016. 33 An overview may be found in the Wikipedia article “Checksum” available at https://en.wikipedia.org/wiki/Checksum [2018-03-17].


The following recommendations are aimed at mitigation of experimental uncertainty and its propagation in the scientific infosphere:
– The uncertainty of the result of a quantitative experiment should be assessed and disclosed even if its disclosure may reveal technical or medical risks.
– The uncertainty of the result of a qualitative experiment should be characterised by providing an exhaustive specification of factors and circumstances that could have influenced the outcome of the experiment.
– In each case, all the experimental work should be recorded in a laboratory notebook (as discussed in Subsection 15.5.1).
– The criteria and procedures used for validation and interpretation of experimental data should be consistent with the standards applicable in a given area of research.
Interpretation of data is an operation largely dependent on the interpreter’s beliefs regarding both the research subject-matter and the research methodology. Interpretation of data by the representatives of different schools of thinking may yield different conclusions; those conclusions may be, in turn, used by different “camps” in public debates over hot topics of social and political importance, such as access to firearms, the use of the intelligence quotient in education or the origin of global warming. A problem of moral significance appears when the interpreter of data succumbs to external pressure, exerted by the research sponsor or research administration, and shapes conclusions so as to meet their requirements. This is particularly common in pharmacology, where researchers are persuaded to exaggerate the advantages of tested drugs – most frequently by their manufacturers.34
The traditional values of science include openness, which means, in particular, the readiness to share the data used in a study with the researchers interested in repeating this study (cf. Subsection 13.1.3). On the one hand, such a practice may reinforce reliable criticism in science and trust among the members of the scientific community; on the other, it can foster the promotion of research outcomes and attract public support for science. Despite all those benefits, the practice of sharing data is today less common than 30 or 50 years ago, mainly because of:
– the expansion of the scope of applied research whose results may be subject to patenting;
– the multiple use of the same data for unfair duplication of publications (which is possible only when these data remain undisclosed);
– the protection of one’s own reputation in the case when data interpretation might arouse substance-related or ethical doubts;

34 A. E. Shamoo, D. B. Resnik, Responsible Conduct of Research, 2015 (3rd edition), pp. 72–73.


– the protection of confidential data whose disclosure could affect personal rights, business interests or national security;
– the belief that, due to the significant costs of obtaining data, they should remain the exclusive property of the individuals or institutions which acquired them or covered the costs of their acquisition.35
Today, access to data generated by state-financed research institutions is legally guaranteed in the majority of countries36. The first act guaranteeing freedom of the press was promulgated by the Swedish parliament in 1766. In the USA, the availability of data resulting from federally funded research projects has been legally guaranteed by the Freedom of Information Act promulgated in 196637.

16.4 Technical infrastructure of experimentation

16.4.1 Engineering aspects of experimentation

The technical and logistic preparation of experiments is an engineering task whose complexity, varying from discipline to discipline, is on the whole growing rapidly in the age of technoscience. This is noticeable even in sociology and psychology, where surveys conducted over the internet may require extensive use of information systems and complex software that supports data acquisition, processing and protection against theft. Consequently, scientific laboratories increasingly employ not only technicians but also engineers with high qualifications in electrical and computer engineering. What distinguishes their work from the work of their colleagues employed in industry is the singularity of the technical solutions they design and implement: it is rather unusual that a system of data acquisition, developed in research laboratory A, is used in research laboratory B. It is not possible, therefore, to test multiple copies of such a system in parallel; thus, ensuring its sufficient reliability requires an extended time of testing. Researchers are reluctant to accept this requirement because they themselves are subject to the pressure of deadlines and efficiency requirements. Under such circumstances, it is very difficult to avoid mistakes

35 ibid., Chapter 3. 36 An overview of legislation in various countries may be found in the Wikipedia article “Freedom of Information Laws by Country” available at https://en.wikipedia.org/wiki/Freedom_of_information_laws_by_country [2018-03-17]. 37 More information may be found in the Wikipedia article “Freedom of Information Act (United States)” available at https://en.wikipedia.org/wiki/Freedom_of_Information_Act_(United_States) [2018-03-17].


that might later call into question the value of the acquired experimental data, or even lead to a violation of the basic principle of engineering ethics: primum non nocere. Because of the insufficient testing of the system, the data from a medical experiment may be intercepted by unauthorised persons, the leakage of radiation from radioactive isotopes may endanger the health of researchers, or the artefacts created by the measuring apparatus may be considered as empirical facts and reorient the researchers’ attention in the wrong direction. It is believed that engineering ethics is a more mature example of applied ethics than research ethics, due to the almost immediate availability of information about the effectiveness of technical activities and their direct impact on the quality of individual and social life. For the same reason, unlike in science, professional engineering practices outside of the area of formally confirmed competences are considered unethical (in many areas they are also subject to legal sanctions). Engineering ethics is covered by abundant literature including textbooks38 and codes issued by professional associations39. The most typical dilemmas of engineering ethics are implied by the need to simultaneously minimise risks and broadly understood costs.

Example 16.10: Testing the electromechanical devices installed in a new aircraft is an extremely expensive undertaking. The first months of testing bring, as a rule, very spectacular results: the most obvious design and fabrication errors are detected and removed; consequently, the so-called mean time between failures of the devices is significantly increased. Each consecutive month of testing does not cost less than the first, but it brings less and less significant outcomes. When can testing be considered complete? In practice, the answer to this question is determined by the industry standards or by the limitation of the resources allocated for the aircraft prototyping. However, an engineer with a more sensitive conscience can never be sure whether he has done everything in his capacity to prevent the possible death of passengers.

The ethical analysis of an engineering activity must take into account the risk implied by the uncertainty of predicting its effects, especially long-term effects. This requirement may considerably impede the resolution of cost–risk dilemmas, mainly due to the complexity, dynamics and “opacity” of technological systems and processes of today. The risk resulting from uncertainty in predicting the effects of engineering activities may be mitigated by the following:

38 e.g. G. D. Baura, Engineering Ethics: An Industrial Perspective, Elsevier Academic Press, Burlington – San Diego – London 2006; M. W. Martin, R. Schinzinger, Introduction to Engineering Ethics, 2010 (2nd edition). 39 e.g. Code of Ethics for Engineers, National Society of Professional Engineers (USA), https://www.nspe.org/resources/ethics/code-ethics [2018-06-15]; FEANI Code of Conduct, European Federation of National Engineering Associations, http://ethics.iit.edu/ecodes/node/6410 [2018-06-15].


– enriching knowledge about the technical objects being the subject of these activities and the ways of their exploitation;
– increasing the functional redundancy of these objects and the multiplication of installed safeguards;
– extending the scope and time of testing;
– refraining from the use of tested objects in case of serious doubts.
The available resources (money, time, research equipment, qualifications of laboratory staff, etc.), more often than anybody’s ignorance or bad will, determine the limits of the applicability of the above methods of risk mitigation. The most typical violations of engineering ethics are the following: the plagiarism of technical solutions, pushing forward inferior solutions (motivated by personal benefits) and falsification of technical documentation or results of testing. The so-called reverse engineering is an advanced form of plagiarism which consists in copying a technical solution (without the consent of the holder of rights to this solution), including not only the idea of this solution but also the details of its implementation, often without understanding the idea itself40.
Example 16.11: At the end of the 1970s, in some of the “communist” countries, attempts were made to copy medium-scale integrated circuits by exposing successive layers of a semiconductor structure – in order to design a technological process that would enable their production at a cost much lower than that of purchasing original circuits. This assumption was realistic only because, for political reasons, the exchange rate of the currencies of those countries into US dollars was maintained at an extremely low level.

The most important causes of morally questionable behaviours of engineers employed in research laboratories are similar to the causes of research misconduct of other members of research teams. Perhaps some slight differences may be noted: engineers more often struggle with management pressure to reduce the costs of test equipment. A certain impact may also be attributed to the weakening of the ethical standards historically formed in traditional engineering branches, caused by the blurring of the boundaries between these branches.

16.4.2 Safety aspects of experimentation

An important aspect of experimentation, rarely addressed in the textbooks of research ethics, is the safety of researchers. The following examples are to show that it may be associated with nontrivial problems of moral significance.

40 B. Hayes, “Reverse Engineering”, American Scientist, 2006, March–April 2006, pp. 107–111.


Example 16.12: An explosion, which took place in 2006 at the École nationale supérieure de chimie de Mulhouse (France), was initiated by the leakage of an ethylene bottle; it took the life of an eminent researcher and caused material damage of more than 100 million euros41.

Example 16.13: The 1979 anthrax outbreak in Sverdlovsk (Soviet Union), when anthrax spores were accidentally released from a secret military research laboratory, created panic and cost lives42.

In principle, a measuring system should not significantly affect the properties and behaviour of the system under measurement; therefore, the measurement itself rarely poses an immediate threat to the researcher (a measurement based on the use of nuclear or X-ray radiation is an exception). More often, safety issues are associated with the conditions required by measurements or other research-related activities; here are some examples:
– proper use of protective clothing and protective equipment;
– safe handling of laboratory materials (including their transport and utilisation) and laboratory equipment;
– design of laboratories and their equipment, taking into account ergonomics and work safety;
– safety-related training and re-training of laboratory personnel.43
Serious threats to researchers can be engendered by experiments involving plants and animals. Bacteria and viruses dangerous to humans and animals have got out of laboratories not only in the Soviet Union, and not only out of laboratories involved in the development of biological weapons. The lack of diligence in completing procedures related to technical safety is an infringement subject to severe ethical judgment.

41 P. Vignal, “Un professeur de chimie jugé pour explosion à Mulhouse”, l’express.fr, No. 2010.09.15, http://www.lexpress.fr/actualites/2/un-professeur-de-chimie-juge-pour-explosion-a-mulhouse_919637.html [2011-03-03]. 42 J. Guillemin, “Scientists and the History of Biological Weapons”, 2006. 43 cf. On Being a Scientist: Responsible Conduct in Research, 2009, p. 28.

17 Ethical aspects of information processing

Research activity is aimed at obtaining answers to questions which are important for cognitive or utilitarian reasons, and therefore at obtaining new information which has the potential to become scientific knowledge. It seems, therefore, to be logically justified to treat a research process as an information process, i.e. a series of operations which consist in the processing of information. The information aspect is particularly pronounced in research activities such as conducting scientific discussions, publishing research results, reviewing scientific articles and applying for research funding. Ethical issues related to these activities will be discussed in the consecutive sections of this chapter.

17.1 Preliminary considerations

There is no agreement among philosophers and information experts about the definition of the concept of information, or even about the possibility of providing a satisfactory definition of it1. There are good reasons to consider this concept as primary (next to mass and energy), and to interpret it through the contexts of its use, especially by means of its logical relations to other basic concepts, in particular – to the concepts of data and knowledge.
Example 17.1: To correctly interpret the data contained in the first row of Table 17.1, we have to know that these are the ASCII codes of alphanumeric characters in hexadecimal notation2. We need, moreover, to know that the sequence of those characters, “il papa mori ieri”3, is to be interpreted as a meaningful statement in Italian. The information contained in this statement will become our knowledge if we understand its meaning, and if we can confirm its veracity.
Table 17.1: The ASCII codes and the corresponding characters.



CODE:      69 6C 20 70 61 70 61 20 6D 6F 72 69 20 69 65 72 69
CHARACTER: i  l     p  a  p  a     m  o  r  i     i  e  r  i
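For illustration, the decoding described in Example 17.1 may be reproduced with a few lines of Python:

codes = "69 6C 20 70 61 70 61 20 6D 6F 72 69 20 69 65 72 69".split()

# Convert each hexadecimal ASCII code into the corresponding character
text = "".join(chr(int(code, 16)) for code in codes)
print(text)   # -> il papa mori ieri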

The epistemological interpretation of the notion of knowledge, provided in Subsection 2.4.2, is not unconditionally accepted in such scientific disciplines

1 L. Floridi, “Semantic Conceptions of Information”, [in] The Stanford Encyclopedia of Philosophy (Ed. E. N. Zalta), Spring 2017 Edition, https://plato.stanford.edu/archives/spr2017/entries/information-semantic/ [2018-06-15]. 2 The table of all ASCII codes may be found in the Wikipedia article “ASCII” available at https://en.wikipedia.org/wiki/ASCII [2018-06-14]. 3 which means in English: “the father died yesterday”.


as linguistics, cognitive science, psychology or machine learning. An even greater diversity of views concerns the relationship between knowledge and wisdom4. According to a traditional view, information is not yet knowledge, and knowledge is not yet wisdom. A piece of new information becomes knowledge when it is understood and, in some way, integrated into the body of previously acquired knowledge5. Scientific information becomes scientific knowledge only when its veracity is in some way confirmed (cf. Section 9.3) and included in the system of scientific knowledge accepted at a given stage of science development. Thus, the process of transforming scientific information into knowledge includes the efforts and contributions of various representatives of a technoscientific community: the achievements of original thinkers, discoverers and inventors are subject to the assessment of competitors and reviewers, and are then published – first in original research articles, next in review articles and textbooks. In some scientific milieus, there is a widespread conviction about the superiority of scientific achievements which consist in the generation of new information over achievements which consist in its transformation into knowledge. Already, more than half a century ago, the New Zealand-born physicist John M. Ziman (1925–2005) indicated the negative consequences of such preferences by pointing out that without this transformation science is unable to fulfil its social mission6. The research process consists in the coordinated processing of several streams of information: a stream of scientific information, a stream of technical information, a stream of financial information, a stream of logistic information and a stream of formal and legal information. The general pattern of coordination is schematically presented in Figure 17.1. Ethical problems specific to technoscientific research are most pronounced in the context of the first two streams, but their systematic analysis should be carried out in the context of the other streams. These problems can be divided into three categories resulting from the structure of the information process:
– the problems related to obtaining input information, such as theft of information or gaining information in a way that violates personal rights (dignity, health, life);
– the problems related to the processing of information, such as fabrication and falsification of intermediate information, or the lack of due diligence in the implementation of information processing procedures;

4 S. Ryan, “Wisdom”, [in] The Stanford Encyclopedia of Philosophy (Ed. E. N. Zalta), Summer 2018 Edition, https://plato.stanford.edu/archives/sum2018/entries/wisdom/ [2018-08-02]. 5 cf. M. J. Bates, “Information and Knowledge: An Evolutionary Framework for Information Science”, Information Research, 2005, Vol. 10, No. 4, http://informationr.net/ir/10-4/paper239.html [2017-05-11]. 6 J. M. Ziman, “Information, Communication, Knowledge”, Nature, 1969, Vol. 224, No. 5220, pp. 318–324.


[Figure 17.1: Structure of an information process – input information passes through N consecutive stages of information processing, each stage exchanging information with other streams, and yields output information. Source: R. Z. Morawski, “Ethical Aspects of Empirical Research”, Proc. 5th Congress of Metrology (Łódź, Poland, September 6–8, 2010), pp. 28–33.]

– the problems related to the transfer of output information, such as abuses related to the presentation of research results at scientific conferences, to the publication of research results in scientific journals or to the preparation of research results for implementation.

17.2 Scientific discussion

The discussion is an integral component of the research process, serving several important goals of technoscience: elimination of errors and mistakes, stimulation of creativity, objectivisation of opinions and judgments, and formation of beliefs. For these reasons, a working discussion is the “daily bread” of every research team. In the internet age, when traditional seminars and conferences have lost their significance


as means for quick communication of information, the discussion has become the main motivation for organising them. The discussion is, moreover, a basic form of work of scientific councils. Finally, the discussion on the pages of technoscientific journals is one of the most important aspects of authentic scientific life. For further deliberations over the subject of discussion, it will be useful to distinguish rational discussion from rhetorical discussion. The purpose of the first of them is to jointly solve a problem using arguments satisfying certain logical and ethical standards, while the purpose of the second is to gain an advantage over the opponent using all available means, regardless of their logical and ethical status. The adjective “rhetorical” is derived from the noun “rhetoric”7 which denotes the art of efficient use of speech in order to influence people, as well as related knowledge and practice. The rhetoric developed in ancient Greece where oratory skills played an important role in the public life of democratic city-states. Since the effectiveness of persuasion was the main aim of ancient rhetoric, it focused on the art of finding arguments and elegant use of language, as well as on psychological aspects of persuasion.

17.2.1 Principles of rational discussion Since antiquity, various attempts have been made to formulate or even codify the principles of rational discussion. A new wave of such attempts appeared with the advent of the internet: almost every organiser or moderator of an internet forum is trying to propose a specific model of a more-less rational exchange of views and arguments. Both traditional and modern models of such exchange refer to some common elements, although they differ considerably in the manner of their description. The following six principles are presented and discussed in this subsection: the principle of equal rights, the principle of responsibility, the principle of consensus, the principle of honesty, the principle of relevance, and the principle of benevolence. In terms of substance, this set of principles largely coincides with the list of 13 principles, compiled by the American philosopher T. Edward Damer (*1935)8. The principle of equal rights forbids a disputer: – to prevent the opponent from presenting and justifying his stance; – to use biased language for presenting and justifying his own stance; – to discredit the opponent by questioning his competence, credibility or right to express opinions about the discussed issue;

7 from Greek rhetor = “speaker” or “orator”. 8 T. E. Damer, Attacking Faulty Reasoning: A Practical Guide to Fallacy-Free Arguments, Wadsworth (Thomson Learning) Pub., Belmont 2001.



– to condition the substance-related conclusions on the formal status of the opponent (as in a historical case of a heretic shoemaker who was first of all recognised as “a heretic”, and a heretic aristocrat who was first of all recognised as “an aristocrat”).
The principle of responsibility obliges a disputer:
– to be prepared to justify his opinion or stance;
– to refrain from charging the opponent with the responsibility for this justification (e.g. by saying “nobody will be able to convince me that this is true . . .” or “if you disagree, please prove that this is not true . . .”).
The principle of consensus obliges the disputer:
– to reveal all the premises of argumentation, and to use exclusively the premises accepted by the opponent;
– to refrain from referring (especially in a camouflaged way) to values or authorities which are not accepted by the opponent.
The principle of honesty obliges the disputer:
– to consider partial conclusions accepted at a certain stage of a discussion as being valid premises for the next stages;
– to consider only the actual views of the opponent, i.e. his explicit statements and their logical consequences;
– to refrain from distorting the opponent’s statements (opinions), e.g. by unjustified generalisation of specific comments or assertions of the opponent;
– to refer exclusively to arguments which are logically correct (or may become logically correct if supplemented with lacking premises);
– to refrain from unclear or ambiguous statements, and to thoroughly interpret statements which provoke doubts.

348

17 Ethical aspects of information processing

17.2.2 Fallacious argumentation The principles of rational discussion, presented in the previous subsection, refer to numerous formal and substance-related requirements which belong to a vast body of knowledge about argumentation, accumulated over centuries and systematically presented in the numerous handbooks of argumentation9. In order to properly argue in a technoscientific discussion, one has to be able to recognise arguments which do not meet those requirements, the so-called fallacious arguments, and to avoid them. Their profound analysis may be found in specialised books such as Fallacies and Argument Appraisal10, authored by the Canadian philosopher Christopher W. Tindale (*1953). Having in mind newcomers to the field of logical reasoning, the authors of An Illustrated Book of Bad Arguments11 selected a set of 19 common errors in reasoning and visualised them using rememberable illustrations that are supplemented with lots of examples. The lists of fallacious arguments, available on the internet, contain more than hundred items, e.g. the Wikipedia list contains 134 items classified into six categories12. The subset of fallacious arguments presented here contains only those which most frequently occur in technoscientific discussions. A mistake which probably most often occurs during conferences and seminars consists in referring to an anonymous authority. It should be emphasised that referring to an epistemic authority has a long tradition in science, and in the age of advanced specialisation in science is unavoidable: nobody is able to be an expert outside a narrow area of research, everybody is constrained to use the expertise of other specialists, and to trust them in a certain sense. However, one should do it in a rational way, i.e. one should base the trust on certain premises that might be reasonably justified. Such premises include grades and academic titles of an “authority”, his reputation in the scientific community, the reputation of a research institution he is affiliated to and the reputation of the journals in which his articles are published. It should be noted that – on the one hand – in the times of ubiquitous marketing, the reputation can be a product of skilful promotion and self-promotion, and that – on the other – it is extremely difficult to define and formalise the criteria of reputation. However, regardless of the uncertainty associated with 9 e.g. in: F. H. van Eemeren, E. C. W. Krabbe, A. F. S. Henkemans, J. H. M. Wagemans, Handbook of Argumentation Theory, Springer, Dordrecht 2014; D. Walton, Methods of Argumentation, Cambridge University Press, New York 2013; P. Besnard, A. Hunter, Elements of Argumentation, MIT Press, Cambridge (USA) – London (UK) 2008. 10 C. W. Tindale, Fallacies and Argument Appraisal, Cambridge University Press, Cambridge (UK) 2007. 11 A. Almossawi, A. Giraldo, An Illustrated Book of Bad Arguments, JasperCollins Pub., New York 2014. 12 cf. the Wikipedia article “List of Fallacies” available at https://en.wikipedia.org/wiki/List_of_fal lacies [2018-03-20].

17.2 Scientific discussion

349

the use of these criteria, such statements as “experts agree that . . .” or “specialists in the field . . . argue that . . .”, without mentioning the names or works of those “experts” or “specialists” should be avoided in a scientific discussion. Whenever possible, one should provide the name(s) of the author(s) of the statement referred to or the title of the article in which it can be found along with the corresponding justification. This is particularly important in the case of printed statements: the specification of an “authority” should enable the reader to unambiguously identify him. The paradigm of causality is the basis for the scientific explanation of the phenomena under study (cf. Chapter 7). Special care is, therefore, desirable in formulating statements concerning causal relationships. Here are three of the most common logical errors associated with such statements: – recognising a causal relationship where there is only a succession in time (the fallacy called in Latin post hoc, ergo propter hoc); – recognising a causal relationship where there is only co-occurrence (the fallacy called in Latin cum hoc, ergo propter hoc); – recognising the causative role of a single cause where there are many causes.

Example 17.2: Here are two examples of the error post hoc, ergo propter hoc: – the Roman Empire collapsed after the adoption of Christianity, and therefore Christianity was the cause of its fall; – Y lost his wallet just after the black cat crossed his way; this cat was the cause of the Y’s loss.

The error cum hoc, ergo propter hoc most often occurs when two phenomena (events, characteristics, etc.), A and B, are correlated because they have a common cause C. Example 17.3: A frequently observed correlation between longevity (A) and rare visiting a physician (B) does not necessarily mean that people who rarely seek medical advice live longer, but that those in good health (C) rarely visit a physician and live longer13.

The error cum hoc, ergo propter hoc also occurs quite frequently when the observed relationship between A and B is not the effect of a causal dependence of B on A, but of A on B (cf. the logical fallacy of affirming the consequent, described in Subsection 2.2.2).

13 inspired by: K. Szymanek, Sztuka argumentacji – Słownik terminologiczny, Wyd. Nauk. PWN, Warszawa 2004, pp. 72–73.


Example 17.4: The observation that the scent of rose geranium may be a cause of someone’s headache is not a sufficient premise to conclude that, if somebody is suffering from a headache, then a geranium flower must be present in the room.
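The invalidity of this inference scheme can also be checked mechanically. The sketch below is only an added illustration (here A stands for “a geranium flower is present” and B for “a headache occurs”): an argument form is valid when every assignment of truth values that makes all premises true also makes the conclusion true, and a simple enumeration shows that modus ponens passes this test while affirming the consequent does not.

    # Truth-table check of two inference schemes involving "A implies B".
    from itertools import product

    def valid(premises, conclusion):
        # valid iff every valuation satisfying all premises satisfies the conclusion
        return all(conclusion(a, b)
                   for a, b in product([True, False], repeat=2)
                   if all(p(a, b) for p in premises))

    implies = lambda p, q: (not p) or q

    # modus ponens: A -> B, A, therefore B (valid)
    print(valid([lambda a, b: implies(a, b), lambda a, b: a], lambda a, b: b))  # True

    # affirming the consequent: A -> B, B, therefore A (invalid)
    print(valid([lambda a, b: implies(a, b), lambda a, b: b], lambda a, b: a))  # False

The falsifying valuation is, of course, the case of a headache occurring (B true) with no geranium in the room (A false).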

Recognising the causative role of a single cause where there are many causes underlies many scientific and everyday controversies.

Example 17.5: Longevity may be causally attributed to genetic and environmental factors, to physical and mental activity, to diet and to other elements of lifestyle. Very often, however, attempts are made, even in scientific discourse, to absolutise one of those conditions, e.g. genetic factors. Moreover, the role of such factors as vegetarianism may be considered negative by some disputers, and positive – by others. When a vegetarian person dies of a rare disease, we are inclined to blame vegetarianism for it; when another vegetarian person enjoys exceptionally long life in good health, we are inclined to credit vegetarianism as well.

Logical fallacies, underlying many jokes, are the source of their power to make us laugh.

Example 17.6: It is not difficult to identify the “logical fallacies” underlying the following jokes:
– The Soviet experts, who studied the reaction of a flea to cutting its consecutive legs, reported: “after cutting off the first leg, the flea skipped when ordered to skip”, . . ., “after cutting off the last leg, the flea got deaf”.
– Money does not bring happiness, but it is good to be unhappy sometimes14.

From an ethical point of view, logical fallacies intentionally introduced into a scientific discussion are equivalent to lies since, like lies, they may entail many negative (even harmful) consequences, such as wrong research decisions and the associated waste of research resources, false information disseminated in publications, as well as unjust evaluation of the researchers’ performance. If a logical fallacy results “only” from insufficient preparation of a debater for the academic profession (lack of certain intellectual skills), then it may be considered a negligence mistake which is, as a rule, correctable.

Some logical fallacies, intentionally introduced into argumentation, may violate the dignity of a debater. The genetic fallacy is one of them; it consists in drawing conclusions based not only on the substantial and logical value of the opponent’s arguments but also on some hypotheses about their possible psychological, religious or social conditioning. In its extreme form, when the substantial and logical value of an argument is almost non-existent, this

14 inspired by: P. Widz, “Nieszczęśliwy”, a cartoon available at http://www.widzo.pl.tl/Galeria/pic-1000040.htm [2018-05-07].


fallacy is called argumentum ad hominem (which is Latin for “argument to the person”).

Example 17.7: The fact that Professor A is a narcissistic person does not matter for the scientific evaluation of his research results. The fact that Senior Assistant B was born into a family of thieves is not proof that he was the one who misappropriated research funds15.

17.2.3 Eristic or art of being right

In the age of powerful media, we are permanently exposed to eristic manipulations exercised by politicians, journalists and professionals of public relations and marketing. An hour of political debate on television, coupled with a current opinion poll of viewers, is usually sufficient to note that the one who has the best arguments does not necessarily win: the audience often votes not in favour of those who are right, but in favour of those who are able to convince. This is not a new phenomenon: already 25 centuries ago, the Athenians made political and judicial decisions under the influence of skilled orators who were able to convince them using rhetorical tricks affecting their emotions or aesthetic feelings, or simply misleading them. The art of using those tricks, called eristic16, was already disapproved of by the greatest ancient philosophers, Plato and Aristotle. The intentional use of fallacious arguments is the core category of eristic tricks.

The title of this subsection is borrowed from the title of a booklet authored by the German philosopher Arthur Schopenhauer (1788–1860), first published around 183117, and often republished in various languages to this day18. If man were honest by nature, then – according to the author of this booklet – the goal of any discussion would be to seek the truth; the purpose of real-life discussions, however, is to win a debate rather than to seek the truth. Even if we ourselves avoid manipulating others, we should understand eristic dialectics so as to be able to recognise the insidious or perverse argumentation of our opponents, and to effectively defend ourselves against it. Arthur Schopenhauer identified 38 eristic stratagems, but the lists of them in the

15 J. Baggini, P. S. Fosl, The Philosopher’s Toolkit: A Compendium of Philosophical Concepts and Methods, 2010, Section 3.13. 16 from Greek eristikós = “eager for strife”. 17 A. Schopenhauer, Eristische Dialektik oder Die Kunst, Recht zu behalten Megaphone eBooks, 2008, http://www.wendelberger.com/downloads/Schopenhauer_DE.pdf [2018-10-04]. 18 e.g. A. Schopenhauer, The Art of Being Right (translated from German by T. B. Saunders in 1896), https://en.wikisource.org/wiki/Author:Thomas_Bailey_Saunders [2018-03-20].


contemporary sources of information on the art of conducting disputes are much longer19.

Example 17.8: Here are 16 out of 47 eristic tricks presented in a Polish dictionary of argumentation20. They have been selected according to the incidence of their occurrence in scientific or peri-scientific discussions:
– Refer to the opponent’s statements in as general a way as possible, even if he has formulated them in a very specific way.
– Formulate your own statements in as specific a way as possible, with numerous constraining assumptions and comments.
– Frequently use statements which are very probable per se or evident (such as “The rate of dissatisfied citizens will increase if salaries decrease while inflation and unemployment grow”).
– By false implications and slight distortions of the opponent’s statements, derive conclusions which do not follow from those statements, but which are convenient for you.
– Pretend objectivity and honesty by acknowledging your faults and accepting the opponent’s arguments in secondary issues.
– Generate insignificant premises in order to hide the conclusions you are aiming at.
– Quickly pass from one stage of reasoning to another to prevent the opponent from keeping up and from being able to spot weak points of your argumentation.
– If the opponent has accepted a specific statement, immediately generalise it and consider it as accepted by the opponent in the generalised version.
– Knowing that even brilliant arguments fade if repeated, provoke the opponent to repeat his best arguments.
– Make the opponent prove even the most evident statements and refuse to accept the veracity of his premises under any pretext.
– When losing a point due to the opponent’s time-and-energy-consuming reasoning, return to this point as frequently as possible as if it had not been sufficiently discussed.
– Never agree even on the most evident points proved by the opponent (say “let’s suppose for a moment that this is true . . .”) in order to demonstrate that, being motivated by good will, you are going to tolerate his imperfect reasoning.
– If the opponent asks you to formulate your objections more precisely and specifically, escape into common generalities, e.g. by referring to human fallibility.
– Put special emphasis on any argument that is making the opponent nervous (it is probably related to his weak point).
– Ironically pretend incompetence, e.g. by saying “it’s probably true what you are telling me, but I am unable to grasp it”, in order to suggest that the opponent’s statements are nonsensical.
– Use the refutation of a single argument provided by the opponent as a proof of the falseness of his whole opinion or stance.

19 L. M. Surhone, M. T. Tennoe, S. F. Henssonow (Eds.), Eristic: Dialogue, Eris (mythology), Argument, Truth, Betascript Pub., Riga 2010. 20 K. Szymanek, Sztuka argumentacji – słownik terminologiczny, 2004, pp. 138–141.


It is not difficult to explain how these eristic tricks violate the principles of rational discussion, and what moral evil is associated with their use in scientific discussions.

17.3 Publication of research results

This section contains methodological and ethical guidelines for publishing the results of technoscientific research. Since the volume and importance of unpublished research reports are constantly growing – because of the widespread use of electronic tools for their editing and distribution – it should be assumed that these guidelines also apply to such reports. Careful editing of unpublished research reports is of profound practical importance because it not only enables the readers of those reports to fully understand them, but may also help their authors in the preparation of related publications.

17.3.1 Content and form of scientific publications

According to the Polish logician and philosopher Kazimierz Ajdukiewicz, a scientific statement, especially a published statement, should meet a number of requirements concerning the significance, accuracy and rationality of its content, as well as the competence of its author in the subject-matter. First of all, the scientific statement should concern matters that are important to science and may significantly enrich it; it could be, for example:
– a new proof of a known and proven claim,
– a clarification or justification of a claim which is known but rather uncertain,
– a systematic presentation of known claims,
– a statement that cannot be easily derived from existing knowledge,
– the formulation of a new problem.
Secondly, the scientific statement should be formulated with proper accuracy: both individual sentences and their composition must present certain states of affairs in an unambiguous way; they cannot be just vague assertions subject to various interpretations. Thirdly, the scientific statement should reflect the rational stance of its author with respect to the uncertainty of scientific knowledge: the firmness with which he makes his claims should correspond to the degree of their justification. Fourthly, the scientific statement should not be blemished with blunders revealing the ignorance of its author in the subject-matter.21

21 K. Ajdukiewicz, “O wolności nauki”, 1957.


The reader of a scientific article trusts its author not only in the matter of the veracity of the description of the research methodology and research results but also in the matter of the reliability of conclusions drawn from those results. It is, therefore, a good publication practice to include in a scientific article a sufficiently detailed description of the subject and methodology of a reported study, so as to enable an interested reader to replicate this study. The article should contain, in particular, the description of the procedures for acquisition and processing of measurement data, enabling the reviewer and then the reader to assess their correctness and reliability. The requirements in this respect, although they may differ from one discipline to another, always include the obligation to accurately and permanently store the source data, so as to make possible – when necessary – the verification of the final results of the study and of the conclusions derived from them, in particular – by an independent researcher interested in replicating the study22. The postulate of sharing data with readers of a scientific article is supported by the editorial offices of some journals which request the authors of submitted manuscripts to deposit the data on the editorial server23.

It should be, however, noted that repeating experiments described in scientific journals is today exceptionally rare for at least two reasons: (1) the editors are reluctant to publish the results of replicated experiments unless they are inconsistent with the results of the original experiments; (2) the results of replicated studies, even if methodologically improved, are on the whole less appreciated by research-evaluation systems than the results of brand new studies. The same applies to the negative results of research, i.e. to the results which do not meet initial expectations, in particular – do not confirm a tested hypothesis. Despite historical evidence that negative results may be a driving force of scientific progress, scholars are less inclined to “boast” about them, believing that they are worthless for science; reviewers are suspicious about them; editors are reluctant to publish them. It seems, however, that such attitudes may threaten the objectivity and openness of science24.

The most frequent infringements of ethical rules related to publishing are the following: using fabricated or falsified results of experiments, plagiarising the form of expression or ideas from earlier works, presenting research hypotheses as conclusions from research results and exaggerating or faking conclusions. “Fabrication, falsification, and plagiarism are the academic equivalents of lying, cheating, and stealing”25. The fabrication and falsification of data has already been covered in Subsection 16.3.1; the plagiarism of the form of expression, i.e. violation of

22 cf. On Being a Scientist: Responsible Conduct in Research, 2009, pp. 9–11. 23 cf. D. B. Resnik, The Price of Truth – How Money Affects the Norms of Science, 2007, p. 164. 24 ibid., p. 156. 25 D. S. Kornfeld, “Perspective: Research Misconduct the Search for a Remedy”, Academic Medicine, 2012, Vol. 87, No. 7, pp. 877–882.


copyright, will be discussed in Section 18.2; here, a few ethical issues related to the plagiarism of ideas are presented and analysed. The term plagiarism26 denotes a form of theft. Plagiarism of ideas is understood as an explicit or presumed presentation of information, taken from an external source, as one’s own and original, i.e. without indicating that source by citation or quotation. The latter two nouns are not synonyms, although their meanings are very similar:
– The verb “to cite”27 means to use something someone has said or written as a validation or proof of what one is saying; it is not reproducing the same words but the idea, principle or theory behind them.
– The verb “to quote”28 means to exactly (literally) repeat something someone has said or written, and to provide its source.
Plagiarism of ideas (being a violation of ethical principles) can at the same time constitute plagiarism of the form of expression of those ideas (being a violation of copyright) when ideas are copied without changing the form of their expression. The obligation to indicate the source of ideas does not apply to common knowledge such as the Pythagorean theorem or Newton’s laws of motion; but what about Maxwell’s equations? The answer to this question depends on the addressees of the publication: there would rather be no need to cite the source of those equations in an article intended for specialists in electrodynamics; but it would certainly be desirable in an article addressed to biologists. The blurred boundaries of common knowledge significantly impede the strict definition of the plagiarism of ideas.

The study reported in an article should be presented in the context of earlier contributions to the relevant scientific discipline. The author of the article is, therefore, obliged to find original publications representative of the up-to-date knowledge about the research problem and its broader technoscientific context (justifications, applications). Furthermore, the author should cite sources of scientific information which:
– have influenced the nature of the research reported in the article,
– may improve the intelligibility of the article,
– have been directly used in the research work or in the article,
– support conclusions presented in the article,
– contradict conclusions presented in the article.

26 from Latin plagiare = “to kidnap”. 27 from Latin citare = “to summon” or “to call”. 28 from Latin quotare = “to distinguish by numbers” or “to number chapters”.


The most frequent mistakes associated with citing sources of scientific information are the following:
– the incompleteness of bibliographic data, making it impossible to identify a cited work unequivocally,
– the lacking or dubious logical relationship between a cited work and the context where the reference to this work appears,
– a disproportionately large share of the author’s own publications in the total number of cited works, unjustified by the nature of the problem under study,
– the lack of the most recent publications on that problem.
All those mistakes may be a priori qualified as negligence errors, but closer scrutiny may suggest that the author has committed them intentionally: to hide his ignorance or the lack of novelty in the article, to promote his own research contributions from the past or to improve the bibliometric indicators characterising those contributions. Similar motivation may push him to the self-plagiarism of the form of expression or of ideas.

Self-plagiarism29 is the use of one’s own previously published material without disclosing its source and without, if legally required, obtaining the consent of the copyright holder. Self-plagiarism is usually not regarded as a serious ethical problem, but as a bad publication practice, which under US law, however, may lead to copyright infringement30 (cf. Chapter 18). It is today often viewed as an academic misbehaviour because – on the one hand – it can distort the citation impact of the author’s publications, and – on the other – it can skew meta-analyses and review articles. For these reasons, the vast majority of the editors of scientific journals allow only the processing of manuscripts which have not been published before, and which are not subject to parallel processing in any other journal. Many editorial offices even require the authors to formally declare that the manuscript has not been sent in parallel to other editorial offices.

The same article can be published in two different journals only under special circumstances, and only with the consent of the editors of both journals. Such duplicate publication may be, for example, justified by the interdisciplinary nature of the article and the lack of an appropriate interdisciplinary journal; it may thus be motivated by the need to deliver the research results to two different scientific communities. It is also acceptable, and even desirable, to publish in a journal an article being the expanded and updated version of a paper presented during a scientific conference

29 tagged also with such labels as “autoplagiarism”, “duplicate publication”, “multiple publication” or “redundant publication”. 30 N. Barber (Ed.), Encyclopedia of Ethics in Science and Technology, 2002, pp. 248–249.


and published in the proceedings of this conference. Such partially redundant publication of research results is justified by the intention to make their preliminary version available as quickly as possible (at the conference) and their mature version – later (in the journal). Another generally accepted form of re-publishing a conference or journal article is to include it in a book being a collection of monothematic contributions31. In all three cases, the author is obliged to inform the readers of the later version of the article about the existence of its earlier version.

While plagiarism and self-plagiarism are extensively discussed in the literature devoted to the ethics of publishing, there are relatively few sources providing analyses and requirements concerning that part of a scientific article which contains conclusions drawn from the reported research results. This is, after all, an issue of exceptional importance in the age of the overabundance of scientific information, when we are unable to read all the articles which are relevant to our research, and must choose the most relevant ones on the basis of information provided in their abstracts and conclusion sections. The number of readers of conclusions is, therefore, incomparably greater than the number of readers of the rest of the article, which means – on the grounds of consequential ethics – that our responsibility for the content of conclusions is greater than for other contents of our article. Paradoxically, the errors of argumentation and reasoning, discussed in the previous subsection, occur here more frequently than elsewhere. Unfortunately, these are not only negligence errors but also deliberate manipulations aimed at increasing the chances of a manuscript being accepted for publication, and of being recognised by the readers when published. Exaggerated and faked conclusions are most commonly used for this purpose; as a rule, they may be detected only after reading the whole article, and only if it contains enough details. Since generalisation is the main goal of every mature scientific discipline, generalisations unjustified by empirical data belong to the most common abuses of this type.

The category of negligence errors includes linguistic and editorial faults. Some of those faults, however, are committed intentionally; then they deserve a more severe ethical appraisal than negligence errors. These include, in particular, the insertion of excessive redundancy, and the use of rhetorical tricks and “euphemisms”. The use of the latter in scientific publications has a long tradition, and it has been the subject of many jokes ridiculing the scientific community. Most frequently, it is intended to mask methodological failures by means of misleading terminology and phraseology.

31 ibid.


Example 17.9: The table below contains several samples of the euphemistic language of scientific publications32:

Euphemism: “it is known since a long time” – Interpretation: “I have not searched the relevant literature”
Euphemism: “it is generally known” – Interpretation: “some researchers say”
Euphemism: “it is impossible to provide definitive answers to those questions, but . . .” – Interpretation: “experiments failed, but I hope to be able to publish their results”
Euphemism: “three objects have been selected for experimentation” – Interpretation: “experiments with other objects of the same class failed”
Euphemism: “typical results are presented in the table” – Interpretation: “the best results obtained are presented in the table”
Euphemism: “for the time being, no theory has been developed that would provide an interpretation of the presented results” – Interpretation: “I have no idea how to approach this issue”

The use of euphemisms is effectively supported by the use of scientific jargon specific to a narrow subfield of study. The aim of both is, as a rule, to exaggerate the significance of a given contribution33 by impressing unprepared readers. A huge amount of scientific banality is produced in this way.

17.3.2 Authorship of scientific publications

Since the seventeenth century, the person who first published an idea has been considered its author, rather than the person who first conceived it. This unwritten convention was adopted ca. 350 years ago by the secretary of the Royal Society of London, Henry Oldenburg (1619–1677), who also introduced the practice of reviewing manuscripts, intended for publication, by independent experts. This convention contributed to the rapid dissemination of research results because their publication meant that they could be used in further research and practical applications, as well as in other publications, provided the author of those results was honoured with an appropriate citation or quotation34. The role of citing and quoting in scientific activity is to reliably present the achievements, and especially the views, of other scientists.

32 R. Z. Morawski, “Ethical Aspects of Empirical Research”, Proc. 5th Congress of Metrology (Łódź, Poland, September 6–8, 2010). 33 J. M. Ziman, “Information, Communication, Knowledge”, 1969. 34 On Being a Scientist: Responsible Conduct in Research, 2009, p. 29.


Initially, the authorship of a scientific publication had only a moral significance; however, with the development of copyright in the nineteenth century, it also acquired material significance, especially for the author himself. In the second half of the twentieth century, there appeared another factor systematically increasing the importance of the authorship of publications, viz. the development of research funding systems which refer to such indicators of the researchers’ performance as the number of publications and the number of citations. Consequently, the importance of the guidelines for establishing the authorship of scientific publications considerably increased. At the end of the twentieth century, it increased even more because of the disclosure of many instances of misconduct in this area – on the one hand – and because of the growing organisational complexity of research projects, followed by the diversification of the roles of researchers in their implementation – on the other. As a consequence, scientific institutions and associations, as well as editors of scientific journals and research funding agencies, began to introduce more and more formalised rules for establishing authorship35.

There is today a widespread agreement that the authorship of a scientific article should not be granted to someone who has not made a significant experimental, technical or theoretical contribution to the study whose results are prepared for publication, but that it should be granted to someone who designed and conducted a key experiment, and then interpreted its result36. According to the highest ethical standards, authorship is due only to the researchers who have directly and significantly contributed to the reported study, and who are able to assume responsibility for all the statements contained in the publication. Thus, the authorship of a scientific article is due only to a person who has made a significant intellectual contribution to its contents and to the process of its final editing, who knows the article in detail, and who is able to defend all the statements contained in it. This principle protects the authors’ team against possible negative consequences of the situation when an error is detected, and one of the authors denies his responsibility for this error37.

Assuming responsibility for the whole article by each of the authors can be difficult in the case of interdisciplinary research. An acceptable formal solution of this problem is a note, usually a footnote on the first page of the article, briefly characterising the scope of responsibility of each of the authors. A substance-related solution, sometimes applied, consists in asking trusted colleagues to read and check those parts of the article with respect to which an author feels insufficiently competent38.

35 F. L. Macrina, “Authorship and Peer Review”, 2005 (3rd edition), pp. 61–62.
36 ibid., p. 63.
37 A. E. Shamoo, D. B. Resnik, Responsible Conduct of Research, 2015 (3rd edition), pp. 128–130.
38 On Being a Scientist: Responsible Conduct in Research, 2009, p. 37.


The above-outlined ethical standards regarding authorship are to a greater or lesser extent implemented by various scientific communities. They are completely ignored by some of them, e.g. publications reporting the results of research projects carried out under the auspices of the European Organization for Nuclear Research (CERN) are authored by all the participants of those projects39. On the other hand, many collective works are published which are signed only with the name of a company, institution or association, or just with the name of the head of the authors’ team. This last solution at least dispels the impression that nobody is responsible for the contents and quality of the work.

Because of the constantly increasing complexity of the “rules of the game” regarding authorship, more and more journals provide guidelines for potential authors, concerning the principles for determining the authorship of an article and the related requirements concerning the authors’ responsibility for its content; sometimes also the specification of the role of individual authors in the preparation of the manuscript. Almost without exception, the authors are requested to transfer the copyrights to the publisher of the journal; this can be done, on behalf of all authors, by a person indicated as the corresponding author. The authors are requested by some editorial offices to provide permissions for the reuse of already copyrighted materials40. Some editorial offices require, moreover, the submission of a written consent for the use of unpublished information, expressed by its authors who would prefer to avoid direct citation of this information before its publication41. The editorial offices of the majority of biomedical journals ask the authors of articles reporting experiments involving the use of research materials (cells, microorganisms, reagents, etc.) to declare their readiness to make those materials available to interested readers – usually for a fee and in limited quantities42.

Many editorial offices ask the authors to appoint one of them to be their representative in the contacts with the office. Regardless of the name (senior author, primary author, corresponding author, etc.), the role of such a person may be limited to the correspondence with the office and the coordination of the communication among the authors, or it may also include some responsibilities such as the specification of the contribution of individual authors, the preparation of the responses to reviews, and even the full responsibility for the content of the article43.

39 For example, the list of the authors of the following paper (more than 5000 names) takes 25 out of 33 pages: G. Aad, et al., “Combined Measurement of the Higgs Boson Mass in pp Collisions at sqrt(s) = 7 and 8 TeV with the ATLAS and CMS Experiments”, Physical Review Letters, 2015, Vol. 114, pp. 1–33 of the paper #191803. 40 cf. F. L. Macrina, “Authorship and Peer Review”, 2005 (3rd edition), p. 65. 41 ibid., p. 67. 42 ibid. 43 ibid., pp. 69–70.


The codes of good publication practices appear not only on the websites of editorial offices but also on the websites of research institutions and technoscientific associations.

Example 17.10: The International Committee of Medical Journal Editors (ICMJE) has issued a set of recommendations concerning manuscripts submitted to biomedical journals44. It includes the following recommendations concerning authorship:
– “The ICMJE recommends that authorship be based on the following 4 criteria:
1. Substantial contributions to the conception or design of the work; or the acquisition, analysis, or interpretation of data for the work; AND
2. Drafting the work or revising it critically for important intellectual content; AND
3. Final approval of the version to be published; AND
4. Agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
– In addition to being accountable for the parts of the work he or she has done, an author should be able to identify which co-authors are responsible for specific other parts of the work. In addition, authors should have confidence in the integrity of the contributions of their co-authors. [. . .] all individuals who meet the first criterion should have the opportunity to participate in the review, drafting, and final approval of the manuscript.”
– “Contributors who meet fewer than all 4 of the above criteria for authorship should not be listed as authors, but they should be acknowledged. Examples of activities that alone (without other contributions) do not qualify a contributor for authorship are acquisition of funding; general supervision of a research group or general administrative support; and writing assistance, technical editing, language editing, and proofreading. Those whose contributions do not justify authorship may be acknowledged individually or together as a group under a single heading [. . .]”.45

The lists of authors are ordered according to various criteria depending on the scientific discipline. Over the past 50 years, the average number of authors per article has increased significantly, and the share of articles with the alphabetical list of authors decreased almost to zero. Increasingly, the authors’ names are ordered according to the importance of their contribution to the preparation of the article or to the research work whose results are reported there46. Some scientific communities dedicate the first or the last place on the list of authors to the head of the research work whose results are reported in the article47.

44 Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals, International Committee of Medical Journal Editors (ICMJE), December 2017 (update), http://www.icmje.org/icmje-recommendations.pdf [2018-04-03]. 45 The list of journals stating that they follow the ICMJE Recommendations is provided at the address: http://www.icmje.org/journals-following-the-icmje-recommendations/ [2018-09-29]. 46 F. L. Macrina, “Authorship and Peer Review”, 2005 (3rd edition), p. 71. 47 On Being a Scientist: Responsible Conduct in Research, 2009, p. 35.

362

17 Ethical aspects of information processing

The co-authorship of a scientific publication is the highest form of the recognition of someone’s contribution to the preparation of this publication and to the research work it concerns. A substance-related contribution of lesser importance, as well as formal or technical contribution, may be credited with a statement of acknowledgment, with a reference to a relevant (published or unpublished) document (also on the internet) or with a reference to a spoken statement (usually called private communication or personal communication). It should be noted that any form of the recognition of someone’s contribution to the publication requires his consent. Already 50 years ago, John Ziman wrote that because of the great impact of private contacts on the progress of science, the reference to someone’s unpublished document or spoken statement is more valued by many researchers than a citation of their publications48. This observation has probably become obsolete in the age of formalised systems for evaluation of scientific achievements and scientific institutions, entirely relying on bibliometric indicators. Neither accomplishments related to research organisation (acquisition of funding, general supervision of a research group and general administrative support), nor editorial accomplishments (writing assistance, technical editing, language editing and proofreading) are sufficient reasons for appearing on the list of authors of a scientific publication; they are, however, sufficient for being included in the statement of acknowledgment49. Both undervaluation and overvaluation of someone’s contribution to the process of producing and publishing original information may imply a certain kind of explicit or implicit plagiarism. Here are the most common, morally doubtful practices related to the composition of the list of authors of a scientific publication: – gift authorship, i.e. offering a place on the list of the authors to a person not (sufficiently) involved in the research work whose results are published, most frequently with the intention to “multiply” this person’s publication score; – honorary authorship, i.e. offering a place on the list of the authors to a person not (sufficiently) involved in the research work whose results are published, but who “deserves” gratitude, with the intention to strengthen the scientific position of this person as the head of a research group, laboratory or institute; – prestige authorship, i.e. offering a place on the list of the authors to a person of prestige in order to increase the attractiveness of the publication;

48 J. M. Ziman, “Information, Communication, Knowledge”, 1969. 49 F. L. Macrina, “Authorship and Peer Review”, 2005 (3rd edition), p. 73.

17.3 Publication of research results

363

– ghost authorship, i.e. entrusting the editorial work of a publication to a professional writer, whose name is not to appear on the list of authors, in order to ensure high literary quality of the publication.50 Special attention must be paid when the list of candidates for the potential co-authorship includes a researcher and a project sponsor, a researcher and his boss, a researcher and a subcontractor, a Ph.D. student and his advisor, as well as members of an interdisciplinary or virtual research team.

17.3.3 Publication policy and its ethical implications It is a good publishing practice to include in the scientific article a sufficiently detailed description of an experiment, so that the interested reader can repeat it. Although this practice is of key importance for scientific progress, since it serves the purpose of eliminating errors and frauds, it is more and more often impossible: the demand for pages devoted to the description of experiments increases in proportion to their complexity, while the journals are inclined to limit the number of pages per article to augment the number of articles per issue or per volume. A good way to compensate for the lack of space in scientific journals is to publish a full research report on the internet and to insert a link to this report in the related article. Many journals, however, regard this practice as publishing full research outcomes, and refuse to “re-publish” them partially in the form of a scientific article; so, before displaying the research results on the internet, one should make sure that this move will not block their publication in a regular journal51. Some research projects generate outcomes whose full disclosure is contraindicated for ethical reasons, i.e. because of their possible use which may threaten public security, public health, agriculture, wild plants and animals or broadly understood natural environment52. There are numerous examples of such projects among those involving manipulations on genetic material. Publishing their results in accordance with the requirement of reproducibility is problematic from ethical point of view because their full disclosure may serve not only to their verification but also to dangerous activities, contrary to the intentions of the researchers. So, any resolution of this dilemma will be detrimental either to ethical or to methodological values.

50 A. E. Shamoo, D. B. Resnik, Responsible Conduct of Research, 2015 (3rd edition), pp. 124–125. 51 On Being a Scientist: Responsible Conduct in Research, 2009, p. 32. 52 cf. ibid., p. 34.

364

17 Ethical aspects of information processing

Today, in the age of massive acquisition and storage of data, performed by both academic and non-academic institutions, the hypothesis-driven research is evolving, in many areas, towards data-driven research: automatic or semi-automatic exploration of data (data mining) is becoming a more-and-more important method for finding new hypothetical relationships among events, phenomena or processes. Since access to data is a critical factor of research productivity, the researchers who have invested time and money in creating a new database are reluctant to share it with other scholars who have not made such an investment. The reason is the lack of trust, motivated by a high probability of unfair competition: a larger research group, being better equipped with research infrastructure, may be tempted to use that database for accelerating the progress of their research work, and to publish research results before the owners of the database do it. The time-limited exclusivity to use the database for publication purposes, guaranteed by a legal contract to its owner, seems to be a partial remedy for this problem.53 The development of scientific journals in the eighteenth and nineteenth centuries, accompanied by the maturing of legal instruments for protection of intellectual property, fostered the openness of science by broadening public accessibility of the documents containing research results which previously had been often kept secret54. Although the total volume of published materials is still growing steadily (as well as the number of scientific journals), technoscience is getting less open because of the growing reluctance of researchers to publish important research results, especially those having high potential for being applied. This is an unquestionable social loss, but in the case of industrial research it does not entail any negative consequences because enterprises are not obliged by law to publish the results of their research55. In the case of patentable research results, the abstention from publishing them is justified by patenting rules that preclude the patentability of an invention already published. Moral doubts, however, appear when the prospects for patenting prevent a researcher from publishing some newly obtained results56. In the business atmosphere, permeating scientific institutions, we are prone to forget that the publication of research results is a sine qua non condition for recognising them as scientific.

53 54 55 56

D. B. Resnik, The Price of Truth – How Money Affects the Norms of Science, 2007, p. 165. ibid., p. 100. ibid., p. 155. On Being a Scientist: Responsible Conduct in Research, 2009, p. 39.

17.3 Publication of research results

365

Example 17.11: Parship is an international online dating service57, whose activity is based on the matchmaking method developed by a retired Professor of Psychology from Hamburg (Fakultät für Erziehungswissenschaft, Psychologie und Bewegungswissenschaft, Universität Hamburg). This method consists, generally speaking, in comparing the personalities of potential partners on the basis of the results of the scientific compatibility test (as it is called by Parship58). However, the Swiss radio DRS2 revealed the opinions of American psychologists and sociologists that there is no single scientific publication where the matchmaking method used by Parship would be described and empirical evidence for its effectiveness would be provided59. The company has, of course, the right to preserve a trade secret; only the abuse of the adjective “scientific” is morally problematic here.

In the research institutions whose activity is evaluated on the basis of the quantity and quality of publications, a stratagem – which consists in publishing research results without significant details of the research procedure used for their generation – is increasingly applied. The reason for their hiding is fear – unfortunately, too often justified by bad experience – that new ideas can be stolen by the participants of the process of qualifying a manuscript for publication60. The evaluation of research institutions on the basis of their publication output also entails some other malpractices aimed at artificial amplification of that output. Such practices include: – exchange of places on the lists of authors of publications, – “strategic” fragmentation of research results to be published, – duplicate publication of the same research results, – mutual (substantially unjustified) citation, – other tricks aimed at artificial increase in the number of citations. The latter category includes the editors’ practice which consists in exerting pressure on the authors of manuscripts submitted to a journal, aimed at making them cite as much as possible the articles published in this journal. It also includes the distribution of already published articles, among the specialists potentially interested in the topics covered by those articles, with a “kind” request for citing them. The multiple publishing of the same research results is not only motivated by the desire to increase individual bibliometric indicators – which are taken into account in the procedures of academic promotion and research funding – but also by the intention to attract the attention of potential research sponsors (especially pharmaceutical

57 whose website is available at http://www.parship.com/ [2018-04-03]. 58 on its UK website located at https://uk.parship.com/ [2018-04-03]. 59 P. Biber, Die berechnete Liebe, Radio SRF2 Kultur (Podcast) Sendung von 4. Mai 2011, https:// www.srf.ch/sendungen/kontext/die-berechnete-liebe [2018-05-05]. 60 D. B. Resnik, The Price of Truth – How Money Affects the Norms of Science, 2007, p. 101.

366

17 Ethical aspects of information processing

companies) which are interested in multiplication of publications containing positive opinions about their products, expressed by “independent” scholars61. Example 17.12: Is it better to publish a series of small articles, dealing with selected details of a research project under implementation, or rather – after completion of this project – a comprehensive article presenting the whole research topic together with cognitive and practical conclusions drawn from the final research findings? This is a real moral dilemma since the first solution is usually better for the researchers’ careers, the second – for science. Who is to blame for the epidemic, both individual and institutional, preference for the first solution?

It seems that in recent decades, the awareness of pathologies – entailed by the systems for assessing scientific achievements, relying predominantly on bibliometric indicators – has grown considerably62. Those systems almost entirely ignore the quality of publications, as measured by epistemic criteria. An article with an evident mistake can have thousands of citations if it falsely suggests solving an important or fashionable problem which attracts attention of many researchers. Being aware of this weakness of those systems, some research institutions limit the number of publications which are taken into account in the evaluation of the candidates for employment or promotion; some agencies that finance research do the same when evaluating grant applications63. In summary, the following factors, already mentioned in this section, may be indicated as the main driving forces of publication-related misconduct: – the rules of research financing and researchers’ promotion (the syndrome “publish or perish”), – the researchers’ ignorance concerning research methodology, – the researchers’ ignorance concerning ethical aspects of publication practice, – a general decline of linguistic culture in society. In addition, many cases of publishing falsified or fabricated data may be associated with financial factors. It would be difficult to establish a direct causal relationship between an unethical behaviour of a scholar and his financial involvement in a research project; it seems, however, that financial conditions may cause such ethically problematic irregularities as biased interpretation of data (in favour of the promoted hypothesis or in favour of a product which is manufactured by the research sponsor), defective attribution of the authorship

61 ibid., p. 160. 62 M. Binswanger, Sinnlose Wettbewerbe – Warum wir immer mehr Unsinn produzieren, Herder Verlag, Freiburg im Breisgau 2010, Kapitel 7. 63 On Being a Scientist: Responsible Conduct in Research, 2009, p. 33.


of scientific publications or patents, redundant publication of the same research results or blocking the publication of certain research results64.
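To illustrate how little the bibliometric indicators mentioned above reflect the substance of the publications they count, one may look at a widely used example of such an indicator, the Hirsch index, which is computed from citation counts alone. The sketch below is only an illustration (the citation counts are invented): the computation uses no information whatsoever about the correctness, originality or quality of the cited articles.

    # Illustration only: the Hirsch index (h-index) depends solely on citation counts.
    def h_index(citations):
        # h = the largest h such that at least h papers have at least h citations each
        counts = sorted(citations, reverse=True)
        return sum(1 for rank, c in enumerate(counts, start=1) if c >= rank)

    # Invented citation counts: a flawed but fashionable paper cited 900 times
    # contributes to the index exactly as a correct and important one would.
    print(h_index([900, 40, 25, 12, 7, 3, 1]))  # 5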

17.4 Reviewing process

17.4.1 General principles

The term review applies in science to a formal assessment of a document (a work) with the intention to change it (if necessary) or to make a decision based on its content. Not only publications (articles and books) but also many other documents are subject to reviewing: B.Sc. projects, M.Sc. and Ph.D. theses, grant applications, as well as applications concerning the employment or promotion of researchers. In each case, the reviewer is directly responsible for issuing a competent and reliable opinion, but not for the decision taken on the basis thereof. Indirectly, however, he is also responsible for the quality of publications, the value of academic degrees, the competence of academic staff, as well as for the advancement of scientific research.

The term peer review applies in science to the evaluation of a document (or work) by a person (or persons) of a similar level of competence to that of the author (or authors) of the document (or the producer of the work). This is a form of organised criticism, built into the system of technoscience, without which it cannot function. The following examples indicate two sensitive areas of the reviewer’s responsibility:
– A single negative statement in the review of a grant application may entail the squandering of a valuable research project.
– The failure to detect a serious defect in a reviewed manuscript may cause littering of the infosphere with a nonsensical or dangerous piece of information.
A reviewer, like a judge, should be competent, independent, impartial, just and honest. The reviewer is also expected to maintain the confidentiality of reviews, to respect intellectual property, to disclose possible conflicts of interest and to be professional in both formal and technical matters related to peer reviewing. This latter expectation refers not only to the ability to reliably use an online peer-review system but also to the ability to express opinions in a communicative and polite way, without resorting to personal attacks and phrases widely recognised as obscene.

64 cf. D. B. Resnik, The Price of Truth – How Money Affects the Norms of Science, 2007, pp. 91–103.


Example 17.13: An undisclosed conflict of interest may be a source of the following morally problematic behaviour of a reviewer: he turns out to be very “tolerant” with respect to weak points of a manuscript containing positive opinions about the products of a company he is the co-owner of, and hypercritical with respect to another manuscript in which positive opinions about the competitive products are expressed65.

Not only the lack of substance-related competence and an actual or potential conflict of interest but also the lack of time may be a contraindication for undertaking a review task. There are, however, some dilemmatic situations when those reasons for declining the invitation to review a manuscript should give priority to the arguments in favour of accepting it. For example:
– The refusal of an expert possessing rare competences regarding the subject-matter of the manuscript may entail the appointment of an incompetent reviewer.
– The refusal of a person having a reasonable presumption of irregularities related to the research reported in the manuscript (e.g. concerning falsification of data) may entail the publication of fraudulent results.

17.4.2 Editorial practices

The standard criteria for evaluation of a manuscript include:
– the compliance of its subject with the profile of a journal it has been submitted to,
– the originality and significance of research reported in the manuscript,
– the proper characterisation of the state of the art in the relevant domain of knowledge,
– the correctness of experimental methodology and infrastructure,
– the quality of argumentation used in the justification of the research aims and methodology,
– the quality of argumentation used in the discussion of research results and in the conclusions derived from them,
– the formal (logical, linguistic, graphical, bibliographic, etc.) correctness of the manuscript.66
Numerous examples of more detailed recommendations for reviewers can be found on the internet websites of the editorial offices of scientific journals67. According to

65 cf. S. G. Bradley, “Managing Competing Interests”, 2005, p. 171. 66 cf. A. E. Shamoo, D. B. Resnik, Responsible Conduct of Research, 2015 (3rd edition), p. 144. 67 e.g. the guidelines for reviewers of ca. 2000 journals published by Elsevier are available at https://www.elsevier.com/reviewers/how-to-conduct-a-review [2018-05-10].


a 2013 meta-analysis of 13 independent studies68, the main reasons for manuscript rejection are the following:
– The research results lack originality (novelty) or significance (e.g. they are not generalisable, are obsolete because of the advent of new technologies or techniques, replicate already published findings without adding any substantial knowledge; coincide with already available knowledge, but are claimed to be novel because of an obvious extension to a new area of application; have no implications of theoretical or practical significance, or are obvious for a reader “skilled in the art”).
– The manuscript does not match the journal’s profile (e.g. its content lies outside the scope of the journal, or may be of interest only to a very narrow or very specialised group of the journal’s readers).
– The design of the reported study is methodologically deficient (e.g. the research problem is poorly formulated, the assumed approach to this problem is conceptually questionable, the methods used for solving it are unsuitable or unreliable, the instrumentation used for experimentation is inappropriate or unreliable, the programme of data acquisition does not guarantee a sufficiently high level of their completeness and a sufficiently low level of their uncertainty, or the statistical methods used for data analysis and interpretation are inadequate or misused).
– The manuscript is structurally deficient (e.g. the introduction does not establish the background of the reported study, and the rationale behind it is not sufficiently explained; the study itself is not placed in a broader context, the review of pertinent literature is insufficient, the description of the research methodology is not precise enough, the discussion does not provide any interpretation of the research results or the conclusions do not appear to be supported by the research results).
– The manuscript is inadequately prepared in terms of vocabulary, grammar, style or composition.
Interestingly, the above long list does not explicitly address any ethical reason for manuscript rejection, such as plagiarism or falsification of data. A closer analysis of this list, however, enables one to guess that behind each of the pragmatic or methodological reasons mentioned there may be hidden an ethical issue of lesser importance, such as negligence or lack of diligence in performing research operations.

68 “Most Common Reasons for Journal Rejection”, Editage Insights, November 12, 2013, https://www.editage.com/insights/most-common-reasons-for-journal-rejection [2018-05-05].


The reviewer of a manuscript is to help both the editorial team – in making the right decision about the further steps in processing the manuscript – and its author(s) – in improving the final form and contents of the manuscript; he should be, therefore, both critical and kind, insightful and constructive. Idle polemics, focused on demonstrating the reviewer’s own intellectual superiority, are ethically reprehensible as a mere waste of time and material resources69.

When the opinions of independent reviewers are consistent and unambiguous, the editor usually makes the decision following these opinions, i.e. the manuscript is accepted for publication unconditionally, or it is accepted on condition that the authors are ready to correct and modify it in line with the reviewers’ comments and suggestions, or it is rejected. When the reviewers’ opinions are divergent or ambiguous, the editor appoints an additional reviewer, or makes a decision based on his own understanding and interpretation of those ambiguous opinions; in the latter case, he also serves as an auxiliary reviewer.

Example 17.14: A conflict of interest can be the source of the following, morally doubtful, action of the editor of a scientific journal (or a member of its editorial board): he delays the publication of an article to gain time and submit a proposal for funding research on the problem solved in this article – before its appearance70.

The reviewer of a manuscript generally remains anonymous to its author. This arrangement protects the reviewer against possible repercussions of issuing a negative opinion, but it also creates a sense of “impunity” which may open the door to morally problematic behaviours whose negative consequences can then be neutralised only by diligent editorial supervision. The opinion of the reviewer is often influenced by substance-unrelated factors, such as the author’s authority or the reputation of the institution he is affiliated at. It happens that the reviewer criticises a manuscript unjustly because he does not like one of its authors or his methodological stance; on the other hand, the editor may choose such reviewers if he too does not like the author for personal or ideological reasons71. At first glance, a system of double-blind review may seem to mitigate the influence of those factors since in this system not only does the author not know the name (and affiliation) of the reviewer, but the reviewer also does not know the name (and affiliation) of the author. It turns out, however, that in practice the benefits of using this system may be problematic because it may be impossible to hide the authors of the

69 cf. F. L. Macrina, “Authorship and Peer Review”, 2005 (3rd edition), p. 76. 70 cf. S. G. Bradley, “Managing Competing Interests”, 2005, p. 171. 71 N. Barber (Ed.), Encyclopedia of Ethics in Science and Technology, 2002, pp. 244–245.


manuscript without hiding the references, while evaluation of the quality of references is an important part of any review. Editorial practices are currently undergoing significant changes because more and more scientific journals are published in electronic form; more and more abstracts of research articles are available, free of charge, on the websites of the editorial offices and in the databases offered by specialised services; some of them also offer, free of charge, the full texts of all or selected research articles72.

Example 17.15: The scholarly research database IEEE Xplore73 indexes abstracts and provides full-text articles and papers on computer science, electrical engineering and electronics. This database mainly covers publications which appear under the auspices of the Institute of Electrical and Electronics Engineers (IEEE) and the Institution of Engineering and Technology (IET). The total number of documents offered by IEEE Xplore currently exceeds 4.5 million. The abstracts of those documents are available free of charge, while the price of a full-text article is 15 USD for IEEE Members and 33 USD for non-Members (as of June 2018); of course, this price may be significantly lower when covered by an institutional subscription.

The recent changes in the publication business not only make access to scientific sources easier and broader, but also affect the working style of editors, reviewers and authors, e.g. the deadlines for delivery of reviews are shortened, and the expectations regarding the scope of literature cited by the authors of manuscripts are growing. The most revolutionary aftermath of those changes is, however, the appearance and rapid expansion of open-access journals in which the costs of publishing are covered by the authors or by the institutions they are affiliated at (cf. Section 18.5 for more details).

17.4.3 Decline of scientific criticism

Reviewing-related misconduct includes plagiarism of ideas and unjust, unjustified or substance-lacking assessment of the manuscripts or other reviewed documents. It seems that the latter two weak points of the peer-review system occur

72 An extensive list of academic databases and search engines is available in Wikipedia at https://en.wikipedia.org/wiki/List_of_academic_databases_and_search_engines [2018-04-07]. 73 available at http://ieeexplore.ieee.org/Xplore/home.jsp [2018-06-22].


more and more often, and this is a manifestation of the decline of scientific criticism. As Maciej W. Grabski (1934–2016), the founder of the Foundation for Polish Science, has put it: “The general tendency of recent years is the disappearance of scientific criticism [. . .] as well as widespread acceptance of mediocrity and risk-free self-confinement to insignificant contributions”74. Scientific criticism is dying because:
– Its substantial results are not (and probably cannot be) taken into account in the formal evaluation of scientific achievements, which is an integral part of the procedures related to academic promotion or competitive allocation of research funds.
– As a consequence of the integration of science with business and politics, intellectual competition has been step by step replaced with market competition.
– The struggle for survival in technoscience is based on strategies combining competition with cooperation, characteristic of political life.
– The methodological and ethical awareness of successive generations of researchers is getting lower and lower.
The decay of real scientific criticism is affecting even the procedures of awarding scientific degrees. Positive but grossly superficial reviews of doctoral dissertations appear more and more frequently for various reasons. The most common among them are: the reviewer’s incompetence, his laziness coupled with a deficient sense of responsibility, his awareness of institutional pressures and of the mutual dependence of the members of a scientific community, his fear of being accused of low motives, and his need to manifest personal benevolence. The peer-review system also seems to fail more and more frequently. Its most flagrant dysfunctionalities are the ineffectiveness of reviewers in detecting even quite obvious errors, and gross discrepancies between the opinions of reviewers, caused by their lack of accuracy and objectivity.

Example 17.16: An article accepted for publication was intentionally corrupted with eight serious errors and sent to 200 reviewers. None of them detected all the errors; on average, a reviewer detected two of them.75

74 M. W. Grabski, “Lepperyzacja nauki”, Zeszyty Towarzystwa Popierania i Krzewienia Nauk, 2005, No. 43, pp. 65–78. 75 N. Barber (Ed.), Encyclopedia of Ethics in Science and Technology, 2002, pp. 244–245.


Example 17.17: A spectacular provocation, exposing weak points of the peer-review system in the humanities and social sciences, is described in the book Fashionable Nonsense. . .76. One of its authors submitted a manuscript entitled “Transgressing the boundaries: toward a transformative hermeneutics of quantum gravity” to the editorial office of the prestigious American journal Social Text. This manuscript, despite being full of absurdities and of conclusions not following from premises, was published in a 1996 special issue devoted to the response to the criticism of postmodernism and social constructivism made by several outstanding scholars77. At that time, the repetition of such a provocation in the empirical sciences seemed impossible; less than 20 years later, this conviction was painfully compromised. In 2013, the science journalist John Bohannon submitted a fake scientific manuscript to 304 open-access journals. In it, he described the anticancer properties of a chemical substance extracted from a lichen. Despite “grave and obvious scientific flaws”, 60% of the journals accepted the manuscript for publication78.

The replicability crisis is another symptom of the weakness of the peer-review system. The evidence has been growing, at least since the beginning of the twenty-first century, that the results of many scientific studies are difficult or impossible to replicate or reproduce in subsequent attempts made either by independent researchers or by the researchers who first published them. The highest incidence of such situations is observed in psychology and the life sciences.

Example 17.18: In 2015, a group of 270 collaborators, led by Brian Nosek, published the results of a large-scale experiment aimed at the replication of 100 psychological studies presented in high-impact journals. They reported that only 39 of the 100 attempts were successful, which means that 61% of the original results could not be fully replicated79.
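A brief arithmetical side note may be added here: a reported replication rate such as the 39% quoted above is itself an estimate burdened with sampling uncertainty. The minimal Python sketch below attaches a rough confidence interval to such a rate; the numbers merely reuse the 39-out-of-100 figure from the example, and the choice of a simple normal-approximation interval is an illustrative assumption of this sketch, not part of the cited study.

```python
import math

def replication_rate_interval(successes: int, trials: int, z: float = 1.96):
    """Point estimate and approximate 95% confidence interval (normal/Wald
    approximation) for a replication rate observed in `trials` attempts."""
    p_hat = successes / trials
    half_width = z * math.sqrt(p_hat * (1.0 - p_hat) / trials)
    return p_hat, max(0.0, p_hat - half_width), min(1.0, p_hat + half_width)

# Figures from Example 17.18: 39 successful replications out of 100 attempts.
rate, lower, upper = replication_rate_interval(39, 100)
print(f"replication rate: {rate:.2f}, approx. 95% CI: [{lower:.2f}, {upper:.2f}]")
# Prints: replication rate: 0.39, approx. 95% CI: [0.29, 0.49]
```

Even under this crude approximation, the interval stays well below 50%, so the qualitative conclusion of the example does not hinge on the exact count.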

The peer-review system has many drawbacks, but – just like democracy – it is still better than any alternative solution. These drawbacks include the tendency to eliminate articles containing significantly new or controversial ideas. The editorial offices of some journals (e.g. Behavioral and Brain Sciences) try to mitigate that effect by publishing, together with each article, the corresponding reviewers’ comments80. It is an expensive solution (because of the cost of additional pages), but it effectively activates the reviewers’ sense of responsibility for their opinions81.

76 A. Sokal, J. Bricmont, Fashionable Nonsense: Postmodern Intellectuals’ Abuse of Science, Picador Pub., New York 1998. 77 A. D. Sokal, “Transgressing the Boundaries: Toward a Transformative Hermeneutics of Quantum Gravity”, Social Text, 1996, Vol. 46/47, pp. 217–252. 78 J. Bohannon, “Who’s Afraid of Peer Review?” Science, 2013, Vol. 342, No. 6154, pp. 60–65. 79 Open Science Collaboration, “Estimating the Reproducibility of Psychological Science”, Science, 2015, Vol. 349, No. 6251, pp. 943–951. 80 N. Barber (Ed.), Encyclopedia of Ethics in Science and Technology, 2002, pp. 244–245. 81 ibid.


Another drawback of the peer-review system, which appears in journals of high selectivity, is a kind of wastage of good articles: the worst article accepted may be only slightly worse than the best article rejected, while both may differ very much from all articles published in a competing journal of lower selectivity; moreover, the need to review a very large number of submitted manuscripts may delay the publication of the best ones82. The lack of objectivity and insightfulness can also lead to blocking the publication of research results that are incompatible with generally accepted paradigms of technoscience, even when – as it turns out later – their contents are of key importance for technoscientific progress. The development of internet journals, which select manuscripts for publication in a more or less traditional way, seems to alleviate this problem. It is, however, also increasing the number of articles which are reasonably rejected by the editorial offices of some journals and then accepted by others; the latter phenomenon must provoke some doubts of both an epistemic and an ethical nature.

17.5 Research grant applications

The following criteria are usually taken into account by the agencies responsible for the allocation of research funds in the procedures they use for the evaluation of grant applications:
– the technoscientific significance of a proposed project,
– the research methodology to be implemented in the project,
– the qualifications and scientific record of the principal investigator and other participants of the project,
– the quality of the research infrastructure to be used in the project,
– the level of organisational and financial support offered by the institution the applicant is affiliated at,
– the adequacy of the expected costs to the declared objectives of the project,
– the anticipated range of dissemination of the expected research results,
– the anticipated social and economic impact of possible application of research results,
– the compliance of research activities with the relevant normative acts.
The misconduct associated with the preparation of grant applications consists, therefore, in providing false information that increases the value of the application in terms of the above-listed criteria. The most typical infringements are the following: fabrication and falsification of empirical and formal data,

82 ibid.


making unjustified promises concerning expected research results, and linguistic manipulations aimed at increasing the probability of being financed, e.g. by “fitting” the proposed project to the current priorities of research funding agencies. Apart from the moral weakness of the applicants, there are several systemic factors contributing to the epidemic occurrence of those infringements. The highly competitive methodologies of the distribution of research funds, which exclude any substance-related discussion in the evaluation of the proposed projects, are the most detrimental among them. Such methodologies are driving the process of replacing fair intellectual competition with unfair market competition: more and more grant applicants, when confronted with the moral dilemma “to be honest or to be financed”, are choosing the second option. An additional negative consequence of unhealthy competition in technoscience is the disappearance of informational openness and of real scientific discussion also outside the context of seeking funds for research. More and more researchers are aware that the premature disclosure of important research ideas may exclude their financing because unfair competitors can be faster in including them in their own grant applications. The demoralising impact of unhealthy competition may be detected in the language of grant applications which are, as a rule, stuffed with skilful phrases of self-praise. The applicants quickly learn, by the painful experience of being rejected, that – to reduce the probability of the next failure – they must efficiently use their ability to advertise and highlight their past achievements, and to promise more than potential competitors can do. It is, thus, a school of conceit and pride which – in all the great religious and wisdom traditions of humanity – have been considered deadly sins or disvalues, while humility has been perceived as an “aristocratic state of mind”. Although many scholars of today (like those of the past) are oversensitive to their own importance in the history of technoscience, humility still remains a noble ideal worth pursuing. The system of research financing based on individual grants has led – first in the USA, and next in other countries which copied that solution from the USA – to the atomisation of academia; for many years a research team, consisting of a single professor surrounded by a group of Ph.D. students, was an optimum from the point of view of the chances of “being financed”. For obvious reasons, this excluded the possibility of undertaking large, interdisciplinary and intellectually risky projects. Quite frequently, therefore, the proposed projects have not been defined in response to actual cognitive or utilitarian needs of a scientific discipline, but according to the structure of competences and promotional needs of research teams. Unsurprisingly, such practices have aroused significant moral doubts among the researchers more aware of the mission of science. Over and over again, they have had to find a rational answer to the question whether it would be better to apply for another grant to support routine research activities and financially strengthen the research team, or rather to make a risky attempt at personal


scientific development and help Ph.D. students in their attempts to follow this line83. Existing research funding systems engender numerous ethical problems related to the operation of their decision-making bodies. As a rule, the members of those bodies have access to key information about the methodology of decision-making, including information on current priorities regarding project types and ways of preparing applications. Moral doubts appear if they use this information to prepare their own applications or to advise fellow applicants on how best to prepare them. There are even more doubts about the various practices of selecting reviewers, aimed at obtaining the desired (personally motivated) result of project evaluation. In order to counteract such practices, some research funding agencies require applicants to list the persons active in a relevant area of technoscience with whom they are related in a way which might be a source of a conflict of interest84.

83 cf. On Being a Scientist: Responsible Conduct in Research, 2009, p. 43. 84 ibid., pp. 43–44.

18 Legal protection of intellectual property

This chapter contains basic information about legal regulations concerning intellectual property (Sections 18.2 and 18.3), elements of an ethical analysis of them (Section 18.4), and an attempt to outline the trends of their evolution (Section 18.5). Distinguishing the legal and ethical aspects of intellectual property protection is important because of many doubts, of both an ethical and a pragmatic nature, which may be a source of moral dilemmas resulting from the conflict between the duty to observe the law and the values, especially moral values, which to some extent may be threatened by the legal regulations in force. Dura lex, sed lex – therefore, no statement in this chapter can be treated as legal advice, nor as an incentive to violate the law. The awareness of the existence of serious arguments against the legal protection of intellectual property does not absolve anyone from the duty to comply with the law in force, but it should encourage him to deepen ethical reflection on its current form.

18.1 Basic concepts related to intellectual property

Intellectual property is defined by analogy to material property, which is traditionally understood as a set of rights of the owner of a material object to exclusively use that object, to exclude other entities (individuals, collectives or institutions) from its use, and to make it available to selected entities by contract. An entity that does not respect the owner’s right to exclusive use of the object is a perpetrator of theft. The rationale justifying the existence of property in material objects is their scarcity: when an object cannot be simultaneously used by all interested entities, the need for managing this object appears. Intellectual property (IP) is understood as a plurality of rights related to products of the human mind, i.e. products of intellectual activity in the literary, artistic, scientific and industrial fields. Thus, it applies not only to pieces of literature and music but also to scientific discoveries and inventions, computer programs, industrial designs and technologies, and trade information. Industrial property is a part of IP, including patents for inventions, utility models, industrial designs, trademarks, trade names and geographical indications1. The products of intellectual creation are objects of IP regardless of the proportion between the heuristic and algorithmic components they contain; this proportion has, however, an impact on the scope and nature of the legal protection they may be eligible for. The idea of legal protection of IP finds a philosophical justification in the theories referring to:

1 Understanding Industrial Property, World Intellectual Property Organization, Geneva 2016 (2nd edition), http://www.wipo.int/edocs/pubdocs/en/wipo_pub_895_2016.pdf [2018-04-09], pp. 28–32. https://doi.org/10.1515/9783110584066-018


– a utilitarian conviction that society should stimulate the creative activity of its members by introducing appropriate material incentives2;
– the belief, implied by John Locke’s understanding of the law of nature, that any creator should have the exclusive right to the fruits of his work, as long as this right does not infringe upon the common social interest3;
– the views expressed by Georg W. F. Hegel and Immanuel Kant, according to which any piece of work is an externalisation of the individual personality of its creator; so, legal protection of this piece of work favours the development of his personality4;
– the views, expressed, i.a., by Thomas Jefferson (1743–1826) and the “young” Karl Marx, according to which legal protection of creativity fosters the formation of a just society and the development of culture5;
– the theory of justice, proposed by John B. Rawls, according to which information is a primary good, and the legal protection of IP contributes to its fair distribution6.
The idea of legal protection of IP appeared in the fifteenth century as a consequence of the acceleration of the socioeconomic development of the European continent, manifested by accelerated demographic growth, by progress in mathematics and medicine, as well as by the flourishing of literature and the fine arts7. The first known patent, viz. a privilege for the production of stained glass, was given to John of Utynam by King Henry VI of England in 14498. In Poland, the oldest patent-like privilege was given in the 1420s by King Władysław Jagiełło to a carpenter named Piotr, who mastered the art of building efficient drainage machines9. The need to protect authors’ rights appeared after the dissemination of printing technology and reading skills among the upper social classes in the sixteenth century, but the idea of copyright developed only in eighteenth-century England, where in

2 W. Fisher, “Theories of Intellectual Property”, [in] New Essays in the Legal and Political Theory of Property (Ed. S. Munzer), Cambridge University Press, Cambridge (UK) 2000; D. Moore, “Personality-Based, Rule-Utilitarian, and Lockean Justifications of Intellectual Property”, [in] The Handbook of Information and Computer Ethics (Eds. K. E. Himma, H. T. Tavani), Wiley & Sons, Hoboken (USA) 2008, pp. 105–130. 3 R. P. Merges, Justifying Intellectual Property, Harvard University Press, Cambridge (MA) & London 2011, pp. 31–67; W. Fisher, “Theories of Intellectual Property”, 2000; A. D. Moore, “Personality-Based, Rule-Utilitarian, and Lockean Justifications of Intellectual Property”, 2008, Section 5.4. 4 R. P. Merges, Justifying Intellectual Property, 2011, pp. 68–101; W. Fisher, “Theories of Intellectual Property”, 2000; A. D. Moore, “Personality-Based, Rule-Utilitarian, and Lockean Justifications of Intellectual Property”, 2008, Section 5.2. 5 W. Fisher, “Theories of Intellectual Property”, 2000. 6 R. P. Merges, Justifying Intellectual Property, 2011, pp. 102–136. 7 cf. C. Murray, Human Accomplishment, 2003, pp. 309, 313, 314, 317 and 318. 8 A. E. Shamoo, D. B. Resnik, Responsible Conduct of Research, 2015 (3rd edition), p. 172. 9 M. du Vall, Prawo patentowe, Wyd. Wolters Kluwer Polska, Warszawa 2008, p. 40.


1710 the Parliament introduced protection of the rights of the authors of books and other publications10. Legal regulations regarding the protection of IP were territorial from the very beginning. In France, for example, a copyright protection system (droit d’auteur), differing from the English system, developed at the end of the eighteenth century together with a system of industrial property protection (propriété industrielle). Next, the English and French legal solutions became a source of inspiration for lawmakers in other European countries and in the USA. The geographic diversification of the systems of IP protection has continued until today, despite numerous attempts to make them convergent at the international level, undertaken over nearly 140 years. The first international agreement, known under the name of the Paris Convention, was signed by 20 countries in 1883. Today, the 1994 Agreement on Trade-Related Aspects of Intellectual Property Rights (the TRIPS agreement)11 is a document of key importance for the functioning of IP in international commercial exchange. According to this agreement, its signatories (164 countries12) honour each other’s legal regulations protecting IP; e.g. they treat the import of pirated software as piracy. Three principles are of key importance for the functioning of the TRIPS agreement: (1) the principle of minimum protection, (2) the principle of national treatment and (3) the principle of most-favoured-nation treatment. According to the first of these principles, minimum standards of protection to be provided by each signatory are established. Each of the main elements of protection is defined in terms of the subject-matter to be protected, the rights to be conferred and permissible exceptions to those rights, and the minimum duration of protection (where applicable). The principle of national treatment requires each signatory of the TRIPS agreement to accord to the nationals of other signatories treatment no less favourable than that which it accords to its own nationals with regard to the protection of IP. The principle of most-favoured-nation treatment requires that any advantage, favour, privilege or immunity granted by a signatory to the nationals of any other signatory shall be accorded immediately and unconditionally to the nationals of all other signatories. Thus, the principle of national treatment is to counteract discrimination against “foreigners” in favour of “natives”, and the principle of most-favoured-nation treatment is to prevent discrimination against “some foreigners” in favour of “other foreigners”. The TRIPS agreement harmonises the national systems of legal protection of IP, but it does not remove all the differences among those systems. Any legal analysis of an IP-related transaction should, therefore, be conducted in the context of the legal regulations of the state under whose jurisdiction this transaction is completed.


Legal regulations concerning copyright and patent law are of fundamental importance for research activity in technoscience; they are, therefore, discussed in more detail in the next two sections. In turn, Section 18.4 is entirely devoted to a critical analysis of these regulations, including their impact on the effectiveness of intellectual creation, on the owners of intellectual products and on the other users of these products. In particular, the material benefits obtained by IP owners arouse many questions of a moral nature, e.g.:
– “What should be the rights of researchers to benefit from the IP of the institution they are affiliated at?”
– “Should they retain any rights to the benefits from IP after leaving this institution?”
– “To what extent should the partitioning of these benefits among researchers reflect their qualifications?”13

18.2 Legal protection of author’s rights

Due to the common history and the harmonising impact of the TRIPS agreement, the national systems of legal protection of author’s rights in the EU countries are very similar. An outline of the principal provisions of one of them seems to be, therefore, a sufficient background for considering ethical problems related to IP generated in scientific research. Due to the academic affiliation of the author of this book, that outline will be based on the Polish system of legal protection of author’s rights as defined by the Act on Copyright and Neighbouring Rights, promulgated by the Polish Parliament in 199414, referred to in this section as the CNR Act. Since the results of research are more and more often published in journals and books issued by institutions operating under American jurisdiction, some differences between American and Polish (European) provisions will be indicated.

18.2.1 Subject and owner of author’s rights

According to the CNR Act, the object of author’s rights is any manifestation of creative activity of an individual nature, established in any form, irrespective of its value, purpose or form of expression. The most important examples of such objects are the following:

13 cf. On Being a Scientist: Responsible Conduct in Research, 2009, p. 40. 14 Act of February 4, 1994, on Copyright and Neighboring Rights (as amended up to October 21, 2010), Parliament of the Republic of Poland, http://www.wipo.int/wipolex/en/text.jsp?file_id=129377 [2018-04-12].


– literary, journalistic and scientific works expressed in words, mathematical symbols and graphic signs (including computer programs),
– artistic and photographic works,
– industrial design works,
– architectural and urban planning works,
– musical and lyrical works,
– theatrical and musical works, as well as choreographic and pantomime works,
– audio-visual works (including films).
A work to be protected by author’s rights is assumed to be original, but it does not have to be completely new and unobvious – minimal traces of individual creativity are sufficient to meet this requirement. That is why it is possible to obtain legal protection for the reproduction of a picture after introducing some non-trivial modifications, or for a compilation of facts if their selection or composition is creative. It is worth noting here that the requirement of originality is understood in a similar way under both Polish (European) and American law15. Since the protection may apply to the form of expression only, it cannot cover discoveries, ideas and theories, methods and procedures, principles of operation and mathematical concepts. This provision, excluding the universal protection of abstract contents, is supported by the argument that their monopolisation would drastically limit the creative freedom of society. For pragmatic reasons, no protection is guaranteed, moreover, to: legislative acts and their official drafts; official documents, materials, logos and symbols; published patent and industrial-design specifications; simple press information; commonly known artistic and architectural forms; elements of works lacking originality, such as statistical tables, standard drawings or alphabetic lists of terms. An author is granted the author’s rights to his work when this work is accomplished, i.e. when an idea is manifested using selected material means of expression; he is not obliged to complete any formalities, such as placing the symbol © on a copy of the work. In the case of a text work, only the sequence of alphanumeric characters, being the form of expression of an idea, is protected. Since the idea itself (regardless of the degree of its originality) remains unprotected, the plagiarism of ideas, reprehensible from an ethical point of view, is not an infringement of the CNR Act, while exact copying of the description of some ideas would be both morally and legally wrong if done in a manner inconsistent with the principles of citation (cf. Subsection 18.2.4).

15 T. D. Mays, “Ownership of Data and Intellectual Property”, [in] Scientific Integrity (Ed. F. L. Macrina), ASM Press, Washington D.C. 2005 (3rd edition), pp. 211–245.


Example 18.1: In 2007, Karl-Theodor Freiherr zu Guttenberg, the Minister of Defence of Germany in 2009–2011, obtained his doctoral degree at the faculty of law and economics of a Bavarian university (Rechts- und Wirtschaftswissenschaftliche Fakultät der Universität Bayreuth) for the dissertation entitled Constitution and Constitutional Treaty – Constitutional stages of development in the USA and in the EU (Verfassung und Verfassungsvertrag – Konstitutionelle Entwicklungsstufen in den USA und der EU). In February 2011, it was found that the dissertation was largely plagiarised. A detailed comparison of ca. 1200 fragments, copied without any reference to the sources, with the original texts can be found on the GuttenPlag Wiki website16, and full information about the case – on the university’s website17. The scandal shocked German public opinion and provoked a very strong reaction of the academic community: over 3,000 of its representatives signed a declaration on academic standards, entitled Erklärung von Hochschullehrerinnen und Hochschullehrern zu den Standards akademischer Prüfungen18.

It is presumed that the author is the person whose name has been indicated as the author on the copies of a work, or whose authorship has been announced to the public in any other way during the dissemination of that work. Co-authors enjoy copyright jointly. It is presumed that their shares are equal, but each of the co-authors may claim that the shares be determined by the court on the basis of his creative contribution to the work.

18.2.2 Moral versus economic author’s rights

The model of author’s rights adopted in Poland refers to the French tradition of separating two autonomous categories of rights, viz. moral rights and economic rights. The moral author’s rights are to protect the link between the author and his work. They include the rights:
– to authorship of the work,
– to sign the work with the author’s name or pseudonym, or to make it available to the public anonymously,
– to have the contents and form of the author’s work inviolable and properly used,
– to decide on making the work available to the public for the first time,
– to control the manner of using the work.
The moral author’s rights are unlimited in time and non-transferable. There is, however, an exception: some rights concerning computer programs may be transferred

16 available at http://de.guttenplag.wikia.com/wiki/GuttenPlag_Wiki [2018.05.05]. 17 available at http://www.neu.uni-bayreuth.de/de/Uni_Bayreuth/Startseite/info-zur-Causa-Guttenberg/index.html [2018-05-05]. 18 available at http://www.him.uni-bonn.de/uploads/media/Erklaerung.pdf [2018-05-05].


for a fee, viz. the right to inviolability of their content and form, the right to decide on their first release and the right to supervise the manner of their use. The economic author’s rights are to provide the author with the exclusive power to use his work and to manage its use throughout all the fields of exploitation, and to receive remuneration for the use of that work. Those rights belong originally to the author, but they are transferable: they may devolve to other individuals or institutions through inheritance or by contract; a contract for the transfer of the author’s economic rights or for the use of the work, called a licence, should cover the fields of exploitation specified expressly therein. The economic author’s rights are called copyright in the countries following the Anglo-Saxon legal tradition, which does not separate moral rights and economic rights; for brevity, this term will be used hereinafter. The copyright expires after the lapse of 70 years in the EU countries and 90 years in the USA:
– in the case of individual works – from the death of the author,
– in the case of joint works – from the death of the co-author who has survived the others,
– in the case of a work the author of which is unknown – from the date of the first dissemination, unless the pseudonym does not arouse any doubts as to the author’s identity or if the author disclosed his identity,
– in the case of a work with respect to which the copyright is enjoyed by a person other than the author – from the date of the first dissemination.
According to the CNR Act, preparing a derivative work or reproducing a database possessing the features of a piece of work by a legal user of the database or a copy thereof shall not require permission of the author of the database if it is required for access to the contents of the database and for normal use of its contents. Copyright applies to the use of the work in both material and non-material form. In the first case, it covers the reproduction of a work, the use of its adaptation and its rental, lending or resale; in the latter – public performance of the work, its displaying, broadcasting or making available on the internet. It is worth emphasising that conversion of the work to an electronic form does not lead to the creation of a new work, and therefore the dissemination of the electronic version – in particular on the internet or on CDs – is covered by the original copyright. Unless a contract of employment states otherwise, the employer, whose employee has created a piece of work within the scope of his duties resulting from the employment relationship, is authorised, upon acceptance of the work, to acquire copyright within the limits resulting from the purpose of the employment contract. If, however, an employee creates a certain work on his own initiative, then he is entitled to the copyright. If, for example, an academic researcher prepares an article


that he is not obliged to write, he will become the holder of the copyright. It is worth noting here that analogous provisions apply in the USA19. There are three categories of situations in which it is allowed to use, free of charge and without the permission of the author, his work protected by copyright, viz.:
– It is allowed to personally use a single copy of the work, disseminated with the consent of its author, and to make this copy available to a circle of people having personal relationships, in particular – any consanguinity, affinity or social relationship (more details in Subsection 18.2.3).
– Research and educational institutions are allowed, for teaching purposes or in order to conduct their own research, to use disseminated works in original and in translation, and to make copies of fragments of the disseminated work (more details in Subsection 18.2.4).
– It is allowed to quote, referring to the work, its fragments (or even the work in full if it is a minor work) within the scope justified by explanation, critical analysis, teaching or the rights governing a given kind of creative activity (more details in Subsection 18.2.4).
While moral author’s rights usually do not provoke any ethical or pragmatic controversies, the economic author’s rights – including their justification and social consequences – are subject to unceasing debates (more details in Subsection 18.2.4).

18.2.3 Scope of personal permissible use

The CNR Act defines the permissible personal use of a work protected by copyright. It enables a person (but not a legal entity) to use, up to a certain extent, an already disseminated work – together with people who remain in a close relationship to that person; it should be noted that shared use of the work consisting in its electronic multiplication is not excluded. The adequate interpretation of this provision requires some clarification concerning the circle of people who are in a close personal relationship with the owner of a copy of the work: a lasting and informal private bond seems to be necessary, while sporadic and institutional-only relationships (such as belonging to a group of employees or to a group of students) are not sufficient. Permissible personal use does not include computer programs and electronic databases that match the definition of a work assumed by the CNR Act (unless this concerns one’s own scientific use not related to a profit goal). It should be noted, moreover, that copyright protection covers only the listing of a program, and not the algorithms implemented in this program or its logical structure.

19 T. D. Mays, “Ownership of Data and Intellectual Property”, 2005, p. 221.


There is a lot of uncertainty about permissible copying of books by academic staff and students. A liberal interpretation of the corresponding provisions of the CNR Act enables one to gather that it is not unlawful to copy, for one’s own needs (as well as for persons remaining in a personal relationship), handbooks, scientific books and journal papers. The American provisions concerning copyright, however, yield grounds for restricting the maximum number of pages copied from one book, and such restrictions are usually rigorously enforced in American universities. Doubts are also frequently expressed about the practice of selling to students, at the price of copying, selected fragments (even chapters) of a textbook. It seems that, since “reproduction of works for the personal use of third parties” is allowed, this is not an illegal activity under Polish copyright law. Under American copyright law, however, where copying a scientific article for personal use is also permissible, such a practice is considered a copyright infringement, because it reduces the market value of the copied textbook, as it makes students less likely to buy it20. According to Article 6 of the EU Directive 2001/29/EC21, it is prohibited to remove technical safeguards blocking access to electronic works, regardless of whether the works accessed in this way are further used lawfully or not. The implementation of this recommendation significantly hinders the scientific use of electronic works made available by academic libraries only in the “reading mode”, since a researcher interested in quoting such works has to take notes manually instead of performing a copy-and-paste operation.

20 ibid., p. 222. 21 “Directive 2001/29/EC of the European Parliament and of the Council, of 22 May 2001, on the Harmonisation of Certain Aspects of Copyright and Related Rights in the Information Society”, Official Journal of the European Communities, 2001, No. L167, pp. 10–19.

18.2.4 Citation rules

Quoting is a form of fair use of works which serves not only to balance the interests of the authors of works and their users, but also to promote the development of technoscience and the dissemination of its findings. The scope of its applicability is defined by the CNR Act (Articles 26–29) in the following way:
– It is permitted to quote, in reports of current events, the works made available in the course of such events, however, within the limits justified by the purpose of the information.
– Research and educational institutions are allowed, for research or teaching purposes, to use disseminated works in original and in translation, and to make copies of fragments of those works.
– Libraries, archives and schools are allowed: to provide free access to copies of disseminated works; to make copies of disseminated works in order to supplement


them, or to maintain or protect one’s own collections; and to make the collection available for research or learning purposes through electronic channels.
– It is permitted to quote, in works constituting an independent whole, fragments of disseminated works or minor works in full, within the scope justified by explanation, critical analysis, teaching or the rights governing a given kind of creative activity.
– It is permissible to include, for teaching or research purposes, disseminated minor works or excerpts from larger works in textbooks and anthologies.
In all the above-mentioned cases, the size of a quotation should be adequate to the purpose it is to serve. It is difficult to provide a completely general interpretation of this requirement. There are, however, some rules of conduct which may be useful for the assessment of individual cases; their brief analysis, provided here, is limited to the scope of their applicability in technoscientific publications, where quoting is used to characterise the state of knowledge in a subject domain, to characterise the views and opinions expressed so far, to support or supplement the author’s own argumentation, and to challenge someone else’s argumentation. A quoted text should visibly differ from its context so as to enable the reader to unambiguously identify it, in particular – its beginning and end. This may be achieved by using quotation marks (quotes) or by applying a different font, size or colour to the quoted text (or any combination of those features). It should be noted that only the text faithfully quoted after the source is to be distinguished; any paraphrase of the original text (i.e. the presentation of its meaning using other words) is not to be distinguished, but – like a quotation – must be accompanied by a reference to the source. There is a practical problem related to a paraphrase longer than one sentence: there is no unique and generally accepted way to indicate the beginning of such paraphrases. In this book, for example, the beginning of such a paraphrase is marked with the beginning of a paragraph, and its end – with a reference to the source, placed after the full stop of the last sentence in that paragraph. This is a space-saving solution, but not very reliable from the point of view of the reader’s perception. Most often, the beginning of a paraphrase is signalled with a phrase of the type “J. Brown proposes. . .” in its first sentence, and the end of that paraphrase – with a phrase of the type “J. Brown concludes. . .” in its last sentence. The bibliographic data of the source of a quotation should be detailed and precise enough to enable the reader to unambiguously identify this source. In the case of works available on the internet, in addition to the address of the relevant website, the date of checking its content for the purpose of quotation should be provided. In the case of a book or a more extensive article, it is also advisable to make it easier for the reader to reach the quoted passage by indicating the pages, a section or a chapter where the original text is located. It is both legally and morally acceptable to refer to a secondary source of a quoted text, i.e. to a source quoting this text after its primary (original) source. The bibliographic description of such a quotation should, however,


leave no doubts as to its primary and secondary source. The requirement of referring to sources of information does not apply to facts and claims which are commonly known or obvious. It would be counterproductive to oblige authors of scientific publications – for the sake of copyright protection – to cite the sources of all known pieces of knowledge they use in reporting their research, including those most trivial, colloquial or well-known to the potential readers. The author of a work, being the holder of the copyright for this work, may lawfully reuse its parts in his further creative activity. He is, however, morally obliged to inform the reader about the origin of the reused parts. The author of a technoscientific publication may legitimately refer to his own works when confronting his own views with the views of others, or his own current views with his own views from the past; he may also document, in this way, his temporal priority with respect to an invention, discovery or concept. The failure to provide information on the origin of the reused parts is an instance of misconduct called self-plagiarism. It is reprehensible from both a deontological and a consequentialist point of view – as an unjustified departure from good academic practice, as well as an act which may mislead both the readers and the evaluators of the author’s academic output. The need to cite the original source appears, in particular, when a corrected, updated and expanded version of a conference paper is published as a journal paper or a chapter of a post-conference book. The failure to provide information on the origin of the reused parts may turn out to be a violation of copyright law if the author has previously transferred the copyrights associated with the original, e.g., to a publishing house. To avoid misunderstandings, one should carefully read the terms and conditions of the copyright transfer, especially if it is based on the law of a foreign country.

Example 18.2: It is impossible to publish an article in any of the prestigious scientific journals issued under the auspices of the Institute of Electrical and Electronics Engineers (IEEE) without transferring copyrights to this American professional association. The author of a manuscript, submitted to the editorial office of any of those journals, has to sign the IEEE Copyright Form which contains the following provision: “Authors/employers may reproduce or authorize others to reproduce the Work, material extracted verbatim from the Work, or derivative works for the author’s personal use or for company use, provided that the source and the IEEE copyright notice are indicated”22. Until 2010, this document obliged the authors (or employers) to request “permission from the IEEE Copyrights Office to reproduce or authorize the reproduction of the work or material extracted verbatim from the work, including figures and tables”. This requirement – which meant, in practice, that the reuse of a drawing from one’s own article published in an IEEE journal was legally impossible without permission – was an unpleasant surprise for many European authors.

22 available at https://www.eptc-ieee.net/uploads/exhibitors-pdf/59c22cf8b853e_Copyright%20form.pdf [2018-04-14].


Taking into account that, by definition, the term plagiarism means a kind of theft, the concept of reverse plagiarism may seem to be an oxymoron. It is not, however, if it is understood as an infringement, committed by the author of a scientific publication, which consists in attributing to somebody, especially an “authority”, a work that he did not create, or in giving credit to a source which has not been used in that publication. The motivation of the author in both cases is to strengthen the credibility of his own publication. It should be noted, however, that in the first case the author may be accused of the violation of personal goods (in the form of broadly understood scientific creation) of the falsely cited person, or of the violation of this person’s good name. The most frequently asked questions concerning citation rules address the admissible size of a quotation and the admissibility of a quotation including drawings and tables. There is no agreement about general answers to these questions; so, they should be answered on a case-by-case basis, taking into account ethical principles and specific circumstances. Divergent views are also expressed about the admissibility of displaying electronic versions of scientific publications on websites. Over time, more and more publishers (IEEE may be an example23) allow this practice provided that full information about the copyright holder and about the rules of using the displayed publication is attached. The plagiarism of the form of expression, which is a violation of the law, and the plagiarism of ideas, which is a violation of ethical principles “only”, have been extensively discussed in academic milieus. A simple internet search enables one to quickly realise how large the extent of this phenomenon is. Various institutions, including the most important publishers and service providers, offer internet tools for detecting plagiarism24. This may indicate both an increase in its incidence and a decrease in the tolerance towards it. The question arises as to whether the current legal solutions regarding plagiarism are adequate to the needs of the information society, and if so – whether the judicial decisions concerning plagiarism cases are sufficiently clear to ensure effective counteraction of this phenomenon in times of common availability of the internet and of electronic tools for copying works protected by copyright. The divergence of views concerning this question does not facilitate the actions aimed at appropriate reforms.
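The internet tools for detecting plagiarism mentioned above are not described here in any detail; as a purely illustrative sketch – not a description of any particular commercial service – the Python fragment below shows one elementary technique such tools may build upon: comparing the sets of overlapping word n-grams of two texts and reporting their Jaccard similarity. The sample passages and the decision threshold are invented for the example.

```python
def ngrams(text: str, n: int = 5) -> set:
    """Return the set of overlapping word n-grams of a text (case-folded)."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard_similarity(text_a: str, text_b: str, n: int = 5) -> float:
    """Jaccard similarity of the n-gram sets of two texts (0.0 to 1.0)."""
    a, b = ngrams(text_a, n), ngrams(text_b, n)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

# Invented sample passages, used only to illustrate the comparison.
submitted = ("the measurement data were processed with a standard least squares "
             "algorithm implemented in the laboratory software")
source = ("the measurement data were processed with a standard least squares "
          "algorithm implemented in commercial software")

score = jaccard_similarity(submitted, source)
print(f"n-gram similarity: {score:.2f}")
if score > 0.5:  # an arbitrary threshold chosen for this example
    print("passage flagged for manual inspection")
```

Real plagiarism-detection services combine variants of such text-similarity measures with very large reference corpora and with human judgement; the sketch only conveys the basic idea of automated textual comparison.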

23 cf. the section “Policy” of the document available at http://www.ieee.org/publications_standards/publications/rights/copyrightmain.html [2018-04-15]. 24 An overview is available, for example, at https://elearningindustry.com/top-10-free-plagiarismdetection-tools-for-teachers [2018-04-30].


Example 18.3: In 2006, William Swanson, the head of Raytheon Co., was accused of committing plagiarism by copying – in his book Swanson’s Unwritten Rules of Management25 – the descriptions of 16 management rules (out of 33) from William J. King’s book The Unwritten Laws of Engineering, published in 1944. The Raytheon company punished him by lowering his income by a million dollars. As the discussion carried out in the magazine The Institute26 demonstrated, there were also defenders of William Swanson’s case. . .

18.3 Legal protection of inventor’s rights

The national systems of legal protection of inventor’s rights in the EU countries are very similar, for the same reasons as the national systems of legal protection of author’s rights are. An outline of the principal provisions of one of them seems to be, therefore, a sufficient background for the consideration of ethical issues, related to IP, which appear in technoscientific research. Again, due to the academic affiliation of the author of this book, that outline will be provided for the Polish system of legal protection of inventor’s rights as defined by the Industrial Property Law, promulgated by the Polish Parliament in 200027, referred to in this section as the IPL Act. Since research results are more and more often patented with the US Patent Office, also by non-Americans, some differences between American and European provisions will be indicated. The special significance of the US patent system is due to its exceptional efficiency and dynamics of functioning. Pioneering solutions introduced in this system usually become a model for solutions introduced in other countries some time later. This was the case, for example, with the consecutive steps made towards granting protection to inventions in the fields of biotechnology and computer technology.

18.3.1 Subject of inventor’s rights

A patent28 is a set of rights to the exclusive use of an invention by a person or by a legal entity, granted by a patent office, and confirmed by a patent document containing the description of this invention and the claims defining the scope of its legal protection. To be patentable, an invention must be a solution of a technical

25 W. H. Swanson, Swanson’s Unwritten Rules of Management, Raytheon Co., Waltham (USA) 2004. 26 B. Munroe, “Borrowed Rules”, IEEE – The Institute, 2006, Vol. 30, No. 4, p. 5. 27 Act of June 30, 2000, on Industrial Property (as amended up to June 29, 2007), Parliament of the Republic of Poland, http://www.wipo.int/wipolex/en/text.jsp?file_id=194987 [2018-04-17]. 28 from Latin patere = “to open” or “to make available for public inspection”.


problem that meets three criteria, viz.: it is new, non-obvious (inventive) and useful or industrially applicable. An invention is considered new if it is not part of the state of the art in the relevant subject-matter on the day on which priority to obtain a patent is determined. The state of the art should be understood here as everything made available to the public by means of a written or oral description, as well as by using, displaying or disclosing in any other way before that day. An invention cannot be considered new if, for example:
– a product based on this invention has already been available on the market,
– its essential idea has been presented at a conference or published in a journal article,
– this idea was disclosed in a published description of a patent which has not been granted,
– all its distinctive features, indicated in the claims, are elements of the state of the art.
An invention is non-obvious if it contains at least one inventive element, i.e. an element which cannot be logically derived from the state of the art by “a person skilful in the art”. The latter term is a legal fiction referring to an abstract person assumed to have average skills and knowledge in the relevant subject-matter (without being a genius), and therefore to be able to make an objective assessment of the invention claimed in the patent application. The criterion of industrial applicability is met by an invention if it can be implemented technically in a reproducible manner, i.e. if by means of that invention a product may be manufactured or a process may be used, in a technical sense, in any kind of industry or in agriculture. This fact is considered documented if the patent application contains an exemplary embodiment of the invention, i.e. its description and justification, including the technical means necessary for its implementation in a reproducible and safe manner. It is worth noting that it is not required that the implemented invention be effective. One can patent: products, machines and devices; manufacturing methods and methods of operation; material products defined by means of technical features (referring to the construction or composition of the invention); and specific methods for exerting technical influence on matter. It should be noted that the mere discovery of an unknown substance, found in nature, is not patentable, but a method for producing a new technical effect by means of this substance may be patented, e.g. a method for using a newly discovered gene in gene therapy. According to the IPL Act, the following accomplishments are not patentable:
– discoveries, scientific theories and mathematical methods;
– creations of exclusively aesthetic value;
– schemes, rules and methods for performing mental acts, doing business or playing games;


– creations whose workability or utility may be proven impossible in light of generally accepted scientific knowledge (e.g. perpetuum mobile);
– programs for computers;
– presentations of information.

Moreover, patents cannot be granted for:
– inventions whose use would endanger public order or morality (e.g. cloning of people or modification of the genetic identity of the human germline);
– plant or animal varieties or essentially biological processes for the production of plants or animals (except for microbiological processes or the products thereof);
– methods for treatment of the human or animal body by surgery or therapy, or diagnostic methods applied on human or animal bodies (except for products applied in diagnostics or treatment).

Mathematical methods, including numerical algorithms, are not patentable, but their specific technical implementations are. For this reason, almost until the end of the 1990s, patent applications were rejected by the US Patent Office if they contained claims confined to numeric algorithms only, but they were considered if at least one claim referred to the use of a device or system. That is why, in the years of the profuse development of specialised applications of computers for the processing of measurement data, a large number of American patents were issued whose titles began with the words "Method and apparatus for . . ."29. At the end of the 1990s, the tendency to recognise the patentability of methods themselves appeared in the US patent jurisprudence30 because in 1998 the US Federal Court of Appeals ruled that a general-purpose computer with software implementing a business procedure could be patentable31. During the decade 1998–2008, the US Patent Office was flooded with patent applications of this category. The 2008 reaction of the US Patent Office to this phenomenon was the withdrawal from granting patents for methods as such32. Nevertheless, the USA remains the only country where software may still be patented.

There is some controversy about the exclusion of inventions whose use could violate public order or morality. This exclusion appeared as an objection to the use of state authority for supporting inventions that defy fundamental values. A lot of ethical and scientific controversies arise, for example, from patenting biological material, which has been possible in the USA for almost 40 years.

29 e.g. A. Barwicz, R. Z. Morawski, M. Ben Slima, Apparatus and Method for Light Spectrum Measurement, US Patent #6,002,479 issued on December 14, 1999.
30 e.g. R. Z. Morawski, A. Barwicz, M. Ben Slima, A. Miękina, Method of Interpreting Spectrometric Data, US Patent #5,991,023 issued on November 23, 1999.
31 T. D. Mays, "Ownership of Data and Intellectual Property", 2005, p. 232.
32 S. J. Frank, "The Death of Business-Method Patents", IEEE Spectrum, March 2009, pp. 32–35.


Example 18.4: In 1980, the US Supreme Court ruled that a new strain of bacteria, developed by Ananda M. Chakrabarty by means of recombinant DNA techniques, can be patented because it is a useful product with characteristics unlike any found in nature; useful because being capable of metabolising crude oil, and therefore applicable for cleaning up oil spills33. This decision of the US Supreme Court opened the door to patenting – first in the USA and next in other countries that followed this way – DNA, RNA, proteins, hormones, cell lines, microorganisms, genetic tests, gene therapy techniques, recombinant DNA techniques, genetically modified plants and mice.

18.3.2 Moral and economic inventor's rights

According to the IPL Act, a patent guarantees its holder the exclusive right to exploit the invention, for profit or for professional purposes, throughout the territory of the Republic of Poland, for 20 years. It should be noted that the scope of legal protection guaranteed to the patent's holder is determined only by the claims contained in the patent, while the specification of the invention with corresponding drawings, contained there, may be used only for interpretation of those claims. The patent holder has the right to prevent any third party, not having his consent, from exploiting his invention for profit or for professional purposes by performing any of the following actions:
– making, using, offering, or putting on the market a product that is the subject-matter of the invention;
– importing the product for such purposes;
– employing a process that is the subject-matter of the invention, as well as using, offering, putting on the market or importing for such purposes the product directly obtained by that process.

The patent holder has the right to authorise another party to exploit his invention by means of a licence agreement. It should be noted, however, that the licence is required for the professional profit-oriented use of the invention, but not for its private non-profit application. It should also be noted that the patent does not always give its holder the right to unconditionally use the invention. This is due to the increasing complexity of patented solutions and the growing mutual dependence of technical solutions protected by various patents. As a consequence, the holder of a patent for the improvement of a certain device cannot sell the design of an upgraded version of this device if he does not obtain a licence for using an earlier patent protecting the older version of this device.

33 On the basis of this legal decision, the US Patent Office granted the following patent: A. M. Chakrabarty, Microorganisms Having Multiple Compatible Degradative Energy-Generating Plasmids and Preparation Thereof, US Patent #4,259,444A issued on March 31, 1981.


According to the IPL Act, the Patent Office should publish the filed application immediately after the expiry of eighteen months from the date of priority, since the disclosure of the invention is a sine qua non condition of patenting it. In the case of multiple co-inventors, the invention cannot be patented if at least one of them opposes its disclosure.

An inventor is entitled to remuneration for the exploitation of his invention by an economic entity if such entity enjoys the right to exploit it or the right to the corresponding patent. When the invention has been made as part of the inventor's duties resulting from the employment contract, the employer becomes the owner of the patent. Even then, however, the inventor is entitled to remuneration from the employer if the latter is using the invention. If not agreed otherwise by the inventor and his employer, the amount of remuneration is determined in due proportion to the profits of the employer, taking into account the circumstances in which the invention has been made, in particular the extent to which the inventor has been supported by the employer in making the invention, as well as the scope of the inventor's duties determined by the employment contract.

18.3.3 Patenting procedure

An inventor, interested in obtaining legal protection for his invention, has to submit the corresponding patent application to the appropriate patent office – the Patent Office of the Republic of Poland if he is interested in the protection on the territory of Poland. It should contain:
– the definition of the subject-matter;
– a general introduction, explaining the technoscientific foundations of the invention and its practical significance;
– a description of the invention disclosing its nature and technical details, including guidance on its practical feasibility;
– one or more claims, defining the scope of the patent;
– an abstract.

The first three elements of the application play a key role in the procedure for checking whether the invention meets the patentability criteria (referred to in Subsection 18.3.1). According to the IPL Act, they should disclose the invention in a manner sufficiently clear and complete for it to be implemented by a person skilled in the art; in particular, the description should indicate the background art known to the applicant. They should also present the invention in a detailed manner, describe the figures (if provided), and indicate – using examples – the ways of carrying out or exploiting the invention claimed. The patent claims should be entirely supported by the description of the invention, and they should define – in a clear and concise manner – the scope of the requested protection by indicating the


technical features of the invention. Each claim should be presented in a single sentence only or in an equivalent of a sentence. The claims are legally interpreted in light of the information provided in the description of the invention. So, if the latter is not exhaustive enough as to the possible embodiments of the invention, some solutions – although covered by the claims – may remain beyond the scope of legal protection.

A patent application, received by the Patent Office, is subject to substantive and legal scrutiny lasting 6–18 months; if it meets the criteria of patentability, then it is published in full. During the next 6 months, interested parties may submit objections if they find that the patent application pertains to an already patented invention, or that the proposed invention "cannot work" for some technoscientific reasons. If, after considering those objections, the Patent Office grants the patent, it informs all other patent offices around the world about this fact. Within the next 12 months, the patent holder (or his legal successor) may submit the same application to the patent office of any country being a signatory of the 1883 Paris Convention for the Protection of Industrial Property34, using the so-called pre-emptive rights; anyone who submits the same or a very similar invention at that time will not obtain a patent.

Until 1978, the only way to obtain patents for the same invention in several countries was to make a separate application in each of those countries – in the language of each country and according to the rules valid in each country. In 1978, the Patent Cooperation Treaty (PCT), signed in 1970, came into force; it has set up a system that allows patent protection to be granted in any group of countries, from among more than 150 signatories of this treaty, on the basis of an application written in a single language and submitted to a single patent office. This application, in addition to standard substantive elements, contains a list of countries in which the applicant wishes to obtain protection. It should be noted, however, that the PCT has only simplified the application phase of the patenting procedure, while the granting of patents itself has remained the exclusive domain of national patent offices.

18.3.4 Limits of inventor's rights

According to the IPL Act, the following are not considered acts of infringement of a patent:
– the exploitation of an invention concerning means of transport or their parts or accessories, temporarily located on the territory of the Republic of Poland, or concerning articles which are in transit through its territory;

34 Its text is available on the website of the World Intellectual Property Organization located at http://www.wipo.int/treaties/en/text.jsp?file_id=288514 [2018-04-21].


– the use of an invention for national purposes, to a necessary extent, without the exclusive right, where it has been found indispensable to prevent or eliminate a state of emergency relating to vital interests of the country, in particular to security or public order;
– the use of an invention for research and teaching purposes;
– the exploitation of an invention to a necessary extent, for the purpose of performing the acts required under the provisions of law for obtaining registration or authorisation, being (due to the intended use thereof) requisite for certain products (e.g. pharmaceutical products) to be allowed to be put on the market;
– the extemporaneous preparation of a medicine in a pharmacy on a physician's prescription.

Moreover, any person who, on the date according to which the priority for the grant of a patent is determined, has exploited an invention in good faith, may continue to exploit it in his enterprise free of payment to the extent to which he had previously exploited it.

It should be noted that the use of a patented invention without a licence in scientific research may entail legal problems when the results of that research are commercialised – even if the patented invention has not been directly applied in a market product, but its development would not have been possible without the research in which this invention was used.

Example 18.5: Pursuant to the 2002 decision of the US Court of Appeals for the Federal Circuit, Duke University violated John Madey's patent on a laser by using this device in scientific research without his permission, because the results of this research were subsequently used for commercial purposes35.

Following the international agreements, the Patent Office of the Republic of Poland may grant authorisation to exploit a patented invention of another person (a compulsory licence), in the following situations: – It is necessary to prevent or eliminate a state of national emergency caused by the need to defend the borders against military aggression, to maintain public order, to protect human life and health, or to protect natural environment. – The patent holder has abused his rights (in particular, by preventing the invention from being exploited by a third party) when such exploitation is necessary for the purpose of meeting home market demands, and is dictated by public interest considerations, and consumers are supplied with the product in insufficient quantity or of inadequate quality or at excessively high prices.

35 A. E. Shamoo, D. B. Resnik, Responsible Conduct of Research, 2015 (3rd edition), p. 176.


– The patent holder – enjoying the right of priority of an earlier application (an earlier patent) – prevents, by refusing to conclude a licence contract, the meeting of home market demands through the exploitation of the patented invention (the dependent patent) whose exploitation would encroach upon the earlier patent; in such case, the holder of the earlier patent may demand that an authorisation be given to him for the exploitation of the invention that is the subject-matter of the dependent patent. The compulsory licence is granted in the latter case conditionally upon ascertainment that the exploitation of the invention that is the subject-matter of the dependent patent, where both inventions concern the same subject-matter, involves an important technical advance of considerable economic significance. The owner of a patent protecting the invention, used on the basis of a compulsory licence in order to prevent the threat to important state interests, is entitled to remuneration from the state budget in the amount corresponding to the market value of the licence. It should be noted that there is no compulsory licensing in the US patent system: the patent owner may, therefore, block the practical application of the invention by refusing to grant any licence for the use of this patent36.

18.3.5 Justification of patent system

In addition to the philosophical arguments for the protection of IP, mentioned in Section 18.1, the justification of the patent system usually refers to the economic benefits derived from this system by inventors, by manufacturers of the products in which the inventions are applied, and by buyers of those products.

The patent system is beneficial for inventors in various ways: they obtain additional remuneration, often much higher (per unit of working time) than their remuneration resulting from the employment contract; their motivation to make and disclose further inventions is strengthened; they receive an incentive to start their own entrepreneurial activity based on a patent package, or to enter into a joint entrepreneurial activity with such a package; and they get an opportunity to improve their professional status. Moreover, the patent system ensures that inventors, like other members of society, have unlimited access to information about inventions patented all over the world, which is essential for the effectiveness of their own creative activity.

The most important benefits for entrepreneurs are the following: they obtain exclusive rights to inventions being of key importance for their enterprises and (through a proper licencing policy) to entire technologies, the capital of their

36 D. B. Resnik, The Price of Truth – How Money Affects the Norms of Science, 2007, p. 137.


enterprises is increasing due to the market value of IP comprising a package of patents, they get access to information about technologies developed by their competitors, and they obtain an incentive tool useful in managing the most qualified staff of the enterprise, as well as in managing technologies. There are, moreover, commonly recognised social benefits resulting from the existence of the patent system, viz.: accelerated dissemination of innovations resulting from the disclosure of inventions, stimulation of the research-and-development activity of enterprises and research institutions and enhanced implementation of new technoscientific achievements in broadly understood social practice. The patent system may encourage an enterprise to invest in research-anddevelopment projects because it prevents the competitors from the unlimited use of the results of those projects. It is believed that without such a system the total expenditures on research and development would be smaller and the progress would be slower. It is also believed that the patent system facilitates the penetration of innovations in society because the details of new technologies are quickly available to potential inventors of their further improvements, and after 20 years (when patent protection is over) inventions can be freely used by society.

18.4 Critical analysis of legal protection of economic author's and inventor's rights

Illegal acts are classified by jurisprudence into two categories: those which are formally illegal but not morally wrong (called in Latin mala prohibita37), and those bad in themselves (called in Latin mala in se38). The former are usually forbidden mainly in the interest of maintaining safety and order, whereas the latter are forbidden also because they endanger the moral order of society. Copyright infringements are usually considered to represent the first category39. The arguments against legal protection of economic author's and inventor's rights, overviewed in this section, are provided to support this stance. Some of those arguments reveal weak points of various justifications of this protection, while others reveal the negative practical consequences of its

37 cf. the Wikipedia article "Malum prohibitum" available at https://en.wikipedia.org/wiki/Malum_prohibitum [2018-07-01].
38 cf. the Wikipedia article "Malum in se" available at https://en.wikipedia.org/wiki/Malum_in_se [2018-07-01].
39 R. Anderson, "Is Copyright Piracy Morally Wrong or Merely Illegal? The Malum Prohibitum/Malum in Se Conundrum", Scholarly Kitchen, April 30, 2018, https://scholarlykitchen.sspnet.org/2018/04/30/copyright-piracy-morally-wrong-merely-illegal-malum-prohibitum-malum-se-conundrum/ [2018-06-20].


functioning. More detailed treatment of this subject can be found in the relevant literature40.

18.4.1 General philosophical argumentation

IP is treated by the advocates of the concept of natural law as an element of natural rights: products of the human mind deserve the same protection as material property because they result from human work, and humans are the owners of their work. According to John Locke, someone who wrote a book should be its owner because he put his work into it41. However, attempts to apply this criterion to other situations face important difficulties.

Example 18.6: Someone who has put a lot of work into developing an invention will not receive a patent if this invention has already been patented by someone else; in turn, the author of a four-line poem, whose creation took him 3 minutes, is the holder of full copyright to this poem. On the one hand, both a professor, who has worked 10 hours on a certain invention, and his Ph.D. student, who has spent 100 hours on the development of this invention, may be equally entitled to the ownership of the patent protecting this invention; on the other hand, a technician who has performed full-time routine laboratory work for 6 months may not be42.

In the situations where the above exemplified doubts do not appear, the legal protection of IP may be viewed as a reward for the productive work of a creator. Even if it seems to be fair that the latter is rewarded for making available the results of his work to society, it is difficult to accept the benefits of his heirs who did not contribute to this work in any way. Even if the sculptor’s right to possess the material carrier of his work, i.e. a marble block is undisputable, it would be difficult to logically derive from this premise his exclusive right to the idea he has embodied in this block, e.g. an abstractly understood portrait of Socrates, carved in this block. The problem does not disappear when the tentative justification refers to the economic value of the work since the value as such cannot be possessed. Moreover, since the economic value of a good is related to its rarity, the artificial creation of its rarity can increase its economic value43.

40 e.g. B. Martin, "Against Intellectual Property", Philosophy and Social Action, 1995, Vol. 21, No. 3, pp. 7–22; N. S. Kinsella, Against Intellectual Property, Ludwig von Mises Institute, Auburn (USA) 2008 (first published in 2001); W. J. Gordon, "Moral Philosophy, Information Technology, and Copyright – The Grokster Case", [in] Information Technology and Moral Philosophy (Eds. J. van den Hoven, J. Weckert), Cambridge University Press, Cambridge (UK) 2008.
41 D. B. Resnik, The Price of Truth – How Money Affects the Norms of Science, 2007, p. 145.
42 ibid.
43 N. S. Kinsella, Against Intellectual Property, 2008, pp. 36–43.


An author or inventor has the right to own and personally use the intellectual products he has created. He has been able to create them, however, not only through his own effort but also because of some social circumstances beyond his control (e.g. educational opportunities). We are inclined, in most cases, to ignore this social aspect of creativity and to expect the benefits of our individual creativity to be in a reasonable proportion to the workload we have invested and the risk we have undertaken. Each musician or writer, when creating new works, is consciously or unconsciously using the cultural heritage of humanity. Thus, his works contain the achievements of his predecessors, no longer protected by copyright. Does the author of those works have a moral right to exclusively possess them? John Locke's theory of property justifies the right of an individual to own part of the shared resources only if he puts a sufficient amount of his own work into it; some thinkers suggest that this requirement is satisfied by the author of a copyrighted or patented work because someone who makes a derivative work, building on the public domain, gains no rights in the underlying material44.

According to Georg W. F. Hegel, a book should be the property of its author because it is a form of his self-expression. However, consistent adherence to this conviction faces significant difficulties: copyright also protects pieces of literature or music which are automatically generated by a computer, and therefore have little to do with the self-expression of the computer operator45.

The need for legal protection of IP is sometimes justified by the creator's right to privacy. It seems, however, that privacy is protected better by not disclosing information than by owning it. Moreover, the right to privacy cannot protect trade secrets because a company is not a person.

According to the advocates of utilitarianism, legal protection of IP serves the progress of technoscience better than its absence would. It seems that this may be true in the long term, but not necessarily in the short term, because protection of IP conflicts with openness, one of the Mertonian values of science46. According to the utilitarian substantiation of IP, the "end" of encouraging innovation and creativity sufficiently justifies a "means" which is a morally doubtful limitation of people's freedom to use their material property47; moreover, legal protection of IP contributes to the overall increase in the well-being of society even if it brings losses to a part of it. The American lawyer Norman S. Kinsella (*1965) claims that it is not clear whether legal protection of IP causes any change (increase or decrease) in the general well-being of society, just as it is not clear whether this protection is necessary for encouraging creative individuals to innovate. Econometric studies do not clearly

44 W. J. Gordon, “Moral Philosophy, Information Technology, and Copyright – The Grokster Case”, 2008 p. 289. 45 D. B. Resnik, The Price of Truth – How Money Affects the Norms of Science, 2007, p. 145. 46 ibid., p. 148. 47 N. S. Kinsella, Against Intellectual Property, 2008, pp. 19–20.


indicate that the benefits resulting from increased innovation exceed the immense costs of legal protection of IP, in particular the costs of maintaining patents and resolving related conflicts in court. It cannot be excluded that dismantling the patent system could imply an increase in the innovativeness of enterprises because they would spend more money on research-and-development activities48.

It is believed that the protection of IP is necessary to promote new ideas and intensify their generation. However, there is a logical contradiction in promoting new ideas by limiting the freedom of their use. On the other hand, there is no logical necessity to constrain the exchange of ideas to the exchange of ideas possessed by individuals or groups of people. Even more controversial is the protection of IP resulting from research projects financed from public funds: the current solutions seem to discriminate against the actual research sponsor, i.e. the taxpayer49.

The most abstract philosophical argument against IP refers to a theoretical possibility of generating all potential ideas by a supercomputer. This thought experiment enables us to perceive the author of an idea as its discoverer rather than its creator, and to conclude that he should not become its owner, just as the discoverer of a new star does not become its owner. Moreover, when publishing a new idea, we can never be sure, even in the internet age, that nobody has already discovered it.

18.4.2 Argumentation referring to differences between material and intellectual property

The American ethicist Richard T. De George (*1933) remarked that the concept of IP is an oxymoron because ideas cannot be possessed in the usual sense of the concept of possessing50. Due to the fundamental differences between material property and IP (summarised in Table 18.1), ideas (concepts, discoveries, inventions, recipes, computer programs, artworks, etc.) should not be protected in the same way as material objects:
– When an idea is copied, its creator does not lose any material object or right to it; ergo: the theft of ideas, as it is understood with respect to material objects, is impossible.
– A material object, being a carrier of an idea, is protected against theft, but its copying does not violate the property right of its owner.
– The concept of material property is unambiguous and time-invariant, while the concept of IP is dependent on the arbitrary decisions of the legislator.

48 ibid., pp. 21–22.
49 M. Averill, "Intellectual Property", [in] Encyclopedia of Science, Technology, and Ethics (Ed. C. Mitcham), Thomson Gale, Farmington Hills 2005, pp. 1030–1034.
50 R. De George, "Information Technology, Globalization and Ethics", Ethics and Information Technology, 2006, Vol. 8, pp. 29–40.


Table 18.1: Differences between material and intellectual property.

Material property | Intellectual property
Material property refers to rare goods which directly relate to specific activities of people who can control them. | Intellectual property refers to goods which may be multiplied unlimitedly.
The creator of a material object becomes its owner, but his property rights do not apply to other objects of the same kind (created by other people). | The creator of an immaterial object becomes the owner of all its copies (he is claiming, in fact, the right to objects owned by other people).
Theft of material objects is well defined. | Theft of immaterial objects is ill defined.
The existence of material property does not need any intervention of the state (not even its existence). | Intellectual property cannot be sustained without advanced intervention of the state.
The period of legal protection is the same always and everywhere. | The period of protection is arbitrarily defined by the legislator.

– The legal protection of material property is unlimited in time, while IP is protected only for an arbitrarily defined interval of time; it is hard to imagine that this protection could last indefinitely because subsequent generations would then be exposed to growing restrictions on the use of their property.51

Since the copyright system grants the author of a novel the exclusive right to copy every sequence of characters found in any physical embodiment of this novel, the buyer of this physical embodiment becomes the owner of paper and printing ink only, but not of the novel; he has no right to make a copy of it, even using his own property – his own paper and ink. Similarly, the patent system authorises the inventor, i.e. the author of a patented invention, to stop other persons from implementing it, even if those persons are ready to use their own property for this purpose. In this way, copyrights and patent rights, to some extent, transfer the ownership of physical objects from their natural owners to the holders of those rights.52

The distinction between protected and unprotected works is, by necessity, arbitrary: mathematical theorems and philosophical or scientific concepts are not and cannot be protected by patent law because such protection would lead to the paralysis of economic life. An attempted indication of the logical boundary between protected and unprotected works is the distinction between unpatentable discoveries and patentable inventions, based on the observation that a philosophical or scientific discovery, which consists in the formulation of a previously unknown law of nature, has not been created by the discoverer. Unfortunately, this distinction is neither sharp nor rigorous. In fact, it is very difficult to identify any significant difference between the

51 N. S. Kinsella, Against Intellectual Property, 2008, pp. 27–28. 52 ibid., pp. 14–15.


intellectual effort of an inventor who receives for this effort a prize in the form of a patent, and the effort of the creator of a theory, who cannot receive such a prize 53. The basic social function of property rights, in the case of material goods, is to prevent conflicts related to their rarity; those rights can fulfil this function if the ownership boundaries are defined in a clear, impartial and unambiguous way. The extension of this reasoning on ideas is problematic because they are not rare: a single idea can be used by an unlimited number of users at the same time. Since there is no economic rarity, there is no source of conflict, and there is no justification for introducing exclusivity because it can create the state of artificial rarity of the ideas under consideration .54 There are attempts to justify the legal protection of IP, based on its interpretation in terms of a contract between the creator and the users of a work. They are, however, problematic because this contract binds only the contracting parties, while laws defining protection of IP apply to all citizens of a given state. Even if it could be assumed that purchasing a book is equivalent to the conclusion of a contract between the copyright holder of this book and its buyer, this would not imply contractual obligations towards third parties55. If, for example, the wife of the recipient of a book of poems would read and remember one of them, then she could publish the poem under her name without violating the contract concluded by her husband. The adherents of the contractual justification of the legal protection of IP argue that people have the right to be paid for the “fruits of their work” provided they have previously concluded an appropriate contract with (potential) consumers of these “fruits”. Unfortunately, the persons, who infringe copyright by copying a protected work, have never signed such a contract with its author; the latter, if he has sold his work, bears the risk of loss implied by the dissemination of its copies by unauthorised persons .56

18.4.3 Critical analysis of copyright

Since copyright does not protect the contents of scientific works, more and more scholars abstain from disclosing their own ideas and results of scientific research before publishing them. This is because someone who publishes ideas acquired in a private conversation, without indicating their origin, is not violating the law – he is "only" betraying the values of the scientific community. Even with respect to the form

53 ibid., pp. 23–25.
54 ibid., pp. 28–32.
55 ibid., pp. 45–47.
56 W. J. Gordon, "Moral Philosophy, Information Technology, and Copyright – The Grokster Case", 2008, p. 285.


of expression, the protection of a scientific work may be problematic when the corresponding copyright is held by the editor, not by the author of this work. The institutional plagiarism is a copyright-related problem which applies not only to scientific works. It consists in attribution of a work created by an employee (or a group of employees) to a representative of the employer, where the term employee is used to designate a person producing a creative work on the basis of a contract with an institution (or a person) called employer. Example 18.7: High-ranking politicians, as a rule, do not disclose the names of the experts who prepare their public speeches – even if those speeches appear in press. The exchange of benefits between politicians and de facto authors of their speeches, usually guaranteed by appropriate contracts, effectively reduces the likelihood of litigation about copyright infringement. The practice of delivering presentations prepared by a whole research-and-development group, under a single name of the head of this group, is another example of institutional plagiarism.

The defence of the existing copyright system is strongly motivated by the financial profits of the authors, publishers and distributors of copyright-protected works, as well as by the state’s income from the taxation of their activities. Those profits are usually justified by the necessity to support, encourage and promote creative activities. It has turned out, however, that the share of the authors in the total benefits is sometimes dramatically small, e.g. the share of music creators usually does not exceed 10%, and the major part of benefits goes to the intermediaries who have nothing to do with artistic creativity.

18.4.4 Critical analysis of patent system

Patents are meant to serve, and they sometimes do serve, the promotion of innovation. There have been, however, numerous examples of using them to block innovation.

Example 18.8: For more than a hundred years, the AT&T Corporation bought telecommunications-related patents to secure a monopoly on the telephone market. In the first half of the twentieth century, this practice delayed the uptake of radio in the USA; in the second half, the development of mobile telephony. A similar policy of the American conglomerate General Electric, implemented in the field of fluorescent lighting, delayed the application of this lighting technology. The patents concerning transgenic forms of plant species, such as soy or cotton, blocked for 20 years the studies carried out by research centres which had no patents, and forced the Third World population to buy licences for the processing of these plants, even though they had been used by their ancestors for centuries57.

The costs of patent protection, contrary to widespread opinions, can be very high.

57 In the USA, a gene sequence can be patented if its isolation requires the use of agents not found in nature.


Example 18.9: Today it is an exception rather than the rule – as it used to be in the past – that an inventor completes all the formalities related to patenting by himself: he is usually supported by a patent attorney58. The costs of patenting are, therefore, significantly higher than the total amount of fees charged by a patent office. In the USA, those costs amount to USD 15,000–25,000 per patent, and in the European Union to EUR 20,000–40,000; in both cases, they depend on the complexity of the invention and the number of claims. Official fees for maintaining a single patent for 20 years in the USA amount to no less than USD 10,000, while in the European Union they depend on the number of countries in which protection is to be provided, and so may be significantly higher.

The costs of obtaining and maintaining a patent are usually included in the price of a product, whose manufacturing requires the use of the patent, and finally covered by the buyers of this product. This is, however, only one component of the total costs of maintaining the patent system to be covered by society; there are still costs of administering the system, losses entailed by the non-use of patented inventions, the costs of parallel research, resulting from non-disclosure of inventions submitted for protection for at least 18 months, and the costs of research aimed at “circumventing” patents. Legal disputes around patents may also be very expensive. Example 18.10: The University of California San Francisco (UCSF) spent USD 20 million on a 1990– 1999 litigation with the biotechnology corporation Genentech, Inc. which developed the drug Protropin used for treating dwarfism. The UCSF accused Genentech of infringing the patent for a synthetic growth hormone, granted to the UCSF in 1982. Although the infringement was not definitely ascertained, Genentech agreed to pay USD 200 million to settle the dispute 59.

In the Netherlands, the doubts concerning the economic effects of patent protection appeared already in the nineteenth century: consequently, this protection was suspended there for several decades. On the other hand, a review of the patent system in Canada led in 1971 to the conclusion that this system is beneficial, first of all, for foreign (especially American) entrepreneurs because it is enabling them to control the import of goods to the Canadian market, while its stimulating impact on the development of the Canadian industry is quite problematic60. The market value of patents may conflict with Mertonian values of science. Having prospects for obtaining a patent of high market value and being afraid of unfair competition, many researchers delay the publication of advanced research results, or publish them in an incomplete form.

58 For the definition of this profession in various countries, cf. the Wikipedia article "Patent Attorney" available at https://en.wikipedia.org/wiki/Patent_attorney [2018-04-27].
59 M. Barinaga, "Genentech, UC Settle Suit for $200 Million", Science, 1999, Vol. 286, No. 5445, p. 1655.
60 Report on Intellectual and Industrial Property, Economic Council of Canada, Information Canada, Ottawa, January 1971.


Example 18.11: In 1989, the American electrochemist Stanley Pons (*1943) and the British electrochemist Martin Fleischmann (1927–2012), falsely convinced that they had succeeded in producing nuclear fusion at low temperatures, announced their achievement at a press conference, not in a peer-reviewed journal. They were afraid that the details published in a regular research article could be used by an experienced reader for quickly preparing a successful patent application.61

A patent protecting an invention gives its holder a negative right to eliminate competitors, even if they are able to independently make the same invention. When the competitors are eliminated, a monopolist manufacturer of the products, in which this patent is applied, may augment profits by increasing prices and decreasing the quality or diversity of those products. In the case of complex products, the corresponding know-how protected by patents may be fragmented, and therefore not suitable for effective industrial use. The patent protection of software (which is possible only in the USA) seems to inhibit rather than stimulate innovations. Every significant change in a patented program, especially an innovative change, requires additional payment for extending the patent according to the continuation-in-part formula62. The boundaries between what is patentable and what is not are blurry; legal decisions in specific cases are often contradicting common sense.

Example 18.12: In 1981, the US Supreme Court ruled that mathematical formulae and computational algorithms are not patentable, whereas computer programs are patentable because they are human products that serve useful operations on a production line or in a laboratory; this decision is astonishing because the transformation of mathematical formulae and computational algorithms into corresponding computer programs does not require much creativity; in many cases, it can be done automatically63.

One can patent only a “practical” application of a new idea, not the idea itself. This is probably one of the reasons explaining the decreasing interest of research-and-development (or even academic) institutions in theoretical (cognitive) projects, as well as decreasing funding of such projects. The amendments, introduced by the US Congress to the patent law in the years 1980–1986, opened the door to the commercialisation of research results obtained within the projects financed from federal funds; since then, the interest of American

61 N. Barber (Ed.), Encyclopedia of Ethics in Science and Technology, 2002, p. 242.
62 For the definition of this formula, cf. the Wikipedia article "Continuing patent application" available at https://en.wikipedia.org/wiki/Continuing_patent_application#Continuation-in-part [2018-04-27].
63 cf. D. B. Resnik, The Price of Truth – How Money Affects the Norms of Science, 2007, p. 150.


universities in patenting and licensing patents has been steadily growing64. It is not obvious, however, that such a change in the proportion between basic and applied research is, in the longer run, favourable for the functioning of civilised society; after all, it has been noted on various occasions that "there is nothing more practical than a good theory".

Many patents are obtained and maintained by industrial entities with the sole purpose of blocking their competitors and maintaining the market for their own products, which means enormous social costs without any innovative effect. According to the defenders of the existing patent system, the income from patent licensing stimulates individual creativity, but the industrial reality shows that most patents do not bring substantial economic benefits to the inventors since the patents belong to their employers.

18.5 Future of legal protection of intellectual property

The growing criticism of the existing legal protection of IP systematically reveals its shortcomings and its maladjustment to the needs of the knowledge society. A change of the system of protection seems to be unavoidable; in fact, it has already started. It includes not only accelerated harmonisation of legal solutions on a global scale but also modification of certain paradigms this system is based upon. Desirable directions of change are indicated by numerous organisations and social movements that are critical of the currently binding rules of legal protection of IP.

The Free Software movement has the longest tradition; it has been operating since 1985 around the organisation called the Free Software Foundation, whose aim is to extend the freedom to use, copy, modify and distribute computer programs, and to defend – at the same time – the rights of their creators. A characteristic feature of the programme of this movement is ethical (deontological) argumentation referring to freedom as a basic human right65. According to the Free Software movement, the only "free" software is that distributed under a General Public License (GPL), which guarantees its user the following rights:
– the freedom to run the program for any purpose (freedom 0);
– the freedom to access the source code of the program and adapt it to one's needs (freedom 1);
– the freedom to further distribute the program (freedom 2);

64 A. E. Shamoo, D. B. Resnik, Responsible Conduct of Research, 2015 (3rd edition), p. 178. 65 P. Ball, “Free Software”, [in] Encyclopedia of Science, Technology, and Ethics (Ed. C. Mitcham), Thomson Gale, Farmington Hills 2005, pp. 793–796.


– the freedom to access the source code of the program, to improve it and to distribute after improvement (freedom 3).66 A holder of any GPL licence is obliged to apply the same licensing conditions when further distributing the received program or its version resulting from modification (according to the definition of freedom 1 or freedom 3). Paradoxically, the legal enforceability of GPL licences is guaranteed by copyright. Thus, nobody can claim ownership rights to a program distributed on the basis of such a licence; nobody can otherwise limit the freedom of its further distribution. Against a licensee who has made such an attempt – e.g., refused to allow a licensed program to be modified (according to the definition of freedom 1 or freedom 3) – the licensor can make claims based on copyright. Some programs, distributed under a GPL licence, are very popular not only because of their free-of-charge availability but also because of their utility (functionality, reliability, performance).67 Example 18.13: Linux is a family of operating systems built around the Linux kernel first released in 1991 by the Finnish-American software engineer Linus Torvalds (*1969). This kernel, distributed under a GPL licence, has been used for the development of many operating systems – highly valued by IT professionals – implemented in supercomputers, personal computers, microcomputers and mobile phones.
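To make the above terms more tangible, the following minimal sketch shows how such a licence is typically signalled in practice: the GPL's own guidelines recommend placing a short notice at the top of every source file of the program. The file name, author, year and program are hypothetical placeholders, and the sketch is written in Python only for concreteness – the notice looks the same in any language.

    # spam_filter.py -- a hypothetical program distributed under the GNU GPL.
    #
    # Copyright (C) 2019  Jane Doe
    #
    # This program is free software: you can redistribute it and/or modify
    # it under the terms of the GNU General Public License as published by
    # the Free Software Foundation, either version 3 of the License, or
    # (at your option) any later version.
    #
    # This program is distributed in the hope that it will be useful,
    # but WITHOUT ANY WARRANTY; without even the implied warranty of
    # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
    # GNU General Public License for more details.
    #
    # You should have received a copy of the GNU General Public License
    # along with this program. If not, see <https://www.gnu.org/licenses/>.

    def classify(message: str) -> bool:
        # A placeholder for the program's actual functionality; what matters
        # here is that anyone receiving this file may run, study, modify and
        # redistribute it (freedoms 0-3), provided the notice above is kept.
        return "buy now" in message.lower()

Anyone who redistributes the file, with or without modifications, has to keep it under the same terms – which is precisely the mechanism, enforced paradoxically by copyright itself, described above.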

The Open Source movement, initiated in 1998, is also opting for greater freedom of software circulation and use. Its approach is, however, more pragmatic than that of the Free Software movement since it does not object to the following actions which are unacceptable for the Free Software movement: – further licensing of a program under terms and conditions different from those of the licence under which it had been originally made available, in particular – granting paid licences for programs obtained free of charge; – using a program, made available under a Free Software licence, for the development of commercial programs.68 The Open Access movement has been operating since 1990; its most important programme assumptions have been defined in several declarations and statements69. These assumptions include, first and foremost, making research results publicly available in such a way as to enable everyone to copy, read, search and cite them, 66 S. Chopra, S. Dexter, Decoding Liberation – The Promise of Free and Open Source Software, Routledge, New York – London 2008, p. 39. 67 ibid., p. 46. 68 H. T. Tavani (Ed.), Ethical Issues in an Age of Information and Communication Technology, Wiley & Sons, Hoboken (USA) 2007, Section 8.7.2. 69 cf. the Open Access Directory, available at http://oad.simmons.edu/oadwiki/Main_Page [201804-18].



as well as their author to freely dispose of them. The Open Access movement promotes open licences for this purpose and free access to electronic resources of scientific information. A free licence for using a work is granted by the licensors in such a way that the licensee is entitled to freely use and process this work for any purpose, to reproduce and distribute it free of charge or for remuneration, and to prepare and manage the derivative works. Although not all the doubts about the legality of this method of sharing works protected by copyright have been removed, the number of peer-reviewed scientific journals, operating under Open Access principles, is growing steadily. The usual rules of their operation are the following: – The author (or his parent institution or his sponsor) covers the publication costs, and the text is made available on the internet. – Neither readers nor their parent institutions have to pay for the access to an article, a report or a book. – Everyone can use those works (for non-commercial purposes), i.e. to read, print and copy them, as well as to send them to others. – The author partially retains his copyright, which is secured by certain additional requirements addressed to the recipients. From an ethical point of view, the Open Access system has two important advantages: it is a way of promoting openness in science and a way of creating an opportunity for poorer scientific institutions (e.g. in developing countries) to follow the latest achievements of world science.

Example 18.14: The private organisation Public Library of Science (PLoS) is a publisher of seven open-access journals in the field of biology and medicine70. Numerous renowned scientific journals – operating in the same field, e.g. New England Journal of Medicine71 – use the Open Access system selectively: they provide, free of charge, electronic versions of several articles per issue.

The number of open-access journals started to grow rapidly ca. 15 years ago when profit-oriented publishers realised that running such journals may be an opportunity for earning money on science. In 2010, the term predatory publishing was coined to label low-quality, amateurish and unethical academic publishing done for profit. Taking into account the harmfulness of this practice to scientific community, it is important to be able to identify predatory publishers and to avoid publishing in their journals. A comprehensive paper presented at the librarians’ conference in 201772

70 cf. the website of PLoS, located at https://www.plos.org/publications [2018-04-30]. 71 cf. the website of New England Journal of Medicine, located at http://www.nejm.org/ [2018-04-30]. 72 M. Berger, “Everything You Ever Wanted to Know About Predatory Publishing but Were Afraid to Ask”, Proc. ACRL Conference of American Library Association (Baltimore, USA, March 22–25, 2017), pp. 206–217.


contains a very informative catalogue of characteristics of such journals, which may be used for their identification. Here are the most indicative among them: – spam emails sent to potential authors, written with fawning language, lacking connection to the recipient’s discipline and specialty; – the promises of fast peer review and fast publication; – the lack of focus in subject-matter or subject-matter extremely broad; – the lack of transparency about author fees; – the list of editors who are not editors because their names appear without their knowledge or involvement; – the editorial board lacking academic affiliations of its members; – a very high number of papers per issue and lack of transparency about peer review; – the journal name very similar to the name of an existing renowned journal; – impossible revision or retraction of manuscripts; – contradictory or missing information about location of the editorial office; – missing, stolen or faked identifiers such as DOI or ISSN; – false and fake bibliometric indicators; – false claims of indexing and inclusion in databases. The following two examples are provided to illustrate two aspects of harmfulness of predatory publishing. Example 18.15: The author of the 2013 paper “Who’s Afraid of Peer Review?”73 describes his investigation of peer review among fee-charging open-access journals. He submitted fake scientific manuscripts to 304 of them. Although the manuscripts were designed with such grave and obvious scientific flaws that they should have been rejected immediately by the editors and peer reviewers, 60% of the journals accepted them.

Example 18.16: Many predatory publishers, hoping to cash in, aggressively and indiscriminately recruit academics to build legitimate-looking editorial boards of their open-access journals. The authors of the 2017 paper “Predatory Journals Recruit Fake Editor”74 submitted a fake application for an editor position to 360 journals, a mix of legitimate titles and suspected predators; 48 of them accepted despite the fact that the name of the candidate was Dr. Anna O. Szust (which in spoken Polish means Dr. Anna Fraudster), and her CV informed about fake scientific degrees and credited her with spoof book chapters.

The main institutional advocate and populariser of the Open Access in technoscience is the American non-profit organisation Creative Commons (founded

73 J. Bohannon, “Who’s Afraid of Peer Review?” 2013. 74 P. Sorokowski, E. Kulczycki, A. Sorokowska, K. Pisanski, “Predatory Journals Recruit Fake Editor”, Nature, March 2017, Vol. 543, pp. 481–483.


in 2001) whose mission is characterised on its website in the following way: “From lawyers and academics to musicians and artists, we’re lighting up a global commons of openness and collaboration, sharing content and community around the world”. Creative Commons provides free, easy-to-use copyright licences to make a simple and standardised way to give the public permission to share and use various kinds of creative works. Those licences do not replace copyright, but they are based upon it. They refer to the principle “some rights reserved” rather than “all rights reserved”, and the copyright owner himself determines which rights are reserved; he has a choice of seven typical licensing options which differ in the combination of the following four conditions: – Licensees may copy, distribute, display and perform the work, as well as make derivative works and remixes based on it, only if they give the author or licensor the credits in the manner specified by these. – Licensees may distribute derivative works only under a licence not more restrictive than the licence that governs the original work. – Licensees may copy, distribute, display and perform the work, as well as make derivative works and remixes based on it, only for non-commercial purposes. – Licensees may copy, distribute, display and perform only verbatim copies of the work, not derivative works and remixes based on it. Thus, all the licences, defined by Creative Commons, grant the basic right to distribute the copyrighted work worldwide for non-commercial purposes and without modification. An interactive tool for choosing the adequate licensing option is provided on the Creative Commons website.75 Two factors – the digitisation of musical, visual and literary works, and the widespread access to the internet – enormously increase the number of people who can copy such works and, consequently, reduce the costs of their availability. At the same time, however, the internet increases the risk of illegal copying, especially by means of tools such as the Napster or Grokster systems. It is, therefore, difficult to assess what is the balance of the creators’ benefits resulting from the increase in the total value of their works and the creators’ losses due to illegal copying of those works76.

75 cf. https://creativecommons.org/ [2018-04-30]. 76 cf. W. J. Gordon, “Moral Philosophy, Information Technology, and Copyright – The Grokster Case”, 2008, p. 277.
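The four conditions listed above are commonly abbreviated BY (attribution), SA (share-alike), NC (non-commercial) and ND (no derivatives). The short Python sketch below merely enumerates their admissible combinations under the assumption that attribution is always required and that SA and ND exclude each other (ND forbids the very derivative works to which SA would apply); this yields six attribution-based licences which, together with the CC0 public-domain dedication – assumed here to be the seventh option mentioned above – shows how the licence suite is built from the conditions.

```python
from itertools import product

# BY (attribution) is assumed to be always required; SA and ND are treated as
# mutually exclusive, because ND forbids the derivative works that SA would
# govern.  The abbreviations follow common Creative Commons usage and are not
# taken from the text above.
def licence_suite():
    options = []
    for sa, nc, nd in product([False, True], repeat=3):
        if sa and nd:                      # incompatible conditions
            continue
        name = "CC BY"
        if nc:
            name += "-NC"
        if sa:
            name += "-SA"
        if nd:
            name += "-ND"
        options.append(name)
    return sorted(options)

print(licence_suite())
# ['CC BY', 'CC BY-NC', 'CC BY-NC-ND', 'CC BY-NC-SA', 'CC BY-ND', 'CC BY-SA']
```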



Example 18.17: The system Napster, created by Shawn Fanning and Sean Parker in 1999, allowed its users to search, purchase and download audio files. A huge database of such files (mainly containing music) was created as a result of the exchange of files by its users. The Napster servers kept only data about users and their files. The role of the operators of those servers was limited to "traffic control", i.e. to facilitation of mutual communication among the Napster users; the exchange of the files was performed directly among the users' computers without any involvement of the operators. However, the Napster file-sharing service did not last long due to the lack of control over the transfer of copyrighted material across its network. Napster's operations were soon detected by the Recording Industry Association of America, which filed a lawsuit against it for the unauthorised distribution of copyrighted material. After a long court battle, the Association eventually obtained an injunction from the courts which forced Napster to definitively shut down its network in 2001. However, after a series of acquisitions and other business transformations, Napster reappeared in 2017 as a provider of music on demand.77

Example 18.18: In 2001, the privately owned software company Grokster Ltd. launched a file-sharing system that seemed to be less vulnerable to legal allegations of copyright infringement than Napster, as not only the exchange of files but also the communication between its users bypassed the main server. Such an arrangement immunised Grokster which – being the service provider – neither directly infringed copyright nor could be considered a helper, since it remained ignorant of the information on files exchanged by system users. Due to the fact that copying took place between the computers of two users without any mediation of the system server, it was virtually impossible to control this operation and to introduce any payment for it; moreover, the service provider could not be aware of any infringement and so could not be held responsible for it78. However, in 2005 the US Supreme Court found a way to make Grokster deactivate the system: ruling in a lawsuit filed by MGM Studios Inc., it decided that the vast majority of files exchanged by the Grokster system were copied illegally, and therefore the 1984 precedent79, relating to the equipment for copying and distribution of recorded music, does not apply because such equipment is assumed to be used in a manner consistent with copyright.80
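The architectural difference on which the two lawsuits turned can be illustrated with a toy sketch. Everything below – the peer names, the single shared file, the flooding search – is an invented, drastically simplified model of the two designs described above, not a description of the actual Napster or Grokster protocols.

```python
# Napster-style architecture: a central index maps file names to the peers
# that hold them, so the operator "sees" every query even though the files
# themselves travel directly between peers.
central_index = {"song.mp3": ["peer_A", "peer_C"]}

def napster_lookup(filename):
    return central_index.get(filename, [])

# Grokster-style architecture: there is no central index; a query is passed
# from neighbour to neighbour, so no single operator observes the traffic.
neighbours = {"peer_A": ["peer_B"], "peer_B": ["peer_C"], "peer_C": []}
shared_files = {"peer_C": {"song.mp3"}}

def flooding_lookup(start, filename, visited=None):
    visited = set() if visited is None else visited
    if start in visited:
        return []
    visited.add(start)
    hits = [start] if filename in shared_files.get(start, set()) else []
    for neighbour in neighbours.get(start, []):
        hits += flooding_lookup(neighbour, filename, visited)
    return hits

print(napster_lookup("song.mp3"))             # the operator can log this query
print(flooding_lookup("peer_A", "song.mp3"))  # only the peers involved see it
```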

The operation of the Napster and Grokster systems affected mainly the artistic business, while the operation of the pirate website Sci-Hub is directly related to the functioning of technoscientific communities and publishing business.

77 Its start page is located at https://us.napster.com/ [2018-04-30]. 78 W. J. Gordon, “Moral Philosophy, Information Technology, and Copyright – The Grokster Case”, 2008, p. 281. 79 The case of Sony Corp. of America vs. Universal City Studios, Inc. where it had to be resolved whether it is legal to forbid the use of technology merely because it could result in copyright infringement; cf. https://en.wikipedia.org/wiki/Sony_Corp._of_America_v._Universal_City_Studios, _Inc. [2018-05-01]. 80 cf. H. T. Tavani (Ed.), Ethical Issues in an Age of Information and Communication Technology, 2007, pp. 232–234.



Example 18.19: Sci-Hub is "the first pirate website in the world to provide mass and public access to tens of millions of research papers"81. The Sci-Hub library contains ca. 70 million papers which have been acquired, most probably, from digital academic libraries through their legitimate users. It is operated by the neuroscientist Alexandra Elbakyan who created it in 2011 as a 22-year-old graduate student in Kazakhstan. Over the 6-month sample period, from September 2015 to February 2016, Sci-Hub served up 28 million documents from all regions of the world, in particular: ca. 2.6 million download requests came from Iran, ca. 2.3 million from China, ca. 1.9 million from India, ca. 0.9 million from Russia and 0.7 million from the USA82. The latter number shows that many users having access to research papers through their academic libraries turn to Sci-Hub for convenience rather than necessity. For obvious business reasons, commercial publishers strongly oppose Sci-Hub: in 2015, Elsevier filed a lawsuit against Sci-Hub, in Elsevier et al. vs. Sci-Hub et al., at the US District Court for the Southern District of New York; in June 2017, the court awarded Elsevier USD 15 million in damages for copyright infringement by Sci-Hub in a default judgment. Alexandra Elbakyan argues that commercial academic publishers exploit students, scientists and society in general: they do not generate the scientific results reported in their journals, yet they reap exorbitant financial returns. She denies the legitimacy of laws that regard scientific knowledge as private property. She cites Article 27 of the Universal Declaration of Human Rights: "Everyone has the right freely to participate in the cultural life of the community, to enjoy the arts and to share in scientific advancement and its benefits." According to Ted W. Lockhart, it is debatable whether "so general a principle can justify so specific an enterprise"83.

In response to the increase in the global trade of counterfeit goods and pirated copyright-protected works, industrial concerns – whose business depends on copyright, patents and trademarks – have made several attempts to tighten legal regulations concerning the protection of IP, both at the national and international levels. The best known among them was an almost successful action aimed at establishing international standards for IP rights enforcement by means of the Anti-Counterfeiting Trade Agreement (ACTA)84. In 2011–2012, this agreement was signed by 31 countries, including the USA and 22 EU countries, but only the Japanese Parliament ratified it. The parliaments of other signatories refrained from ratification under the pressure of citizens' protests against ACTA, perceived as a danger to such fundamental rights as freedom of expression and privacy. The worldwide discussion that followed those protests included an evaluation of the hypothetical benefits and losses that could be implied by tightening legal regulations concerning the protection of IP. That discussion has significantly contributed to a better understanding of the

81 as announced on the Sci-Hub website located at https://sci-hub.nu/ [2018-06-14]. 82 J. Bohannon, “Who’s Downloading Pirated Papers? Everyone”, Science, 2016, Vol. 352, No. 6285, pp. 508–512. 83 T. Lockhart, “Sci-Hub: Stealing Intellectual Property or Ensuring Fairer Access?”, SIAM News, January 17, 2017, https://sinews.siam.org/Details-Page/sci-hub-stealing-intellectual-property-or-en suring-fairer-access [2018-06-14]. 84 The details may be found in the Wikipedia article “Anti-Counterfeiting Trade Agreement” available at https://en.wikipedia.org/wiki/Anti-Counterfeiting_Trade_Agreement [2018-05-01].



necessity to take into account the interests of various individual and collective "stakeholders" (creators, consumers, business, industry, government, social services, etc.), the total costs of implementation of tightened regulations – including the material costs of pursuing infringements – and the moral consequences of limited home and computer privacy. The system of IP protection must be reformed, but the search for new solutions should be informed and guided by the right balance of public and private benefits, and of short-term and long-term benefits. From this point of view, radical solutions – such as complete elimination of the legal protection of IP and introduction of regular remuneration for creators or regular prices for their works85 – seem doomed to failure.
The system of legal protection of IP is a product of Western culture which highly values individual freedom and individual creativity. The traditions of the Far East (China, Japan and Korea) have always been more oriented towards social values, such as collective solidarity and obedience. In those traditions, copying masters' works was considered the best way to learn from the masters and to express reverence for them. For those reasons, China's process of joining the global system of IP protection was very slow, and it was eventually accelerated by drastic political and legal solutions86. It seems, however, that the economic success of the Far-Eastern "tigers" should become the subject of deeper analyses aimed at identifying the tradition-related factors that contributed to this success. The conclusions of such analyses, in turn, could prove to be an inspiring premise for the reform of the global system of legal protection of IP.

85 A. D. Moore, “Personality-Based, Rule-Utilitarian, and Lockean Justifications of Intellectual Property”, 2008, sections 5.3.2 and 5.3.3. 86 R. M. Davison, “Professional Ethics in Information Systems: A Personal Perspective”, Communications of AIS, 2000, Vol. 3, No. 8, pp. 1–33.

19 Ethical issues implied by information technologies
Information technology (IT) will be understood in this chapter as "the technology involving the development, maintenance and use of computer systems, software and networks for the processing and distribution of data"1. Hardware and software products of IT are inseparable elements of the technoscientific research infrastructure. Computers and computer networks are used to process technoscientific and formal data (information), to control experiments and process their results, as well as to edit reports and publications, while telecommunication means enable the exchange of those data and organisation of research processes, conferences and remote experiments. These are important reasons behind inclusion of selected elements of IT ethics in this book devoted to methodological and ethical aspects of technoscientific research. After introductory considerations in Section 19.1, the following issues will be covered in the consecutive sections: an outline of ethical issues related to the use of IT means by modern society (Section 19.2), general characterisation of basic problems related to their use in technoscience (Section 19.3) and a more detailed overview of ethical issues related to the use of the internet (Section 19.4).

19.1 Information technology in the age of globalisation
Information technology is a product of technoscientific progress, which in turn decisively contributes to the acceleration of this progress. Since wide access to IT means is an important competitive advantage of rich countries having high GDP per capita, it can deepen the socio-economic divide between the richest and poorest countries of the world2. On the other hand, however, IT development is an opportunity for the latter to solve many previously unsolvable problems since it entails creation of a huge number of relatively low-cost jobs. India is a country which has taken advantage of this opportunity in the most spectacular way: over the last two decades many IT-based companies – providing programming, accounting and logistic services – have been established there3. They often work for large international corporations, but they employ economists and IT engineers educated in India. For many Indian Ph.D. holders, emigration ceased to be the only viable

1 the definition from the online version of Merriam-Webster Dictionary available at https://www. merriam-webster.com/dictionary/information technology [2018-05-02]. 2 J. van den Hoven, E. Rooksby, “Distributive Justice and the Value of Information – A (Broadly) Rawlsian Approach”, 2008, pp. 376–377. 3 R. De George, “Information Technology, Globalization and Ethics”, 2006. https://doi.org/10.1515/9783110584066-019



career option for undertaking scientific work since now they can participate in remote IT-based research. In the long run, wide access to information will become a civilisational opportunity not only for India, but also for some other less-prosperous countries.
Already 10 years ago, the authors of the book Distributive Justice and the Value of Information argued that information carriers (including electronic databases such as repositories of scientific literature) belong to primary goods as defined by John B. Rawls (cf. Subsection 12.2.7), and therefore:
– free access to that information, which is essential for rational life planning, should be considered a fundamental freedom in the sense of his first principle of justice;
– the chances of getting this information, like the chances of getting education, should be allocated in accordance with his second principle of justice.4
This way of thinking coincides with the postulates of the Open Source and Open Access movements (cf. Subsection 18.4.3). The global impact of those movements, as well as the social consequences of the spread of the internet in such countries as India and China, show how far-reaching the effects of implementing the abovementioned ethical principles in practice can be. Since those effects may be both positive and negative – the same scientific information, e.g. information concerning nanotechnology, can be used both for developing new anti-cancer medicines and for manufacturing new weapons of mass destruction – it is necessary to take measures in advance to prevent the negative consequences of global information opening.
According to the American political theorist Michael Walzer (*1935), the meaning of various goods is relative to their socio-cultural context, and therefore their allocation is subject to different rules in different spheres of life, e.g. medical services are provided according to the needs, political positions are distributed according to available vacancies, and the flow of money is governed by the principles of free-market economy. In all cases, however, a new social good G1 should not be allocated to someone who possesses another good G2, merely because he does possess G2, without regard to the meaning of G15. This principle should apply to the sphere of information: confidential medical data should not be placed on the market, and criminal data should not be used to determine eligibility for medical treatment6. It seems that this principle should apply universally when IT means are used, in particular – for research purposes.
IT means are already used by researchers in all scientific disciplines, and the percentage of those who are not yet "addicted" is systematically decreasing. The global resources of information, including technoscientific information, are growing

4 J. van den Hoven, E. Rooksby, "Distributive Justice and the Value of Information – A (Broadly) Rawlsian Approach", 2008, pp. 380–386. 5 ibid., p. 386. 6 ibid., p. 394.



exponentially, and the tools for collecting, storing and processing this information are developing rapidly. Those tools are becoming sine qua non elements of research infrastructures, and therefore the need for ethical reflection on moral problems associated with their use, both in personal and professional life, is more and more frequently realised by various research communities7. The moral problems, related to the use of IT means in technoscientific practice, mirror some moral problems contemporary society is exposed to because of the ubiquity of those means in everyday life. An insightful analysis of the latter is provided in the 2018 book Evil Online8, devoted mainly to the ubiquitous use of the internet. Its authors suggest that a growing sense of moral confusion (“moral fog”, as they call it), generated by internet-based services, may push otherwise ordinary people towards evildoing, and endanger such moral values as autonomy, intimacy, trust and privacy.

19.2 Overview of ethical issues related to use of information technology
IT-related ethical issues may be considered in the framework of information ethics, as proposed by the Italian philosopher Luciano Floridi (*1964)9, or of technoethics, as postulated by the Argentinian philosopher of science Mario A. Bunge (*1919)10. Three main chapters of technoethics, directly related to IT, have developed over the last 70 years, viz. computer ethics; internet ethics and cyberethics; and media and communication technoethics. It should be, however, noted that some authors are inclined to consider the concept of cyberethics as equivalent to IT ethics in general11. The interest in IT-related ethical issues may be traced back to the late 1940s when computers started to be used for technical design purposes. In 1950, Norbert Wiener – an American mathematician and philosopher, the father of cybernetics – published a book entitled The human use of human beings12 which is referred to as the first textbook of computer ethics. According to the definition of computer ethics, proposed in

7 An indirect confirmation of this claim, in the form of a large collection of resources and projects devoted to IT ethics, may be found at https://wiki.gpii.net/w/ICT_Ethics [2018-05-03]. 8 D. Cocking, J. van den Hoven, Evil Online, Wiley & Sons, Oxford (UK) 2018. 9 cf. L. Floridi, “Information Ethics – Its Nature and Scope”, [in] Information Technology and Moral Philosophy (Eds. J. van den Hoven, J. Weckert), Cambridge University Press, Cambridge (UK) 2008, pp. 40–65; L. Floridi, “Foundations of Information Ethics”, [in] The Handbook of Information and Computer Ethics (Eds. K. E. Himma, H. T. Tavani), Willey & Sons, Hoboken (USA) 2008. 10 R. Luppicini, “The Emerging Field of Technoethics”, [in] Handbook of Research on Technoethics (Eds. R. Adell, R. Luppicini), IGI Global – Information Science Reference, Hershey – New York 2009. 11 e.g. H. T. Tavani (Ed.), Ethical Issues in an Age of Information and Communication Technology, 2007, p. 3. 12 N. Wiener, The Human use of Human Beings: Cybernetics and Society, Free Association Books, London 1989 (first published in 1950).



1989 by the American philosopher Terrell W. Bynum (*1941), its main task is the recognition and analysis of the impact of IT on the implementation of social values such as health, property, freedom, democracy, knowledge, privacy and security13. Computer ethics deals with morally significant situations in which computer technology plays an essential role, and at the same time there is uncertainty as to how to proceed or even how to understand those situations14. The most frequently addressed issues of computer ethics are: protection of data confidentiality, protection of intellectual property, fair access to information and reliability of operations performed on information. Both individual computers and computer networks (local, regional and global) can be used to commit common crimes, to violate privacy or to conduct espionage. They can cause damage to individuals or institutions if the software does not work as expected, because of a programmer's error or his deliberate action; they can perform sophisticated logical and mathematical operations whose correctness cannot be checked by anybody. Computer ethics is to determine to what extent and in what situations we can trust computer systems, and what restrictions should be imposed on their use. IT means can be used for performing illegal operations which constitute serious ethical violations; these are, however, neither specific to the scientific community nor very frequent among its members, and therefore they will not be discussed here.
The spectrum of IT-related ethical issues is systematically expanding with the development of IT, and it is increasingly identified with the spectrum of ethical issues related to the infosphere. This is because the social impact of the IT development is becoming more and more recognised, and – consequently – ethical demands regarding the "ecological" management of the infosphere are more and more explicitly articulated. The relevant public debates focus on such issues as:
– the IT-induced evolution of the social context in which ethical problems, unrelated to technology, appear and are solved;
– the increase in the objective scope and subjective sense of anonymity of operations performed on information;
– the fuzzification of responsibility and the increased sense of impunity for unethical processing of information.
The diversity of IT-related ethical issues and the inevitability of their appearance in everyday research practice are illustrated with the following series of examples. The common source of this diversity and inevitability is the extraordinary logical plasticity of computers15 and computer networks as tools for solving theoretical and practical problems in very diverse areas and in various social contexts.

13 T. W. Bynum, “The Historical Roots of Information and Computer Ethics”, [in] The Cambridge Handbook of Information and Computer Ethics (Ed. L. Floridi), Cambridge University Press, Cambridge (UK) 2010, pp. 20–38. 14 J. H. Moor, “What is Computer Ethics?”, Metaphilosophy, 1985, Vol. 16, No. 4, pp. 266–275. 15 ibid.



Example 19.1: Already in the 1980s, computers were used during the US presidential election for counting votes and forecasting the result of the election. Already at that time, the question was asked whether it was acceptable that some voters knew the forecasts (disseminated by radio and television) before voting16. This may seem an innocent exercise after the scandalous involvement of the UK-based political consulting company Cambridge Analytica in the US presidential election in 2016. In the election campaign, this company used personal information about ca. 50 million Facebook users, acquired by an external researcher who claimed to be collecting it for academic purposes. In May 2018, under the pressure of public criticism, the company had to stop its operations and declare insolvency17.

Example 19.2: The use of IT tools for shopping or other financial operations often creates a comforting impression of anonymity. It is deceptive because those tools allow not only a current preview of those operations, but also their recording and further use, e.g. for commercial purposes. The so-called cookies are implanted by the providers of various services in the computers of internet users to identify their commercial (and non-commercial) preferences and expose them to personalised advertising18. Many providers make the availability (or efficacy) of their services dependent on the access to some personal data of the users.
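The mechanism behind such tracking is simple enough to be shown in a few lines. The sketch below, based only on the Python standard library, is a minimal illustration of how a server-side cookie tags a visitor so that later requests can be linked to the same person; the identifier and its lifetime are, of course, invented.

```python
from http.cookies import SimpleCookie

# Server side: create a cookie that tags the visitor with an identifier.
cookie = SimpleCookie()
cookie["visitor_id"] = "a1b2c3d4"                       # hypothetical identifier
cookie["visitor_id"]["max-age"] = 60 * 60 * 24 * 365    # persist for a year
cookie["visitor_id"]["path"] = "/"

# The header sent to the browser together with the HTTP response:
print(cookie.output())
# Set-Cookie: visitor_id=a1b2c3d4; Max-Age=31536000; Path=/

# On every subsequent request, the browser sends the identifier back, which
# is what makes long-term profiling of the user's preferences possible.
returned = SimpleCookie("visitor_id=a1b2c3d4")
print(returned["visitor_id"].value)                     # a1b2c3d4
```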

Example 19.3: Paradoxically, electronic mail does not guarantee the same degree of correspondence confidentiality as traditional mail did: at least the administrators of e-mail servers have easy access to exchanged information, and their "friends" can also get it. Moreover, the disclosure of this information happens very often by accident, e.g., when by negligence a copy of a new letter goes to the addressee of an old letter which was used to edit the new one (the function "Edit message as new").

Example 19.4: Personal computers are equipped with a mechanism enabling detection of their identification number after connecting to the internet. The intention behind the introduction of this mechanism in the 1990s was twofold: to improve the cooperation between Intel hardware and Microsoft software, and to prevent computer crime. Both companies, however, were severely criticised when the existence of this mechanism was disclosed in 1999 since it can be easily used for violating the privacy of the computer's owner.19

The Dutch ethicist Jeroen van den Hoven (*1957) has proposed to distinguish four categories of IT-related ethical problems. Some of these problems are non-specific because IT is neither necessary nor sufficient for their appearance (e.g. commercial fraud, theft of computer hardware or environmental hazards caused by the storage of toxic waste resulting from computer production). There are, however, three other

16 ibid. 17 O. Solon, O. Laugbland, “Cambridge Analytica Closing after Facebook Data Harvesting Scandal”, The Guardian, May 2, 2018, https://www.theguardian.com/uk-news/2018/may/02/cambridgeanalytica-closing-down-after-facebook-row-reports-say [2018-05-03]. 18 N. Barber (Ed.), Encyclopedia of Ethics in Science and Technology, 2002, pp. 71–72. 19 ibid., pp. 96–99.



categories of problems that had been unknown before the age of IT – the problems for whose occurrence the existence of IT is:
– necessary but not sufficient (e.g. generation and dissemination of computer viruses, illegal use of electronic databases, epidemic addiction to internet-based social networking);
– not necessary but sufficient (e.g. unequal access to electronic databases, irresponsible use of expert systems);
– necessary and sufficient (e.g. possibility of constructing an artificial human brain).
Till the end of the twentieth century, the development of IT had been dominated by positivistic optimism, and therefore IT-generated ethical issues had been approached from mainly technocratic positions: a more advanced technology had been considered the best remedy for the negative side effects of IT. According to the technocratic approach, the main task of IT ethics is:
– the analysis of the impact of IT on the life of society, based on the appropriate application of ethical theories;
– the identification of IT-related ethical problems, such as injustice or violation of personal rights;
– the formulation of the rules of conduct that minimise the risk of occurrence of new IT-related ethical problems.
Since the beginning of the twenty-first century, the understanding of IT-related ethical issues has been evolving towards a more humanistic, phenomenological approach. Without negating the assumption that IT tools are indispensable for solving some socio-economic problems, it refers to the observation that IT and society are shaping each other, and therefore the main task of IT ethics is:
– identification of attitudes and intentions that caused the creation of IT products as "rational entities";
– disclosure of assumptions, values and interests behind the creation and use of IT products;
– ethical analysis of the contents of "black boxes" delivered as IT products to society;
– formulation and analysis of IT-related ethical issues in the context of both technical and social problems.
Already in the 1980s, James H. Moor stressed the importance of the "invisibility factor" in the analysis of moral problems related to the use of computers20. His observations can now be generalised to the use of other IT means, and thus help us explain the sense of opening "black boxes", being the essence of the phenomenological approach:

20 J. H. Moor, “What is Computer Ethics?”, 1985.



– "invisible" are abuses committed by means of computers connected to the internet;
– "invisible" is the processing of data, performed by computer networks and supercomputers according to algorithms which are so complex that nobody is able to guarantee their correctness and reliability;
– "invisible" are, finally, the values underlying these algorithms.
It should be noted that an "invisible" value may result from a methodological or technical error committed by a programmer, or it may be introduced by him intentionally.
Example 19.5: An American system for computerised airline reservations had a built-in bias for the flights of the AA airline: sometimes an AA flight was suggested by the system even if it was not the best flight available. This bias in the reservation service contributed to the financial difficulties of AA's competitor, the BA airline, which went into bankruptcy.21

Example 19.6: According to Catherine H. O'Neil, an American mathematician and financial expert, we live in the age of omnipotent decision-making algorithms. Allegedly, their use should promote social fairness because they are executed by machines, not by humans. There is, however, growing evidence that they may reinforce discrimination. The explanation of this perverse effect may be found in the black-box mathematical models underlying those algorithms: the models embedded in the algorithms used by teachers for scoring students, by banks for assessing the solvency of their customers, by employers for evaluating their workers, by insurance companies for rating the health condition of their potential clients, etc. Those models are assumed to describe reality, but in practice they also shape it because they govern human lives.22
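How a discriminatory value judgement can hide inside an apparently neutral model is easy to illustrate. The toy Python scoring function below is purely hypothetical – its field names, weights and postcode set are invented for this illustration – but it shows a typical mechanism: a proxy variable (here, the postcode) can reproduce the very correlations that the model never mentions explicitly.

```python
# A toy credit-scoring model; all weights, field names and postcodes are invented.
HIGH_RISK_POSTCODES = {"00-001", "00-002"}    # hypothetical "risky" areas

def credit_score(applicant: dict) -> int:
    score = 500
    score += 2 * applicant["years_employed"]
    if applicant["postcode"] in HIGH_RISK_POSTCODES:
        score -= 150                          # the "invisible" value judgement
    return score

# Two applicants identical in every respect except their address:
print(credit_score({"years_employed": 10, "postcode": "00-001"}))  # 370
print(credit_score({"years_employed": 10, "postcode": "02-500"}))  # 520

# If postcodes correlate with ethnicity or income, the model reproduces that
# correlation even though no protected attribute appears in it explicitly.
```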

The phenomenological assumption about mutual influence of IT and society is justified by the observation that the development of IT is influenced by the needs of society, and the needs of society are modulated by the development of IT. The widespread use of IT services can affect not only the everyday behaviour of individuals and communities, but also their emotional states, attitudes and even psychosomatic condition23. Example 19.7: Numerous studies have confirmed a daily observation that scenes of violence – ubiquitous in the films disseminated via the internet and television, as well as in computer games – have a profound impact on the formation of adolescent imagination and on the incidence of aggressive, antisocial and other irrational behaviours among teenagers24.

As demonstrated by the examples provided in Chapter 18, the global nature of the IT-based exploitation of literary, musical and multimedia works makes

21 ibid. 22 C. H. O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, Crown Pub., New York 2016. 23 D. Cocking, J. van den Hoven, Evil Online, 2018, Chapter 3. 24 N. Barber (Ed.), Encyclopedia of Ethics in Science and Technology, 2002, pp. 66–67.



legal protection of intellectual property more and more problematic. The internet-based commerce (e-commerce) is an important element of the new context in which this protection is to be guaranteed.

19.3 Information technology in research practice
IT has revolutionised the research methodology and infrastructure of all scientific disciplines. It has enabled more effective implementation of canonical research activities as well as new ways of acting which were unthinkable in the past.
IT has, first of all, revolutionised bibliographic and editorial work. Such operations as systematic review of journals, searching for articles covering a selected subject-matter or copying library documents no longer require a personal visit to the library – they can be performed remotely using a personal computer connected to the network, without the use of any writing and copying tools. Interlibrary loans can be completed in a few seconds. Electronic documents, collected by a researcher during his whole professional career, can be stored in a personal database, supported by suitable reference management software25 enabling him to cite and reference them in his reports and publications, edited using a word processor. The latter, in turn, can be coupled with multi-language dictionaries, with general and profiled encyclopaedias, with editors of mathematical and chemical formulae, as well as with editors of drawings and photographs. Local software and data resources – including libraries, dictionaries, translators and encyclopaedias – can be supported by global resources available through the internet.
IT has revolutionised the practice of scientific communication, not only by the digitisation of research documentation, but also by the immediate and simultaneous communication of many researchers located in different corners of the world, enabling the functioning of virtual research teams and virtual conferencing. E-mail and instant messaging systems26 have become basic tools for bi- and multilateral communication, while file transfer systems27 – tools for selective exchange of large volumes of data.
To the greatest extent, IT has revolutionised experimental research: computers and computer networks have enabled far-reaching automation of experiments, which is increasing their efficiency and scale. They have enabled implementation

25 For an overview, cf. the Wikipedia article “Comparison of Reference Management Software” available at https://en.wikipedia.org/wiki/Comparison_of_reference_management_software [201805-04]. 26 For an overview, cf. the Wikipedia article “Instant Messaging” available at https://en.wikipedia. org/wiki/Instant_messaging [2018-08-04]. 27 For an overview, cf. the Wikipedia article “Managed File Transfer” available at https://en.wikipe dia.org/wiki/Managed_file_transfer [2018-08-04].



of complex mathematical methods for planning experiments, controlling their course and processing their results. Moreover, they have enabled remote and distributed experimentation with the involvement of research infrastructure belonging to various institutions all over the world.
All these new technical opportunities, created by IT means, have significantly influenced the style of individual research work and the way research is organised – not only at the institutional level, but also at the national and international levels. It should be noted that not all changes implied by these new opportunities are indisputably positive. An average researcher of today, having easy access to scientific articles, books and reports in a digitised form, collects a much larger volume of documents per year than his counterpart 25 years ago, but he spends roughly the same time on reading them28: the information extracted by him from the majority of them is limited to the abstracts or conclusions. On the other hand, an average researcher of today is obliged – to survive in science – to publish more articles per year than his counterpart 25 years ago. Using IT means, he is able to produce new documents much faster, but not necessarily in a more careful way as far as their logical and linguistic aspects are concerned. Using automated experimental setups, he can quickly repeat and execute experiments aimed at the acquisition of much larger volumes of data, but he is much less selective and critical when planning those experiments. He can use increasingly sophisticated methods of statistical processing of experimental data, implemented in many commercially available software packages, without deeper understanding of those methods. During scientific conferences, organised in a traditional or virtual form, the researchers establish more numerous professional contacts than in the past, but those contacts are usually more superficial and more oriented towards formal cooperation – towards effective fundraising rather than deep substantive cooperation driven by research curiosity.
In general, the use of IT means may potentially accelerate various research operations related to information processing. Unfortunately, this technical opportunity is frequently used in a perverse way by the research management, viz. institutions responsible for the organisation and financing of research at the national and international levels are inclined to increase their requirements with respect to the documentation prepared by ordinary researchers on various occasions: applications for funding research projects, professional promotions, reporting during and after completion of projects, etc. Consequently, an average researcher of today has to spend much more time on bureaucratic drudgery than his counterpart 25 years ago.
The above-outlined changes in the style of researchers' everyday work prompt a conclusion that the side-effects of the dissemination of IT means in research

28 cf. C. Tenopir, R. Mays, L. Wu, “Journal Article Growth and Reading Patterns”, New Review of Information Networking, 2011, Vol. 16, No. 1, pp. 4–22; P. C. Baveye, “Learned Publishing: Who Still Has Time to Read?”, Learned Publishing, 2014, Vol. 27, No. 1, pp. 48–51.



practice require some pragmatic and ethical reflection. The superficiality of many substantive research activities, implied by their acceleration, and the rush resulting from an overgrowth of formal activities – these are probably the most important side-effects which increase the risk of negligence errors in research practice, such as carelessness in experimental operations or the lack of logical precision and excessive redundancy in the reporting of research results. Many other side effects, whose occurrence is not limited to research practice, have already been pointed out in Section 19.1 and in Section 19.2; here, some additional issues, being more specific to this practice, are outlined.
By the end of the twentieth century, academic institutions lost their monopoly on conducting scientific research and providing tertiary education; as a consequence, they also began to lose their dominant influence on the dissemination of technoscientific knowledge. The pervasive use of IT tools has accelerated this process: internet resources of information (including dictionaries, encyclopaedias and guides) are created by people not necessarily belonging to academic milieus, viz. by non-academic professionals and hobbyists. The question arises, in this situation: what is the nature and scope of academic responsibility for the quality of scientific (or declared as scientific) information available in online resources, i.e. for its veracity, reliability and usability? The most important criteria for assessing the quality of information are those referring to its content and those referring to its source (origin). An expert evaluates a piece of information belonging to his field of expertise using both kinds of criteria, while a layman relies mainly on the reputation of the source of information. The credibility of information usually does not depend on its usefulness, but its usefulness depends on its credibility.29
Example 19.8: Wikipedia is the largest and most popular multilingual online encyclopaedia created by internet users according to the principles of the Open Source movement. Since its initiation on January 15, 2001, the number of articles in all language editions has grown to 48 million30. However, not the quantity but the reliability of the articles has always been a priority objective of the Wikipedia coordinators31. When compared to traditional encyclopaedias, Wikipedia does have some weak points, in particular:
– there is no guarantee that it contains articles on important subjects;
– the articles can be incomplete or unstable in time;
– the authors of articles may fail to cite relevant original sources, thus making it hard to determine the credibility of the contents.

29 cf. A. Vedder, “Responsibilities for Information on the Internet”, 2008, p. 353. 30 On May 1, 2018, the greatest number of articles was recorded in the following language versions of Wikipedia: English (5 631 075 articles), Cebuano (5 382 900 articles), Swedish (3 784 424 articles), German (2 177 095 articles) and French (1 979 797 articles); regular updates may be found at https://en.wikipedia.org/wiki/List_of_Wikipedias [2018-05-01]. 31 for more details, including empirical credibility tests, cf. the Wikipedia article “Reliability of Wikipedia” available at https://en.wikipedia.org/wiki/Reliability_of_Wikipedia [2018-05-05].



So, it is generally recommended that Wikipedia be a starting point in the search for information rather than the primary source of the sought-for information. Despite all that criticism and reservation, it renders invaluable services to the global community of internet users, including professional scientists. It enables them to get, within a few seconds, an answer to a question that arises when they write or edit their own texts or work on other intellectual tasks. Even if the answer is provisional and requires verification using alternative sources, this is an important advance, especially if these alternative sources are indicated.
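The "starting point" role mentioned above can also be played programmatically: Wikipedia exposes a public search interface (the MediaWiki API) that returns candidate articles and their addresses for a given query. The Python sketch below, using only the standard library, is a minimal illustration of such a first-pass search; the query term is arbitrary, and any result obtained this way still calls for verification against the original sources.

```python
import json
import urllib.parse
import urllib.request

def wikipedia_starting_points(query: str, limit: int = 5):
    """Return (title, url) pairs suggested by the MediaWiki 'opensearch' endpoint."""
    params = urllib.parse.urlencode(
        {"action": "opensearch", "search": query, "limit": limit, "format": "json"}
    )
    url = f"https://en.wikipedia.org/w/api.php?{params}"
    with urllib.request.urlopen(url) as response:
        _, titles, _, urls = json.load(response)
    return list(zip(titles, urls))

# A first, provisional orientation – not a substitute for the primary sources.
for title, url in wikipedia_starting_points("predatory publishing"):
    print(title, "->", url)
```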

Expert knowledge is getting more and more available on the internet: not only on the websites of research institutions and in online encyclopaedias, but also on private websites offering medical, psychological or managerial advice. The latter are attracting more and more interest among internet users: in the USA and EU countries, the majority of them are inclined to search the internet for the first advice on their health problems. The internet is becoming a major source of advice regarding health diagnostics, treatment options and disease prevention. A recent survey, conducted by the Pew Research Center, shows that more than half of American adults use the internet to find information about their medical condition; moreover, approximately 80% of physicians had patients who gave them printed health information found on the internet32. So, the risk of false self-diagnosis is considerable: a patient might end up believing that he has a life-threatening condition when it is actually harmless, or he might dismiss a condition as non-threatening when it actually deserves urgent medical attention. This medical example shows the enormous responsibility of experts who share their knowledge on the internet. Before engaging in this kind of practice, they should realise that not only false but also true information may be harmful when it is incorrectly interpreted by a layman or used by a terrorist for fulfilling his criminal plans. It should also be noted that technoscientific information, displayed on the internet, may dramatically influence moral decision-making which depends not only on the principles and values accepted by a decision-maker, but also on his factual knowledge.
There is far-reaching social acquiescence to the internet-based dissemination of business information for marketing and public-relations purposes. This consent is reinforced by false but effective propaganda claiming that commercial advertising is beneficial for society because it is an independent source of income enabling free media to survive. Even if the interests of the media are consistent with the interests of broadly understood society, it is not always the same with the advertisers33. Commercial advertising is especially problematic when it is performed by

32 Kim Ryul, Kim Han‐Joon, Jeon Beomseok, “The Good, the Bad, and the Ugly of Medical Information on the Internet”, Movement Disorders, 2018, Vol. 33, No. 5, pp. 754–757. 33 cf. K. Merten, “Public Relations – die Lizenz zu Täuschen?”, PR Journal, November 2004, http:// www.pr-journal.de/redaktion-archiv/6396-klaus-merten-public-relations-die-lizenz-zu-tchen.html [2010-07-07]; K. Merten, S. Risse, “Ethik der PR: Ethik oder PR für PR”, September 2009, http:// www.pr-journal.de/images/stories/downloads/merten%20ethik.pr.09.03.2009.pdf [2018-07-07].



the media expected to be highly reliable – e.g., by the internet portals of "serious" technoscientific journals or research institutions34.
The internet has been more and more frequently used for collecting data in sociology, psychology and medical sciences, as well as in economic sciences and various branches of IT. The data come from respondents who are themselves the object of research or who are intermediaries in obtaining data from other subjects. In principle, the rules of conduct in this case must be the same as those that apply to direct experimentation on humans. There are, however, new morally significant issues related to the research procedure, e.g. the organiser of the internet-based research should make sure that:
– the identification of data providers is accurate enough to consider the data they provide sufficiently reliable;
– the collected data do not "leak out" and are not used to the detriment of research participants.35
The websites of research institutions, research projects and research teams provide information not only about the subject and methodology of research, but also about research infrastructure and research personnel. Biographical information about the latter is presented more and more often according to marketing schemes, sometimes without the consent of the persons concerned. Closer analysis of such information enables one to identify many morally questionable practices: from a lack of respect for the person36, through verbal manipulations aimed at impressing the reader, to attributing to people deeds they have never done. Self-promotion today is often openly treated as a professional duty, and the ability to promote oneself aggressively – as a virtue. Just as the quality of the language of e-mails departs from the standards of traditional epistolography, so the language of many websites, run by scientific institutions, differs negatively from the language of the documents these institutions produced before the internet age. Unfortunately, this observation quite often applies also to scholars' speeches delivered in the media: they are deficient not only in terms of elementary grammatical and phraseological correctness, but also in terms of honesty and communicativeness.

34 cf. E. H. Spence, “A Universal Model for the Normative Evaluation of Internet Information”, Ethics and Information Technology, 2009, Vol. 11, No. 4, pp. 243–253. 35 S. Flicker, D. Haans, H. Skinner, “Ethical Dilemmas in Research on Internet Communities”, Qualitative Health Research, 2004, Vol. 14, No. 1, pp. 124–134. 36 R. S. Dillon, “Respect for Persons, Identity, and Information Technology”, Ethics and Information Technology, 2009, Vol. 12, No. 1, pp. 17–28.



19.4 Netiquette or internet ethics
The internet, like many spectacular inventions of the twentieth century, has a military origin dating back to the 1960s – to the times when the nuclear threat to the world reached its zenith and became the engine of progress in the field of various security-related technologies dedicated to people, material substance and information. As far as the latter category is concerned, the US Advanced Research Projects Agency (ARPA) developed a system called ARPANET which was the first embodiment of the internet idea. The global system, which we today call the internet, was created in the 1980s by a successful synthesis of the American experience related to the ARPANET system and the European experience related to the CERNET system, developed in the laboratories of CERN.
The internet is a decentralised system of interconnected computer networks37 – a system which is immediate, direct and infinitely expandable in terms of both contents and reach. It is also interactive and "egalitarian" in the sense that anyone who holds the appropriate equipment and has certain technical skills can use it in a passive way (i.e. by downloading information from it) or in an active way (i.e. by uploading information to it). The internet has enhanced the communication skills of individuals and social groups having computers connected to the network or at least having access to such computers. This means that today the wealthy countries and wealthy social groups benefit the most from the economic and cultural advantages provided by the internet; it seems, however, that less prosperous countries, if there are no political obstacles, may benefit even more. The internet can serve people in the responsible use of freedom in various spheres of life (including the political sphere); it can facilitate their education and cultural development; it can be conducive to overcoming the isolation of individuals and social divides38. Paradoxically, however, it can also foster the development of egocentrism and alienation39; it can be used for committing the most serious crimes. So, like any other tool, it can serve both good and bad purposes.
The internet has changed the nature of work in many professions, including the nature of research work; it has created new opportunities for education and self-education; it has simplified many everyday operations, such as completing official procedures, performing banking operations, booking places in restaurants or hotels, booking airline or railway tickets, buying tickets for artistic events, buying

37 Despite the compelling semantic suggestion, the internet should not be identified with World Wide Web (WWW) being one of the internet applications; cf. the Webopedia article “The Difference Between the Internet and World Wide Web” available at https://www.webopedia.com/DidYou Know/Internet/Web_vs_Internet.asp [2018-05-08]. 38 Conceil Pontifical pour les Comminications Sociales, Ethique en internet, Libreria Editrice Vaticana, Città del Vaticano 2002. 39 ibid., p. 16.



books, CDs and other products or even consulting specialists. The internet has significantly influenced the style of social life; it has broadened the spectrum of available forms of entertainment and the functionality of social media. Unfortunately, the latter developments have implied not only advantageous consequences: they have also increased the vulnerability of journalists and other people responsible for those media to ideological and economic pressures. The internet enables a very quick delivery of information to its intended recipient, and therefore – in the age of the omnipresence of economic competition – it is increasingly contributing to the pursuit of sensation and gossip, as well as to the mixing of true information with fake news, advertisements and cheesy entertainment. These are only some of the moral issues internet ethics is supposed to deal with.40
Netiquette41 is a set of rules for using the internet, including elements of etiquette, ethics and law. Although the term was used already in the 1980s, the rules it refers to were first catalogued in the 1990s: by Arlene H. Rinaldi in 199242, and by the Internet Engineering Task Force in 199543. The philosophical foundations of netiquette may be traced back to the communication ethics of Jürgen Habermas44.

19.4.1 Netiquette rules
Netiquette rules are based on the following principle: each user is responsible for his actions on the internet, whose functioning (like the functioning of science) is based on trust. Here are examples of obvious abuses: placing illegal information on the net; sending messages that may entail the destruction of the recipients' work outcomes; distributing so-called chain letters; or using obscene words and phrases in sent messages. Rich collections of detailed rules of netiquette, which appear in the relevant handbooks45 and internet documents46, can be classified according to the following general recommendations:

40 ibid., p. 21. 41 derived from the English noun net combined with the French noun etiquette (which means “little ethics”). 42 A. H. Rinaldi, “The Net User Guidelines and Netiquette”, September 3, 1992, http://www.shen tel.net/general/tiquette.html [2018-05-12]. 43 S. Hambridge, “Netiquette Guidelines of IETF Network Working Group”, October 1995, https:// www.rfc-editor.org/info/rfc1855 [2018-05-12]. 44 e.g. B. C. Stahl, Information Systems – Critical Perspectives, Routledge, London – New York 2008, Chapter 2. 45 e.g. K. Furgang, Netiquette: A Student’s Guide to Digital Etiquette, Rosen Pub., New York 2018. 46 e.g. V. Shea, Netiquette, Albion Books 2004, http://www.albion.com/netiquette/book/index. html [2018-07-07].



NGR1: respect copyright,
NGR2: protect privacy and personal data,
NGR3: protect secrecy of electronic correspondence,
NGR4: do not pollute the infosphere with unnecessary information,
NGR5: take care of net and equipment security,
NGR6: follow technical guidelines,
NGR7: follow editorial guidelines,
NGR8: be polite and tolerant.
All of these recommendations have a significant ethical dimension; the recommendations NGR1–NGR3 are, moreover, supported by law.
The ethical issues related to the NGR1 recommendation are covered in Section 18.2 devoted to copyright. It should be noted that – despite widespread opinions – they apply also to electronic letters. So, when forwarding a received e-mail message, we are obliged to mention the name of its author and refrain from modifying its content. Certain legal risk is also associated with the uncertainty about the status of a work disseminated on the internet: it is always good to thoroughly check (although it can be cumbersome or almost impossible) whether copyright is respected and whether the work is shared free of charge or against payment. If there are any doubts in this regard, it is better to look for an alternative offer.
The legal protection of privacy and personal data, referred to in the NGR2 recommendation, is regulated in various ways in most countries of the world; in the EU countries, e.g., by the local acts based on the relevant EU regulation47. The NGR2 recommendation applies to all parties, directly or indirectly, involved in internet operations. It is not only about respecting the confidentiality of private information received, but also about cautious use and sharing of our own and other persons' e-mail addresses. Private correspondence should not be distributed using third-party mailing lists without their consent; distribution lists compiled for official purposes should not be used for private purposes. Private information should not be stored in a user's directory on the e-mail server because this directory may be easily accessed by unauthorised persons (e.g. server administrators). Far-reaching moderation and caution must be preserved when writing messages about third parties.
The secrecy of correspondence is a fundamental legal principle supported by the constitutions, as is the case in the majority of European countries, or by lower-level legal acts, like in the USA where it is derived through litigation from the Fourth Amendment to the US Constitution. It guarantees that the content of sealed

47 “Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data”, Official Journal of the European Union, 2016, No. L119, pp. 1–88.



letters is never revealed and letters in transit are not opened by government officials or any other third party. The netiquette recommendation NGR3 complements those provisions with a catalogue of good practices that reduce the risk of disclosing someone else's messages. This risk is much higher than in the case of traditional correspondence because of the new facilities, such as tools for exchanging messages within evolving groups of people, storing the entire history of correspondence and forwarding received messages to multiple addressees. These are great inventions enhancing electronic correspondence, but at the same time engendering additional risk of secrecy violation. For this reason, before sending a received private letter to another person, we should ask its author for permission. This does not apply to official letters whose circulation is regulated by office pragmatics, specific to each employer.
The ease of exchanging messages via e-mail makes us receive, every day, numerous unwanted messages, called spam48. The netiquette recommendation NGR4 is to discourage us from contributing to spam circulation – in particular, from using the mailing lists thoughtlessly, from participating in chain correspondence49, from forwarding various commercial offers, pseudo-jokes, alerts and fake news, etc. On the other hand, the recommendation NGR4 is to encourage us to write letters in a concise and topic-focused way, and also to remove from the quotations everything that is not directly related to the topic. For the sake of the purity of the infosphere, it is good to follow the principle "one letter – one topic", and to provide each letter with an informative "subject" well characterising the topic. Although this practice results in an increased number of letters, it at the same time reduces their total volume when multiple letters on the same subject are exchanged; moreover, this practice also makes it easier for the sender and recipient to search for letters in their correspondence archives.
The netiquette recommendation NGR5 is about the necessity to ensure network and hardware security through daily systematic checking, by means of antivirus software, of all new files saved in the computer memory – both files received via the internet and files copied from external media such as CDs or pen drives. The recommendation NGR5 is to dissuade us from exchanging files with other users of computers and networks if we have a reasonable suspicion that our computer has been "infected" by a virus which – for some technical reasons – has not been detected by the installed antivirus software.

48 The term spam is derived from a 1970 sketch of the BBC television comedy series Monty Python's Flying Circus; the sketch is set in a cafe where nearly every item on the menu includes Spam canned luncheon meat; for more details cf. the Wikipedia article "Spamming" available at https://en.wikipedia.org/wiki/Spamming#Etymology [2018-05-12].
49 cf. the Wikipedia article "Chain Letter" available at https://en.wikipedia.org/wiki/Chain_letter [2018-05-13].


The technical guidelines, mentioned in the recommendation NGR6, refer primarily to the settings of the e-mail program and to the way of managing the user's directory on the e-mail server (a minimal sketch of such mailbox housekeeping is given at the end of this subsection). Those settings should be consistent with the administrator's instructions, and the volume of the directory should be maintained at the minimum necessary level. The mail should be checked regularly, and the received letters should be either deleted or copied from the server to the user's computer. The long-term storing of letters on the e-mail server is not only inconsistent with the recommendation NGR2, but also antisocial because it blocks disk space for other users.
In the early documents devoted to netiquette, the recommendation NGR7 referred primarily to various conventions concerning the structure of e-mailed texts and the use of special characters, in particular – uppercase letters and emoticons. Today, the emphasis is put on the quality of the language used – its orthographic, grammatical, stylistic and semantic correctness – to compensate for the degradation which occurred during the first decades of the intensive exploitation of the internet. Linguistic correctness, including abstention from the use of vulgar words and phrases, is also an element of net courtesy referred to by the recommendation NGR8. A signature block50, appended to an electronic letter, is another element of net courtesy and an expression of responsibility for the content and form of this letter. Tolerance, which is also addressed in the recommendation NGR8, is important for the peaceful and productive functioning of the internet, which connects people of various cultures all over the globe. It is very much needed for a charitable interpretation of the controversial statements of others and for care about the quality of our own messages. The practice of tolerance enables us, in particular, to refrain from textual and graphical statements which – on the grounds of another culture, another mentality and especially another religion, i.e. from the perspective of another value system – could be interpreted as an offence, a provocation or simply hostility. The use of the academic network for commercial purposes is inconsistent with many netiquette rules. At the same time, an absolute prohibition on employees using their office e-mail addresses for private purposes, frequently imposed by employers, is debatable.
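As the minimal sketch of the mailbox housekeeping recommended by NGR6 – copying messages to the local computer and removing them from the server – the following Python fragment uses the standard poplib module; the host name, user name and password are placeholders, and in practice the same effect is usually achieved simply by configuring the mail client accordingly.

```python
import poplib
from pathlib import Path

# Placeholder connection data; in practice these come from the user's
# e-mail provider and should never be hard-coded in a shared script.
HOST, USER, PASSWORD = "pop.example.org", "j.kowalski", "secret"

def archive_and_clean_mailbox(target_dir: str = "mail_archive") -> None:
    """Copy every message from the POP3 server to a local file,
    then delete it from the server (cf. recommendation NGR6)."""
    Path(target_dir).mkdir(exist_ok=True)
    mailbox = poplib.POP3_SSL(HOST)
    mailbox.user(USER)
    mailbox.pass_(PASSWORD)
    message_count = len(mailbox.list()[1])
    for number in range(1, message_count + 1):
        _, lines, _ = mailbox.retr(number)       # download the message
        raw = b"\r\n".join(lines)
        (Path(target_dir) / f"message_{number:05d}.eml").write_bytes(raw)
        mailbox.dele(number)                     # mark it for deletion
    mailbox.quit()                               # deletions take effect here

if __name__ == "__main__":
    archive_and_clean_mailbox()
```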

19.4.2 Netiquette versus ethics of journalism

Many creative activities, performed daily on the internet by millions of its users, are of a journalistic nature. These are not only article-type contributions to online magazines and online encyclopaedias, but also posts displayed on discussion fora and entries in personal online diaries called blogs. It seems, therefore, that codes of

50 often abbreviated to “signature”, “sig block”, “sig file” or just “sig”.


journalistic ethics can be at least a source of inspiration for netiquette. Opinions in this respect are, however, divided:
– the minimalists believe that the professional ethics of journalism is simply sufficient,
– the progressivists claim that it should be extended with some specific rules concerning new IT-implied issues,
– the maximalists maintain that it can at most be a starting point for the development of a qualitatively new internet ethics.
In any case, however, the principles of journalistic ethics apply to the para-, quasi- and pseudo-journalistic activities of internet users; hence the motivation for presenting those principles in this chapter. The following are the most important international documents on journalistic ethics:
– Declaration of principles on the conduct of journalists, adopted by the International Federation of Journalists (1954, 1989)51;
– International principles of professional ethics in journalism, developed under the auspices of UNESCO (1983)52;
– the resolution Ethics of journalism, promulgated by the Parliamentary Assembly of the Council of Europe (1993)53.
The above documents have significantly influenced the national codes of journalistic ethics which may be found on the internet54. The following three principles constitute the core of journalistic ethics:
– The main task of journalists is to provide reliable and impartial information.
– Freedom of speech and expression must be accompanied by the journalists' responsibility for their publications in the press, on radio, on television and on the internet.
– The good of readers, listeners and viewers – or the public good in general – should take precedence over the interests of the author, editor, publisher or sender.
The activity of journalists should be oriented towards supporting the good of the human person and the public good, in particular – the legal order. In the language of

51 Declaration of Principles on the Conduct of Journalists, adopted by the 1954 World Congress of the International Federation of Journalists; amended by the 1986 World Congress, http://www.ifj.org/about-ifj/ifj-code-of-principles/ [2018-05-13].
52 International Principles of Professional Ethics in Journalism, UNESCO, 1983, http://ethicnet.uta.fi/international/international_principles_of_professional_ethics_in_journalism [2018-05-13].
53 Ethics of journalism, Parliamentary Assembly of the Council of Europe, Resolution #1003 (1993), http://assembly.coe.int/nw/xml/XRef/Xref-XML2HTML-en.asp?fileid=16414 [2018-05-13].
54 e.g. on the website EthicNet – Journalism ethics located at http://ethicnet.uta.fi/codes_by_country [2018-05-13].


deontological ethics, one can say that the journalists' moral duty is to provide information that is credible (preferably true), reliable (full, clear and accurate) and objective (factual, impartial and independent) – using aesthetic forms of expression symbolising the moral good, in particular – respect for the recipients of information. Any manipulation of information, aimed at the disinformation of its recipient, is therefore a grave violation of journalistic ethics. The manipulation of information consists in its conscious distortion intended to influence the recipient's consciousness and decision-making capabilities, in particular – by providing falsified or fabricated messages, unimportant or insignificant messages, or ambiguous or redundant messages. In the age of mass media, more sophisticated methods of manipulation are also used by journalists, such as:
– attracting the recipient's attention by means of stimuli appealing to the instincts (sex, violence);
– influencing the recipient's subconsciousness, most often by means of images, in order to create a state of tolerance for false messages;
– impressing the recipient with scientific or pseudo-scientific data (e.g. with numerical results of public opinion polls);
– mixing important with entertaining messages.55
A more systematic and more exhaustive overview of manipulation techniques may be found in Wikipedia, starting from the general article "Media Manipulation"56, which directs its readers to more detailed and more specific articles.
The ethical expectations towards journalists can be expressed in terms of virtues which are conducive to the realisation of their moral obligations; five of them – prudence, justice, courage, humility and moderation – seem to be of special importance. Their role in performing the journalistic profession, in a manner consistent with journalistic ethics, may be outlined as follows:
– Prudence is indispensable for the correct identification of problems and for making the right decisions in exceptional situations, as well as for drawing rational conclusions from past experience, fast understanding of current issues and accurate prediction of human behaviour and of the consequences of decisions made.
– Justice is necessary to maintain objectivity and impartiality when reporting about people and facts, to respect the personal goods and rights of every human being, and to engage in defending people against lies and injustice.
– Courage is a sine qua non condition of being able to fight for truth and justice – against lies, injustice and dishonesty; to resist blackmail, corruption proposals

55 cf. W. Chudy, “Mass media w perspektywie prawdomówności i kłamstwa”, [in] Społeczeństwo informatyczne: Szansa czy zagrożenie? (Ed. B. Chyrowicz), Wyd. Towarzystwo Naukowe KUL, Lublin 2003, pp. 71–101. 56 available at https://en.wikipedia.org/wiki/Media_manipulation [2018-05-14].


and other external pressures; to be ready to consciously sacrifice one's own security, material benefits and prestige when facing an ethical dilemma.
– Humility is indispensable for consciously serving the needs of the community; for a realistic assessment of one's own competences and limitations; for being able to refrain from humiliating others (or exalting oneself) in critical situations, and to deliberately resist the temptation of "self-creation".
– Moderation is necessary for being able to rationally balance internal motivations (of an intellectual, volitional and emotional nature) and external driving forces (superiors' orders, social messages, social pressures, etc.).
The diversity of requirements imposed on journalists is a source of inevitable conflicts of values. On the one hand, people (citizens) have the right to inform and to be informed – confirmed by such international normative documents as the Universal Declaration of Human Rights (1948)57 and the International Covenant on Civil and Political Rights (1966)58. On the other hand, however, this right is subject to constraints resulting from various prohibitions regarding interference in private life: infringement of physical, intellectual or mental integrity and freedom, violation of personal goods, harmful interpretation of words and deeds, disclosure of troublesome facts from personal life, spying and stalking, interception of correspondence, insidious use of written or oral statements, and disclosure of information obtained while performing professional duties. On top of that, a journalist is bound by legal regulations concerning the protection of sources (or the confidentiality of sources). Those regulations prohibit authorities, including the courts, from compelling a journalist to reveal the identity of an anonymous source of information, but – at the same time – they forbid the journalist to disclose:
– data enabling the identification of a person who provided information under the condition of confidentiality,
– information received from an informer who provided it under the condition of confidentiality,
– information whose disclosure could expose third parties to danger,
– data enabling the identification of persons involved in legal proceedings (regardless of their role in the proceedings).
The confidentiality obligation may be waived only because of an important state or national matter, the necessity to prevent harm to third parties, the free consent of all interested parties, or the public status of the persons involved.

57 Universal Declaration of Human Rights (Resolution #217), United Nations General Assembly, 2015 (adopted in 1948), http://www.un.org/en/udhrbook/pdf/udhr_booklet_en_web.pdf [2018-05-14].
58 The International Covenant on Civil and Political Rights (Resolution #2200A), United Nations General Assembly, 1976 (adopted in 1966), https://treaties.un.org/doc/publication/unts/volume%20999/volume-999-i-14668-english.pdf [2018-05-14].

20 Concluding remarks

This chapter is devoted to some complementary remarks concerning methodology and ethics of technoscientific research: Section 20.1 contains an outline of the tendencies identified in this area, and Section 20.2 – of the selected forms of education aimed at raising social awareness of its importance.

20.1 Evolution of research methodology and research ethics

The integration of science and technology – as forecast by the twentieth-century European visionaries Günther Anders (1902–1992), Hans Jonas (1903–1993), Martin Heidegger (1889–1976), Jacques Ellul (1912–1994) and Eduardo Nicol (1907–1990) – is progressing. A global system of technoscience is maturing; the differentiation of basic and applied research is becoming more and more problematic, as is the attribution of responsibility for research results which are almost immediately implemented in industrial products. Therefore, the methodology of technoscientific research should take into account – to a significantly greater extent than in the twentieth century – the elements of the engineering art, and the ethics of technoscientific research has to be more closely integrated with engineering ethics; moreover, both the methodology of technoscientific research and research ethics should draw more extensively on the philosophy of science.

Example 20.1: An excellent illustration of the profound and intricate interdependence of research ethics and methodology is provided in the 2016 paper "The natural selection of bad science"1. Its authors corroborate, with both theoretical considerations and numerous practical examples, the observation that "poor research design and data analysis encourage false-positive findings", and that the perennial use of poor research methods is the result of "something more than just misunderstanding", viz. of perverse incentives in the bureaucratic system of science, which favour those methods, leading to the natural selection of bad science. Since publishing is a sine qua non condition of career advancement, some research methods have almost certainly been selected to multiply publications rather than to make discoveries.
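The mechanism described in Example 20.1 – that a combination of low statistical power and selective reporting of "significant" results fills the literature with false positives – can be illustrated by a simple Monte Carlo simulation; the prevalence of true effects, the statistical power and the significance level used below are illustrative assumptions, not values taken from the cited paper.

```python
import random

# Illustrative assumptions: 10% of tested hypotheses are true,
# statistical power is 0.2 (typical of underpowered designs),
# and the significance threshold is 0.05.
PREVALENCE, POWER, ALPHA = 0.10, 0.20, 0.05

def simulate_literature(n_studies: int = 100_000, seed: int = 1) -> float:
    """Return the fraction of 'significant' findings that are false positives."""
    rng = random.Random(seed)
    true_positives = false_positives = 0
    for _ in range(n_studies):
        effect_is_real = rng.random() < PREVALENCE
        if effect_is_real:
            if rng.random() < POWER:        # a real effect is detected
                true_positives += 1
        elif rng.random() < ALPHA:          # a spurious 'discovery'
            false_positives += 1
    return false_positives / (true_positives + false_positives)

if __name__ == "__main__":
    share = simulate_literature()
    print(f"False positives among 'significant' findings: {share:.1%}")
    # With these assumptions roughly 0.045 / (0.045 + 0.02), i.e. about 69%,
    # of significant results are false - illustrating how low power combined
    # with publication pressure degrades the reliability of the literature.
```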

Already in the middle of the twentieth century, the above-mentioned forerunners of thinking about the future of the world warned society against the threats posed by the unwary implementation of new technologies, and pointed out ethical problems resulting from the imbalance between the growing power of technology and the weakening of human responsibility2. Even after great ecological catastrophes caused by reckless industrialisation, global civilisation is predominantly shaped by the

1 P. E. Smaldino, R. McElreath, "The Natural Selection of Bad Science", September 21, 2016.
2 J. E. Linares, Etica y mundo technológico, Fondo de Cultura Económica, Mexico 2008, p. 417.


advocates of the axiological neutrality of technology, who claim that technology provides only tools for solving practical problems; that those tools themselves are neither good nor bad, since they can be used for both good and bad purposes; and that those purposes are neither selected nor defined by experts in technology, but by politicians. According to the advocates of the axiological neutrality of technology, the ethical evaluation of new technologies should be based on pragmatic cost-benefit analysis and short-term economic effects rather than on their social costs, environmental impact and long-term economic effects3. Fortunately, for several decades – slowly but systematically – the hierarchy of the criteria for the evaluation of new technologies has been changing: not only the local but also the global population is perceived as a stakeholder; not only its present but also future generations are taken into account in the cost-benefit analysis; and both short-term and long-term risks, associated with the dissemination of new technologies, are scrutinised with more care and accuracy. There is also a growing understanding of an urgent need for methodological and ethical reflection on technoscience, aimed at finding a justified balance between the good of the individual and the good of the community, as well as between immediate and delayed benefits. When speaking about this positive tendency, one must clearly contrast it with movements of political correctness which, through their campaigns, compromise this tendency rather than strengthen it.
Already in the 1980s, Hans Jonas warned that, for the first time in the history of mankind, we had reached such a state of technological advancement that our survival might be problematic4. According to Jorge E. Linares, the author of the book Etica y mundo technológico, we need a global agreement on the validity of the following safety principles inspired by the traditional values of bioethics: autonomy, justice, beneficence and non-maleficence. These are: the principle of responsibility, the principle of prudence, the principle of autonomy and informed consent, and the principle of distributive and retributive justice5. The principle of responsibility is about individual and collective responsibility for all mankind of today and tomorrow, and for the whole world of nature, especially – for appropriate protective and preventive measures. The principle of prudence is about the necessary precautions which should accompany the introduction of new technologies and of new products of these technologies, as well as about refraining from introducing them when the risk of negative social or ecological consequences, especially large-scale consequences, is too high. The principle of autonomy and informed consent implies a requirement that any technical project be undertaken with the conscious consent of the individuals or communities which are its potential beneficiaries or victims, and that this consent be based on

3 cf. ibid., pp. 411–415. 4 ibid., p. 495. 5 ibid., p. 437.


reliable information (scientific knowledge, credible political or administrative declarations), not on coercion or manipulation. According to the principle of justice, new technologies and their products should not only contribute to the well-being of society, but also promote the equitable distribution of this well-being in society and foster freedom and other human rights; on the one hand, they should reduce existing inequalities (harms and damages), on the other – prevent the emergence of new ones.6
When speaking of the principle of prudence, it is worth recalling that rationally motivated attempts to refrain from using certain techniques have a long history. Already in the twelfth century, Pope Innocent II forbade, under the threat of anathema, the use of the crossbow in fighting between Christians; three centuries later, Leonardo da Vinci decided not to reveal his work on the submarine project, claiming that such a satanic means could not be responsibly given to people who were not mature enough7. Advocates of a technocratic approach to the problems of our civilisation argue that blocking the dissemination of new technologies is not feasible in the globalised world, and that therefore the only remedy for the threats they bring is the generation of new technoscientific ideas. Taking into account that the main problem is the immanent ambivalence and the large scale of the potential effects of new technology, one cannot ignore or reject this argumentation, although the proposed "remedy" is certainly not sufficient.

Example 20.2: Nano-assemblers are micromachines capable of building, modifying or destroying physical and biological objects. On the one hand, they can be used for manufacturing miniature technical objects from secondary raw materials, for producing cheap medicines, for direct microsurgical treatment, as well as for removing toxins from the natural environment and using them in the manufacturing of useful chemicals. On the other hand, however, they can also be used for producing extremely dangerous weapons of mass destruction and for large-scale surveillance of civilians.
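How strongly the verdict of a "pragmatic cost-benefit analysis" depends on the weight given to long-term consequences – the issue raised earlier in this section, and relevant to ambivalent technologies like that of Example 20.2 – can be made tangible with a toy net-present-value calculation; all cash flows and discount rates below are invented for illustration only.

```python
def net_present_value(cash_flows, discount_rate):
    """Discounted sum of yearly cash flows (year 0 = today)."""
    return sum(cf / (1.0 + discount_rate) ** year
               for year, cf in enumerate(cash_flows))

# A hypothetical technology: immediate profits for a decade, followed by
# a large clean-up / health cost borne by future generations in year 50.
profits = [10.0] * 10                       # years 0-9: +10 units per year
delayed_costs = [0.0] * 50 + [-300.0]       # year 50: a single -300 unit cost
cash_flows = [p + c for p, c in zip(profits + [0.0] * 41, delayed_costs)]

for rate in (0.07, 0.01):
    print(f"discount rate {rate:4.0%}:  NPV = {net_present_value(cash_flows, rate):7.1f}")
# At a 7% rate the distant cost almost vanishes and the project looks
# attractive; at 1% the same project is clearly a net loss - the verdict
# depends on how much weight future generations are given.
```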

There is a striking contrast between the increasing obsession with security in everyday (private) life8 and the reckless impetus of the mechanisms of competition in technoscience, which force researchers to accelerate the transfer of new achievements to the sphere of socio-economic practice. Those mechanisms push researchers towards unethical behaviours which consist in releasing premature, ill-conceived or unconfirmed results whose consequences may be very dangerous, not only if these are findings of pharmacology or genetics, but also if these are new algorithms of

6 cf. ibid., pp. 445–478. 7 N. Postman, Technopoly: The Surrender of Culture to Technology, Vintage Books, New York 1993, pp. 23–26. 8 F. Furedi, Culture of Fear Revisited, Continuum Pub., London 2006 (first published in 1997); G. R. Skoll, Globalization of American Fear Culture, Palgrave Macmillan, London 2016.


artificial intelligence. Instead of neutralising the causes of those behaviours, we persistently try to counteract them by codifying good research practices or promulgating stricter legal regulations. Neither of those means can be effective in the long run if the real goal is moral progress and the survival of humanity. The realistic hope lies in the proliferation of the ideas of the Slow Science movement, which originate in a conviction that scientific research should be a steady, systematic and methodical process rather than a panoply of ad hoc actions aimed at finding "quick fixes" to the most urgent problems of society. This movement supports curiosity-driven research and opposes the quantophrenia in the evaluation of research performance. We can read in the manifesto of this movement: "We do need time to think. We do need time to digest. We do need time to misunderstand each other, especially when fostering lost dialogue between humanities and natural sciences"9. Paradoxically, among the godfathers of the Slow Science movement is the founder of bibliometrics and scientometrics, Eugene E. Garfield (1925–2017), who published in 1990 the seminal paper "Fast Science vs. Slow Science. . ."10. However, a mature, multi-aspectual argumentation supporting the programme of this movement may be found only in the 2013 book Une autre science est possible! Manifeste pour un ralentissement des sciences11, authored by the Belgian philosopher of science Isabelle Stengers (*1949). The programme of the Slow Science movement has primarily a methodological dimension, but it also has a tremendous moral impact because "slowing down" in research means more insight, more accuracy and more responsibility.
The technoscientific community is in urgent need of enhanced moral awareness to meet the challenge of the growing responsibility for the possible practical consequences of research outcomes applied in practice. This responsibility is not only potential but also actual when representatives of that community are involved (as scientific experts) in decision-making processes in socio-economic institutions at the local, national or international level. When acting in an extra-scientific socio-economic context, they are under temptation to express their opinions on issues beyond the field of their expertise. This becomes morally problematic if they support, explicitly or implicitly, those opinions with their scientific authority, e.g. by using their scientific degrees and titles.
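The "quantophrenia" mentioned above typically reduces a research record to a handful of citation-based indicators, such as the h-index (the largest h for which an author has h papers cited at least h times each); the sketch below, with invented citation counts for two hypothetical researchers, shows how little of the substance of research such an indicator captures.

```python
def h_index(citation_counts):
    """Largest h such that at least h papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, citations in enumerate(counts, start=1):
        if citations >= rank:
            h = rank
        else:
            break
    return h

# Invented citation records: one "slow and deep" researcher with a single
# landmark paper, one "fast and prolific" researcher with many modest ones;
# the indicator alone cannot tell which contribution matters more.
slow_and_deep = [250, 4, 3]
fast_and_prolific = [12, 11, 10, 9, 9, 8, 8, 7, 7, 6]

print(h_index(slow_and_deep))      # -> 3
print(h_index(fast_and_prolific))  # -> 7
```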

9 Slow science manifesto, Slow Science Academy, Berlin, Germany 2010, http://slow-science.org/ [2018-05-20]. 10 E. Garfield, “Fast Science vs. Slow Science, or Slow and Steady Wins the Race”, The Scientist, 1990, Vol. 18, No. 4, pp. 380–381. 11 the English translation by S. Muecke: I. Stengers, Another Science is Possible: A Manifesto for Slow Science, Polity Press, Cambridge – Medford (UK) 2018.


20.2 Education in research methodology and research ethics

The evolution of research methodology and research ethics in the near future will depend not only on the evolution of technoscience itself but also on the development of various forms of education offered to graduate students and other young researchers. These might be not only lectures, seminars, workshops and conference sessions but also libraries of documents (codes, guides and textbooks) on research methodology and research ethics, more and more often available on the internet.
Over the centuries, the development of science was a product of purely intellectual competition among researchers. During the twentieth century, as discussed in the previous chapters, a significant change occurred: following the institutionalisation and massification of science, as well as its progressing symbiosis with industry and business, the intellectual competition started to be replaced with a struggle for limited resources (jobs, funds, access to infrastructure). This change has become an important factor contributing to the moral degradation of scientific communities. Some attempts to counteract this degradation were made already in the 1970s – first in the USA and then in European countries. They consisted mainly in introducing subjects related to ethics into academic curricula and in the propagation of codes of professional ethics, such as codes of good practice in science, codes of engineering ethics, codes of medical ethics, etc. When browsing through the websites of American universities, it is hard to find one that does not offer its students some form of classes on research ethics. Many of them, especially those involved in research and education in the field of life science or biotechnology, have separate organisational units dealing with ethical issues.

Example 20.3: Stanford University12, one of the best American institutions of higher education, has three centres (Stanford Center for Biomedical Ethics, Stanford Decisions and Ethics Center and McCoy Family Center for Ethics in Society) dealing with various aspects of research ethics and offering related classes to students of the whole university. Through the portals of these centres, one can access rich collections of documents devoted to this subject.

In the curricula offered by European universities, courses related to research methodology and research ethics appear rather seldom, most often in the programmes of medical studies at some British institutions of higher education. Despite bureaucratic pressures, scientific institutions are still trying to function according to the standards worked out by science itself. In order to preserve the capacity for self-regulation, the scientific community needs to react quickly and effectively to violations of those standards. Collections of methodological and ethical recommendations, called codes of research ethics or codes of good academic practice,

12 whose website is located at https://www.stanford.edu/ [2018-05-18].


are adopted by many academic institutions, non-academic research institutions and professional associations to help in this respect.

Example 20.4: The 2006 UNESCO report "Interim analysis of codes of conduct and codes of ethics"13 contains a comparison of 65 code-type documents. Each of them was issued by an entity active in technoscience, with the intention of regulating (as well as inspiring or educating) the behaviour of its own members or of scientists in general. Each of them, moreover, addresses a specific group of professionals associated with a scientific discipline or field of study. Each of them has a normative content (i.e. provides principles, values, norms and rules of conduct). Online collections of academic and professional codes may be found on the websites of numerous institutions of higher education all over the world14.

Typical codes of conduct – adopted recently by research institutions on a wave of political correctness – are usually compilations of wishful-thinking-type declarations and trivial statements imitating legal acts. Some of those codes have been introduced in order to prevent the administrative regulation of certain aspects of professional activities by political authorities. On the whole, they cannot play any role in cases of documented violations of the principles of the functioning of scientific institutions, but they are sometimes used as an excuse for the lack of deeper methodological or ethical reflection in decision-making. The proliferation of "codification" initiatives, observed in recent years, is often a result of a misunderstanding of the fundamental differences between ethics and law:
– Legal regulations are promulgated and enforced by state authorities, must be strictly codified, refer to precisely defined situations and do not necessarily concern moral issues.
– Ethical recommendations refer to morally significant issues, are accepted by individuals or social groups as a result of their free choice, and cannot be strictly codified because they refer to both typical situations and new situations of moral significance.
Various instances of the institutional abuse of codes of conduct, especially codes of professional ethics, provoke a growing scepticism in society about code-type normative documents. Many institutions, including academic institutions, use such codes as a kind of decoration whose only role is to increase their social credibility and prestige. This practice may be viewed as a reproachable form of manipulation of the public image of an institution, but it is much less harmful than the use of codes for the harassment of employees of an institution (or members of an

13 Interim analysis of codes of conduct and codes of ethics, UNESCO Division of Ethics of Science and Technology, 2006, http://unesdoc.unesco.org/images/0014/001473/147335e.pdf [2018-05-18].
14 e.g. on the website of the Illinois Institute of Technology, located at the address http://ethics.iit.edu/cseplibrary [2018-05-18].


association) who make honest attempts to correct or improve the functioning of this institution (or association).

Example 20.5: Two engineers who attempted to explain the bribery affair connected with the construction of a dam near Los Angeles were expelled from the American Society of Civil Engineers for violating the code of this society, which prohibited criticising other engineers15.

The generality (and often vagueness) of the typical provisions which appear in codes of conduct makes them unsuitable for the direct resolution of specific individual dilemmas. Important difficulties are related, in particular, to provisions referring to conflicting values, e.g. provisions regarding the liability to an institution and the liability to society, or provisions regarding the obligation of truth-telling and the obligation of confidentiality. Because of the nature of dilemmas (cf. Section 14.1), they cannot be resolved by logical reasoning alone, without additional premises: arbitrary assumptions, emotional preferences, glimpses of intuition, etc. A code of conduct, especially if it is expected to be concise, cannot contain sufficient guidelines as to which value is prioritised in which situation. So, in spite of the conviction, predominant today, that such codes are indispensable, the counterarguments formulated by the Polish philosopher Leszek Kołakowski (1927–2009) more than half a century ago do not lose their validity. Referring mainly to codes of ethics, he argued that a code may create a false impression, or even conviction, that we have a ready-for-use formula for moral life, which on the one hand satisfies our need for security, but on the other – limits our freedom and, at the same time, our responsibility16. Another problem is related to the existence of different codes for various institutions and professional associations, giving the impression that moral principles are more dependent on the field of professional expertise than they really are17.

The generality (and often vagueness) of typical provisions, which appear in the codes of conduct, makes them unsuitable for the direct resolution of specific individual dilemmas. Important difficulties are related, in particular, to provisions referring to conflicting values, e.g. provisions regarding the liability to an institution and the liability to society, or provisions regarding the obligation of truth telling and the obligation of confidentiality. Because of the nature of dilemmas (cf. Section 14.1), they cannot be resolved by logical reasoning only, without additional premises: arbitrary assumption, emotional preferences, glimpses of intuition, etc. A code of conduct, especially if it is expected to be concise, cannot contain sufficient guidelines as to which value in which situation is prioritised. So, in spite of a today predominant conviction that such codes are indispensable, the counterarguments, formulated by the Polish philosopher Leszek Kołakowski (1927–2009) more than half a century ago, do not lose their validity. Referring mainly to the codes of ethics, he argued that a code may create a false impression or even conviction that we have a ready-for-use formula for moral life, which on the one hand satisfies our need for security, but on the other – limits our freedom and, at the same time, our responsibility16. Another problem is related to the existence of different codes for various institutions and professional associations, giving an impression that moral principles are more dependent on the field of professional expertise than they really are17. This book, despite its considerable volume, is not a compendium of solved problems or cases studies. It is intended to encourage methodological and ethical reflection of its readers over – sometimes trivial, sometimes very serious – problems, especially those of dilemmatic nature, arising in everyday research practice. So, it is providing a fishing rod rather than a fish to be grilled for dinner. It is opening various discussions rather than closing them with undisputable conclusions. On the one hand, it is stressing the importance of methodological and ethical standards and norms, on the other – indicating their weak points and provoking criticism with respect to them. It is combining factual knowledge concerning history of science, philosophy of science and moral philosophy with the author’s beliefs formed

15 M. W. Martin, R. Schinzinger, Introduction to Engineering Ethics, 2010 (2nd edition), p. 42. 16 L. Kołakowski, “Etyka bez kodeksu”, Twórczość, 1962, Vol. 18, No. 7, pp. 64–86. 17 M. W. Martin, R. Schinzinger, Introduction to Engineering Ethics, 2010 (2nd edition), p. 43.


during 50 years of research experience accompanied by meta-scientific reflection. Taking into account all those circumstances, it seems appropriate to end this book with two pieces of advice for the reader:
– The first of them is of an ethical nature, and it may be concisely expressed by the following quotation from the Bible: "Stop judging and you will not be judged. Stop condemning and you will not be condemned. Forgive and you will be forgiven"18. If not under the pressure of decision-making duties, we should be reluctant to condemn other researchers for their failure to follow the methodological or ethical norms of research, first of all because – even if we have considerable methodological or ethical knowledge – we almost never know all the premises and motivations of their decisions and behaviours.
– The second piece of advice refers to the so-called Dunning-Kruger effect19, which is a psychological state of individuals who have little knowledge and, therefore, experience illusory superiority because they mistakenly assess their cognitive ability as greater than it is, and – consequently – are inclined to demonstrate and use this knowledge excessively. They lose this inclination, however, as they increase and deepen their knowledge. Being aware of this effect, it seems reasonable to abstain from instructing others until reaching the second stage of self-awareness of methodological and ethical competence, when deepened knowledge and understanding motivate us again to be active in this respect.
This book is intended to provide philosophical and pragmatic advice for those researchers who are going to face challenging problems of a methodological or ethical nature, and it is thus focused on various negative aspects and weak points of today's research practice in technoscience. It cannot provide, therefore, an objective and balanced picture of the technoscientific research community and of technoscience in society. It would require a second volume to explain how – despite all its uncertainty – scientific knowledge has contributed to the multifaceted development of Western and global civilisation, including (sic!) the moral progress of humanity. An excellent documentation of the latter statement may be found in the recent book Enlightenment Now: The Case for Reason, Science, Humanism, and Progress20, authored by the Canadian-American cognitive psychologist and linguist Steven A. Pinker (*1954).

18 New Testament, Luke 6, 37. 19 A simplified interpretation of this effect is disseminated in the form of the graph called “mount stupid”. 20 S. Pinker, Enlightenment Now: The Case for Reason, Science, Humanism, and Progress, Viking Pub., New York 2018.

Appendix: Milestones in the history of science

Table A.1: Milestones in the history of astronomy.

BC

Pythagoras of Samos (Greece) discovers that the morning star and the evening star are the same.

– BC

Hipparchus (Alexandria) invents a system for classifying the brightness of stars, prepares the first sky map and calculates the distance to Moon.



Claudius Ptolemaeus (Alexandria) constructs a model of a geocentric solar system that enables the prediction of the movements of the planets.



Nicolaus Copernicus (Poland) publishes the heliocentric theory in De revolutionibus orbium coelestium.



Tyge O. Brahe (Denmark) records the first European observation of a supernova.



Hans Lippershey and Zacharias Janssen (Netherlands) independently invent a simple telescope.



Galileo Galilei (Italy) discovers four moons of Jupiter.



Isaac Newton (England) invents the first working reflecting telescope.



Edmond Halley (England), in A Synopsis of the Astronomy of Comet, publishes the calculation of the orbits of comets and the first prediction of a comet’s return.



Immanuel Kant (Prussia), in Allgemeine Naturgeschichte und Theorie des Himmels, hypothesises that the solar system is part of a collection of stars.



Mikhail V. Lomonosov (Russia) infers the existence of atmosphere of Venus.



F. William Herschel (England) discovers Uranus.



F. William Herschel (England), in On the Construction of the Heavens, publishes the first quantitative analysis of the Milky Way’s shape.



Jean-Baptiste Biot (France) finds empirical verification of meteorites as extra-terrestrial objects.



Joseph von Fraunhofer (Germany) discovers the difference between the spectrum of light reflected from the planets and the light from stars, and invents astronomical spectroscopy.



Gustav R. Kirchhoff and Robert W. E. Bunsen (Germany) conduct the first analysis of the chemical composition of stars.



Harlow Shapley (USA) determines the centre of the galaxy and its size.



Edwin P. Hubble (USA) discovers various forms of galaxies (spiral, elliptical and irregular).




Georges H. J. É Lemaître notes that the universe is expanding, and therefore could be traced back in time to an originating single point.



Clyde W. Tombaugh (USA) discovers the ninth planet of the solar system – Pluto.



Karl G. Jansky (USA) detects radio waves from space.



Fritz Zwicky (Switzerland) and Walter Baade (USA) discover the difference between novae and supernovae.



Grote Reber (USA) invents the radio telescope.



Hans A. Bethe and Carl F. von Weizsäcker (Germany) indicate nuclear fusion as the source of stars’ energy.



George Gamow and Ralph Asher (USA) develop the Big Bang theory.



Fred L. Whipple (USA) discovers the “dirty snowball” composition of comets.



Jan H. Oort (Netherlands) puts forward a hypothesis that the comets are snowballs coming from giant clouds outside of the solar system.



S. Jocelyn Bell (England) discovers pulsars, remnants of stars that exploded and now send radio signals.



David S. McKay (USA) claims that the meteorite ALH contains evidence of traces of life from Mars.

–

Michael E. Brown and collaborators (USA) discover several trans-Neptunian objects, called dwarf planets, in particular, Eris which is more massive than Pluto, leading directly to Pluto’s demotion from planet status.



Suvi Gezari with collaborators (USA) report on the first visual proof of existence of a supermassive black hole . million light years away from Earth.

Table A.2: Milestones in the history of physics.

BC

Archimedes of Syracuse (Greece) discovers the principle of the lever and the principle of buoyancy.



Peter Peregrinus (France), in Epistola de Magnete, identifies magnetic poles.



Galileo Galilei (Italy) discovers that the period of oscillation of a pendulum is independent of its amplitude.



Simon Stevin (Flanders) presents evidence that falling bodies fall at the same rate.



Galileo Galilei (Italy) invents the baro-/thermometer.


William Gilbert (England), in De Magnete, Magnetisque Corporibus, et de Magno Magnete Tellure, describes the magnetic properties of Earth.



Galileo Galilei (Italy) discovers that a free-falling body increases its distance as the square of the time.



Zacharias Janssen and Hans Lippershey (Netherlands) invent the compound microscope.



Galileo Galilei (Italy) presents foundations of mechanics in Discoursi e Dimostrazioni Matematiche, Intorno a Due Nuove Scienze.



Evangelista Torricelli (Italy) invents barometer and creates the first partial vacuum.



Otto von Guericke (Germany) discovers that, in vacuum, sound does not travel, fire is extinguished and animals stop breathing.



Blaise Pascal (France) states that pressure on an enclosed fluid is transmitted without reduction throughout the fluid.



Otto von Guericke (Germany) demonstrates the force of air pressure, using teams of horses to try to pull apart metal hemispheres held together by a partial vacuum.



Robert Hooke (England), in Micrographia, argues that light is a vibration rather than a stream of particles.



Erasmus Bartholin (Denmark) describes double refraction, the apparent doubling of images when seen through a crystal.



Christiaan Huygens (Netherlands) develops a wave theory of light (published in ).



Isaac Newton (England) describes the light spectrum, and discovers that white light is a mixture of colours.



Isaac Newton (England), in Philosophiae Naturalis Principia Mathematica, states the law of universal gravitation and the laws of motion.



Joseph Sauveur (France) describes the production of tones by the vibration of strings and introduces the term acoustic.



Daniel G. Fahrenheit (Netherlands) invents the mercury thermometer and the scale of temperature (which is called the Fahrenheit scale).



Daniel Bernoulli (Switzerland), in Hydrodynamica, states that an increase in the speed of a fluid occurs simultaneously with a decrease in pressure or a decrease in the fluid’s potential energy (known as Bernoulli’s Principle).



Anders Celsius (Sweden) invents the scale of temperature, which is called the Celsius scale.



Ewald G. von Kleist (Germany) and Pieter van Musschenbroek (Netherlands) independently invent a practical device for storing an electric charge, which is called Leyden jar.


Benjamin Franklin (USA) discovers that lightning is a form of electricity.



Henry Cavendish and Nevil Maskelyne (England) measure the gravitational constant and estimate the mass of Earth.



Benjamin Thompson (Germany) demonstrates that heat is a form of motion (energy) rather than a substance.



F. William Herschel (England) discovers infrared radiation and its relation to heat.



Thomas Young (England) uses diffraction and interference patterns to demonstrate that light has wavelike characteristics.



Johann W. Ritter (Germany) discovers ultraviolet light.



Etienne-Louis Malus (France) discovers the polarisation of light.



Hans C. Ørsted (Denmark) demonstrates that electricity and magnetism are related.



Johann S. C. Schweigger (Germany) invents the needle galvanometer.



Michael Faraday (England), in the paper “On Some New Electromagnetic Motions”, reports his discovery that electrical forces can produce motion, and describes the principle of the electric motor.



Jean-Baptiste J. Fourier (France), in Theorie Analytique de la Chaleur, presents the mathematical study of heat flow.



William Sturgeon (England) invents the electromagnet.



Nicolas L. S. Carnot (France), in Reflexions sur la puissance motrice du feu, provides the first analysis of steam engine efficiency.



André-Marie Ampère (France) publishes a mathematical expression for the Ørsted’s relationship between magnetism and electricity.



Georg S. Ohm (Germany), in Die galvanische Kette, mathematisch bearbeitet, states that an electrical current is equal to the ratio of the voltage to the resistance (what is known today as Ohm’s law).



Robert Brown (Scotland) discovers continuous random movement of microscopic solid particles when suspended in a fluid (known as Brownian motion).



Joseph Henry (USA) uses insulated wire to create an electromagnet able to lift a ton of iron.



Michael Faraday (England) and Joseph Henry (USA) independently discover that a changing magnetic force can generate electricity, i.e. the phenomenon of electromagnetic induction.



Michael Faraday (England) discovers the basic laws of electrolysis that govern chemical reaction caused by passing electric current through a liquid or solution.


Christian A. Doppler (Germany) discovers that the frequency of waves emitted by a moving source changes when the source moves relative to the observer (what is called Doppler effect).



Julius R. von Mayer and Carl Mohr (Germany) develop early formulation of the concept of conservation of energy.



James P. Joule (England) discovers that heat is produced when an electric current flows through resistance (what is known today as Joule’s first law).



Hermann L. F. von Helmholtz (Germany) states that the total amount of energy in an isolated system does not change (what is called the first law of thermodynamics).



William Thomson (alias Lord Kelvin) (Scotland) defines absolute zero and proposes a scale of temperature, which is known as Kelvin scale.



George G. Stokes (England) discovers the terminal velocity of objects falling through viscous liquid.



Rudolf J. E. Clausius (Germany) discovers that the disorder of a closed system increases with time (known as second law of thermodynamics).



Jean B. L. Foucault (France) demonstrates the rotation of Earth using a large pendulum, which is called the Foucault pendulum.



James Clerk Maxwell (Scotland), in the paper “A Dynamical Theory of the Electromagnetic Field”, presents differential equations describing the behaviour of electric and magnetic fields, which are known as Maxwell’s equations.



Eugen Goldstein (Germany) discovers cathode rays, streams of fluorescence flowing from the negatively charged electrode in an evacuated tube.



Josef Stefan (Austria) discovers that the radiation of a body is proportional to the fourth power of its absolute temperature (what is known today as Stefan’s law).



Edwin H. Hall (USA) discovers the appearance of a potential difference across a conductor carrying an electric current when a magnetic field is applied in a direction perpendicular to that of the current flow (what is called Hall effect).



Heinrich R. Hertz (Germany) produces radio waves in the laboratory, confirming the Maxwell’s theory of electromagnetic field.



Albert A. Michelson and Edward W. Morley (USA) fail to confirm the existence of ether and demonstrate that the speed of light is constant.



Konstantin E. Tsiolkovsky (Russia) begins theoretical work on rocket propulsion and space flight.



Wilhelm C. Röntgen (Germany) discovers X-rays.



Hendrik A. Lorentz (Netherlands) puts forward a hypothesis that mass increases with velocity.


Antoine H. Becquerel (France) discovers spontaneous radioactivity.



Joseph J. Thomson (England) discovers the electron (the first subatomic particle).



Marie S. Skłodowska-Curie and Pierre Curie (France) demonstrate that uranium radiation is an atomic phenomenon, not a molecular phenomenon, and introduce the term radioactivity.



Ernest Rutherford (England) discovers two types of uranium radiation, alpha rays (massive and positively charged) and beta rays (lighter and negatively charged).



Antoine H. Becquerel (France) demonstrates that the process of radioactivity consists partly of particles identical to the electron.



Max K. E. L. Planck (Germany) introduces a constant called Planck’s constant, and the concept that energy is radiated in discrete packets called quanta.



Ernest Rutherford and Frederick Soddy (England) demonstrate that uranium and thorium break down into a series of radioactive intermediate elements.



Albert Einstein (Switzerland), in the paper "Zur Elektrodynamik bewegter Körper", introduces the special theory of relativity and deduces that the mass of a body m is a measure of its energy content E = mc², where c is the velocity of light.



W. Hermann Nernst (Germany) states that all bodies at absolute zero would have the same entropy (what is called third law of thermodynamics).



Ernest Rutherford (England) and Johannes (Hans) W. Geiger (Germany) invent an alpha-particle counter.



Ernest Rutherford (England) proposes the concept of the atomic nucleus.



Heike Kamerlingh Onnes (Netherlands) discovers superconductivity – the disappearance of electrical resistance in certain substances as their temperature approaches absolute zero.



Victor F. Hess (USA) discovers cosmic rays.



Niels H. D. Bohr (Denmark) applies quantum theory to the structure of the atom, describing electron orbits and electron excitation and deexcitation.



Robert A. Millikan (USA) experimentally determines the charge of an electron.



Albert Einstein (Germany) presents the general theory of relativity, which describes space as a curved field modified locally by the existence of mass.



Arthur H. Compton (USA) discovers that the wavelength of X-rays and gamma rays increases following collisions with electrons (what is called Compton effect).



Wolfgang E. Pauli (Germany) develops the exclusion principle, stating that in a given atom no two electrons can have the identical set of four quantum numbers.


Erwin R. J. A. Schrödinger (Austria) develops the equations of wave mechanics, which are called Schrödinger equations.



Werner K. Heisenberg (Germany), in the paper “On the Intuitive Content of Quantum Kinematics and Mechanics”, introduces the uncertainty principle.



Niels H. D. Bohr (Denmark), in the paper “The Philosophical Foundations of Quantum Theory”, introduces the principle of complementarity, arguing that different but complementary models may be needed to explain the full range of atomic and subatomic phenomena.



Paul A. M. Dirac (England) predicts the existence of antimatter.



John D. Cockroft (England) achieves a nuclear reaction by splitting the atomic nucleus.



James Chadwick (England) discovers the neutron.



Ernst A. F. Ruska and Reinhold Rüdenberg (Germany) invent an electron microscope that is more powerful than a conventional light microscope.



Pavel A. Cherenkov and collaborators (Russia) discover that the wave of light produced by particles apparently move faster than the speed of light in a medium other than a vacuum (what is called Cherenkov effect).



Enrico Fermi (USA) achieves the first nuclear fission reaction.



Hideki Yukawa (Japan) publishes his theory of mesons, which explains the interaction between protons and neutrons.



Otto Hahn and Friedrich W. Strassman (Germany) split an atomic nucleus into two parts by bombarding uranium- with neutrons.



James Hillier and Albert Prebus (Canada) build an electron microscope which can magnify  times.



Enrico Fermi, Walter H. Zinn and Herbert Anderson (USA) achieve the first sustained nuclear reaction.



Edwin M. McMillan (USA) and Vladimir I. Veksler (Russia) independently invent a particle accelerator called synchrotron.



John Bardeen, Walter H. Brattain and William B. Shockley (USA) discover the transistor effect.



Richard P. Feynman (USA) develops the theory of quantum electrodynamics.



Erwin W. Müller (USA) builds an electron microscope which can magnify to a million times and enable one to see the outline of atoms.



Clyde L. Cowan and Frederick Reines (USA) discover the electron neutrino, an elementary particle which has no net electric charge.


Murray Gell-Mann (USA) postulates the existence of quarks, particles that the hadrons are composed of.

 –

Sheldon L. Glashow (USA), Steven Weinberg (USA) and M. Abdus Salam (Pakistan) develop the standard model of particle physics, i.e. a theory concerning the electromagnetic, weak and strong nuclear interactions, as well as a classification of all subatomic particles.



Klaus von Klitzing (Germany) discovers a quantum-mechanical version of the Hall effect.



A research team of CERN (Switzerland) discovers the Higgs boson, i.e., a very unstable elementary particle with no spin and no electric charge.



A research team of Laser Interferometer Gravitational Wave Observatory (USA) makes first direct observation of gravitational waves (minute distortions of space-time, predicted by the Einstein’s theory of general relativity).

Table A.3: Milestones in the history of chemistry.

BC

Democritus and Leucippus (Greece) put forward a hypothesis that matter is composed of atoms.



Arab alchemists obtain concentrated alcohol by distilling wine.



Libavius (Germany) publishes the first chemistry textbook Alchemia containing detailed descriptions of various chemical methods.



Jan van Helmont (Flanders) introduces the term gas for any air-like compressible fluid.



Robert Boyle (England) states that the volume occupied by a fixed mass of gas in a container is inversely proportional to the pressure it exerts.



Henry Cavendish (England) discovers hydrogen.



Antoine-Laurent de Lavoisier (France) discovers that air is absorbed during combustion and that diamond consists of carbon.



Joseph Priestley (England) and Carl Scheele (Sweden) independently discover oxygen.



Antoine-Laurent de Lavoisier (France) correctly explains combustion (discrediting phlogiston theory).



Antoine-Laurent de Lavoisier (France) explains the role of oxygen in the process of combustion.



Henry Cavendish (England) discovers the chemical composition of water.


Antoine-Laurent de Lavoisier (France), in Traite Elementaire de Chemie, formulates the law of conservation of matter.



William Nicholson and Anthony Carlisle (England) discover electrolysis.



John Dalton (England) publishes the modern formulation of atomic theory and introduces the concept of atomic weight.



Amadeo C. Avogadro (Italy) puts forward a hypothesis that all gases at the same volume, pressure and temperature are made up of the same number of particles.



Joseph von Fraunhofer (Germany) discovers that the relative positions of spectral lines are constant.



Joseph von Fraunhofer (Germany) invents the diffraction grating for analysis of spectra.



Friedrich Wöhler (Germany) synthesises an organic compound (urea) from an inorganic compound.



Christian F. Schönbein (Germany) discovers ozone.



Friedrich A. Kekulé (Germany) establishes two key facts of organic chemistry: carbon has a valence of four and carbon atoms can chemically combine with one another.



Archibald S. Couper (Scotland) and Friedrich Kekule (Germany) develop a graphical representation of organic molecular structure.



Gustav R. Kirchhoff and Robert W. E. Bunsen (Germany) discover that each element is associated with characteristic spectral lines.



James Clerk Maxwell (Scotland) develops the first extensive mathematical kinetic theory of gases, later augmented in collaboration with Ludwig E. Boltzmann (Austria).



Friedrich A. Kekulé (Germany) discovers the structure of the benzene ring.



Dmitri I. Mendeleev (Russia) publishes a periodic table of the elements, including the prediction of undiscovered elements.



Svante A. Arrhenius (Sweden) introduces the theory of ionic dissociation.



Johann J. Balmer (Switzerland) develops a formula for the wavelengths at which hydrogen atoms radiate light.



William Ramsay (Scotland) and Per T. Cleve (Sweden) independently discover helium on Earth.



James Dewar (Scotland) invents a method of producing liquid hydrogen in quantity.



Frederic S. Kipping (England) discovers silicon-containing synthetic polymers called silicones.


Mikhail S. Tsvet (Russia) invents a technique for analysis of complex mixtures, called chromatography.



Harold C. Urey (USA) discovers heavy hydrogen called deuterium.



W. Norman Haworth (England) and Tadeus Reichstein (Switzerland) synthesise vitamin C.



Irène and Frédéric Joliot-Curie (France) produce the first artificial radioactive isotope (a radioactive form of phosphorus).



Wallace H. Carothers (USA) invents an artificial fibre called nylon.



Eugene J. Houdry (USA) develops a method for industrial-scale catalytic cracking of petroleum feedstocks.



Albert Hofmann and Arthur Stoll (Switzerland) synthesize LSD, later recognised as a psychedelic drug (i.e. a drug whose primary action is to alter cognition and perception).

–

Felix Bloch and Edward M. Purcell (USA) develop a new analytical technique, based on nuclear magnetic resonance, important for organic chemistry.



Willard F. Libby (USA) develops radiocarbon dating, a technique for determining the age of ancient organic materials, based on the use of a radioactive isotope of carbon (carbon-14).



Alan Walsh (England) develops atomic absorption spectroscopy, a new analytical technique that allows one to measure the concentrations of the components of a mixture.



Hanns-Peter Boehm, Ralph Setton and Eberhard Stumpp (Germany) produce graphene, an allotrope of carbon in the form of a two-dimensional lattice in which one atom forms each vertex.



John A. Pople with collaborators (USA) develops a computer program, called Gaussian, dedicated to computational chemistry.



Harold W. Kroto, Robert F. Curl and Richard E. Smalley (USA) discover and synthesise large carbon molecules, called fullerenes, now a material of importance for nanotechnology.



Eric A. Cornell and Carl E. Wieman (USA) produce the first Bose–Einstein condensate, a substance that displays quantum-mechanical properties at the macroscopic scale.

–

Researchers from all over the world synthesise  new chemical elements (with atomic numbers –) using colliders and bombardment techniques.



Stefan W. Hell (Germany), Robert E. Betzig and William E. Moerner (USA) develop nanoscale light microscopy, which allows one to observe small molecules, viruses, proteins, etc.


Table A.4: Milestones in the history of earth science and paleontology.

BC

Pythagoras of Samos (Greece) argues that Earth is spherical.

ca.  BC

Xenophanes of Colophon (Greece) argues that fossils of marine organisms show that dry land was once under water.

 BC

Pytheas of Massilia (Greece) describes the ocean tides and their relationship with the Moon.

 BC

Eratosthenes (Alexandria) estimates the circumference and diameter of Earth with % uncertainty.



Georgius Agricola (Germany), in De Natura Fossilium, classifies minerals.



Robert Hooke (England) proposes that fossils can be used as a source of information about the Earth’s history.



Niels Steensen (Denmark) argues that fossils are organic remains deposited in water.



Jean-Etienne Guettard (France) prepares the first true geological maps, showing rocks and minerals arranged in bands.



John Michell (England) writes “Essay on the Causes and Phenomena of Earthquakes”, beginning the systematic study of seismology.



Benjamin Franklin (USA) prepares the first scientific chart of the Gulf Stream.



The fossilised bones of a huge animal (later called Mosasaur) are found in a quarry near Maastricht (Netherlands).



Horace-Bénédict de Saussure (Switzerland) writes Voyage dans les Alpes describing his geological, meteorological and botanical studies, and coining the term geology.



James Hutton (Scotland), in the paper “Concerning the System of the Earth”, presents the view that the Earth’s development always follows the same natural laws and processes.



Georges L. N. F. Cuvier (France) presents his paper (published in  under the title “Mémoires sur les espèces d’éléphants vivants et fossiles”) on skeletal remains of elephants and mammoth fossils.



Georges L. N. F. Cuvier and Alexandre Brongniart (France) publish preliminary results of their survey of the geology of the Paris Basin, referring to the fossils found in different strata.



William Smith (England) prepares the first geological map showing relationships on a large scale, including England, Wales and part of Scotland.



William Buckland (England) finds a human skeleton with mammoth remains at Paviland Cave on the Gower Peninsula, arguing that both species could coexist.


Charles Lyell (England), in Principles of Geology, argues that geological formations are created over millions of years.



Gideon A. Mantell (England) publishes the paper “The Age of Reptiles” summarising evidence of an extended period during which large reptiles had been the dominant animals.



Jean L. R. Agassiz (Switzerland), in the paper “Discourse at Neuchâtel”, presents the Ice Age theory.



Richard Owen (England) introduces the term dinosaur.



Johann C. Fuhlrott and Hermann Schaaffhausen (Germany) recognise the fossils found in Neanderthal as representative of an extinct human species which is called today Homo neanderthalensis.



Joseph M. Leidy (USA) describes the first skeleton of a dinosaur which is called Hadrosaurus.



John Milne (England) invents the first precise seismograph.



Edward D. Cope (USA), in The Vertebrata of the Tertiary Formations of the West, reports the discovery of the first complete remains of dinosaurs of the Cretaceous.



M. Eugène F. T. Dubois (Netherlands) discovers in Indonesia fossils of the so-called Java Man, today classified as Homo erectus.



Oliver Heaviside (England) and Arthur Kennelly (USA) independently predict the existence of a layer in the atmosphere that permits long-distance radio transmission (confirmed in  by Edward Appleton and named ionosphere).



Maurice P. A. C. Fabry (France) discovers ozone in the upper atmosphere and demonstrates that it filters out solar ultraviolet radiation.



Alfred L. Wegener (Germany), in Die Entstehung der Kontinente und Ozeane, presents evidence for a primordial continent, Pangaea, and subsequent continental drift.



Raymond A. Dart (Australia) recognises the fossils of the so-called Taung Child, found in South Africa, as representative of an extinct human species which is called Australopithecus africanus.



Charles F. Richter (USA) invents the scale for measuring the magnitude of earthquakes which is called the Richter scale.



Karl H. Strunz (Germany), by publishing Mineralogische Tabellen, introduces a classification scheme for categorising minerals based upon their chemical composition.

–

Felix A. Vening Meinesz (Netherlands) and collaborators carry out investigations showing gravity anomalies, and indicating that the Earth’s crust is moving.


William M. Ewing and Bruce C. Heezen (USA) discover the Great Global Rift running along the Mid-Atlantic Ridge.



Frederick Vine and Drummond Matthews (England) put forward a hypothesis explaining the stripes of magnetised rocks with alternating magnetic polarities running parallel to mid-ocean ridges.



Donald C. Johanson and Tom Gray (USA) discover in Ethiopia a . million-year-old female hominid fossil (that is % complete) and name it Lucy.



Thomas C. Hanks (USA) and Hiroo Kanamori (Japan) develop the moment magnitude scale which replaced the Richter magnitude scale for measurement of the relative strength of earthquakes.



Luis Alvarez and Walter Alvarez (USA) put forward a hypothesis that the extinction of the dinosaurs at the end of the Cretaceous Period (ca.  million years ago) was caused by the impact of a large extra-terrestrial object.



Joseph J. Sepkoski and David M. Raup (USA) publish a statistical analysis of the fossil record of marine invertebrates, the analysis that shows a pattern of repeated mass extinctions.



Johannes G. M. Thewissen (USA) and Sayed Taseer Hussain (Pakistan) discover in Pakistan the fossils of the amphibious whale ancestor which is called Ambulocetus.



Jonathan Bloch with international collaborators discover in Colombia the fossils of a giant snake which is now called Titanoboa.



Lee R. Berger with  international collaborators suggest that the – million-year-old fossils, discovered by Rick Hunter and Steven Tucker in  in South Africa, belong to an extinct species of hominin to be called Homo naledi.

Table A.5: Milestones in the history of biology.

BC

Alcmaeon (Greece) conducts dissections on animals, and perhaps on a human cadaver, for scientific purposes.

 BC

Aristotle (Greece) creates a classification of animals and plants.

 BC

Herophilos (Alexandria), through dissection and vivisection, acquires more detailed knowledge of the functions of internal organs, nerves and the brain.



Pliny the Elder (Rome) summarises the knowledge of the natural world in the  volumes of Historia Naturalis.



Aelius Galenus (Greece) dissects animals, demonstrating a variety of physiological processes and founding experimental physiology.


Andreas Vesalius (Italy) writes De Humani Corporis Fabrica, an anatomy textbook, based on dissection, that is more scientifically exact than the writings of Aelius Galenus.



Andrea Cesalpino (Italy), in De Plantis, introduces an early system of plant classification.



William Harvey (England), in Exercitatio Anatomica de Motu Cordis et Sanguinis in Animalibus, describes the function of the heart and the system of blood circulation.



Robert Hooke (England), in Micrographia, introduces the term cell and describes the cells.



Antonie P. van Leeuwenhoek (Netherlands) discovers microorganisms.



Carl Linnaeus (Sweden), in Systema Naturae, uses systematic principles for defining the genera and species of organisms.



Jan Ingenhousz (Netherlands) describes photosynthesis.



Jean-Baptiste de Lamarck (France), in Système des Animaux sans Vertèbres, founds the modern zoology of invertebrates.



Robert Brown (Scotland) discovers that the cell nucleus is a general feature of all plant cells.



Theodor Schwann (Germany), in Mikroskopische Untersuchungen, and Matthias J. Schleiden (Germany), in Beiträge zur Phytogenesis, argue that cells are the fundamental organic units.



Charles R. Darwin (England), in On the Origin of Species, introduces the theory of evolution through the mechanism of natural selection.



Pierre P. Broca (France) introduces the theory of localisation of the brain’s speech centre.



Gregor J. Mendel (Austria), in the paper “Versuche über Pflanzen-Hybriden”, founds the study of genetics.



Francis Galton (England), in Hereditary Genius, applies Darwin’s theory of evolution to man’s mental inheritance, arguing that individual talents are genetically transmitted.



Johann F. Miescher (Switzerland) obtains the first crude purification of deoxyribonucleic acid (DNA) from the cells’ nuclei, and identifies its difference from proteins.



Francis Galton (England) introduces the concept of eugenics.



Camillo Golgi (Italy) and Santiago Ramón y Cajal (Spain) describe the cellular structure of the brain and spinal cord.


Karl Landsteiner (Austria) discovers blood types.



Wilhelm Johannsen (Denmark) introduces the term gene for the unit of inheritance and distinguishes between genotype and phenotype.



Thomas Morgan and Alfred Sturtevant (USA) design the first chromosome map.



Thomas Morgan, Alfred Sturtevant, Hermann J. Muller and Calvin Bridges (USA) suggest that chromosomes contain genes that determine heredity.



Roy Chapman Andrews (USA) finds fossilised dinosaur eggs in Bayanzag (Mongolia); they provide irrefutable proof that dinosaurs reproduced by laying eggs.



Thomas M. Rivers (USA) discovers the difference between bacteria and viruses.



Hans Berger (Germany) invents electroencephalography.



Gerhard J. P. Domagk (Germany) produces the first medicine against bacterial infections (sulfonamidochrysoidine).



Aleksandr I. Oparin (Soviet Union) puts forward a hypothesis that life originated by itself in the Earth’s atmosphere billions of years ago.



Dorothy M. Crowfoot Hodgkin with collaborators (England) discover the structure of penicillin.



Stanley L. Miller and Harold C. Urey (USA) emulate the conditions thought to be present on the early Earth to test the chemical origin of life (i.e. Oparin’s hypothesis).



Rosalind E. Franklin (England) creates a photo (Photograph ) showing the helical shape of DNA.



James D. Watson and Francis H. C. Crick (England) discover the double helix structure of DNA.



John B. Gurdon (England) successfully clones a frog using intact nuclei from the somatic cells of a Xenopus tadpole.



Kenneth S. Norris with collaborators (USA) discovers that dolphins use sound signals to orient themselves under water, just like bats in the air.



Marshall W. Nirenberg (USA) “cracks” the genetic code, and explains how it operates in protein synthesis.



Elso B. Sterrenberg Barghoorn (USA) discovers fossilised single-celled organisms that lived on Earth . billion years ago.



Frederick Sanger (England) develops a rapid DNA sequencing technique.



James F. Gusella with collaborators (USA) map the first genetic disease, locating the gene responsible for Huntington’s disease.


Michael W. Bevan, Richard B. Flavell and Mary-Dell Chilton (USA) develop the first genetically engineered plant (tobacco).



Ian Wilmut and Keith Campbell with collaborators (Scotland) clone the first mammal from an adult somatic cell (the female domestic sheep Dolly, which lived until ).

–

The international teams of the Human Genome Project sequence the human genome, i.e. the complete nucleic acid sequence for humans (Homo sapiens).



Jeronimo Cello, Aniko V. Paul and Eckard Wimmer (USA) announce in Science the first test-tube synthesis of an organism (poliovirus cDNA) in the absence of a natural template, achieved outside living cells.



Julian Melchiorri (England) invents the first man-made, biologically functioning leaf (capable of absorbing carbon dioxide and light and converting them into oxygen).

Table A.6: Milestones in the history of medical sciences.

BC

Hippocrates (Greece) and his followers develop the empirical study of disease, distancing medicine from religion.



Celsus (Rome) publishes De Medicina – one of the earliest medical texts of Western civilisation.



Dioscorides (Rome) publishes De Materia Medica – the first Western pharmacopoeia covering  plants and , drugs.



Aelius Galenus (Greece) writes medical texts that are treated as authoritative during the next  centuries.



Henri de Mondeville (France), in Chirurgia, advocates the use of sutures, cleansing of wounds, limitation of suppuration and wine dressing for wounds.



Philippus A. T. Bombastus von Hohenheim alias Paracelsus (Germany) pioneers the application of chemistry to physiology, pathology and the treatment of disease.



Girolamo Fracastoro (Italy), in De Contagione et Contagiosis Morbis, provides the first explanation of the spread of infectious disease, referring to analogues of microbes or germs as its cause.



Richard Lower (England) attempts the first blood transfusion, between dogs.



James Lind (Scotland) uses a controlled dietary study to establish that citrus cures scurvy.



Leopold von Auenbrugger (Austria) introduces the use of percussion for medical diagnosis.


William Withering (England) discovers digitalis.



Matthew Dobson (England) proves that the sweetness of diabetics’ urine is caused by the presence of sugar.



Edward Jenner (England) introduces systematic vaccination for smallpox.



René-Théophile-Hyacinthe Laennec (France) invents the stethoscope.



Justus von Liebig (Germany), Eugene Soubeiran (France) and Samuel Guthrie (USA) independently synthesise chloroform which in  starts to be used as an anaesthetic.



William T. G. Morton (USA) popularises the use of ether as an anaesthetic.



John Snow (England) uses epidemiological data to demonstrate that cholera is spread by contaminated water.



Thomas Addison (England) describes the disease of the adrenal glands, which is called today Addison’s disease.



Hermann L. F. von Helmholtz (Germany) invents the ophthalmoscope.



Alexander Wood (Scotland) and Charles G. Pravaz (France) invent the hypodermic syringe.



Louis Pasteur (France) invents the process of killing bacteria by means of heat, which is called today pasteurisation.



Louis Pasteur (France) gains acceptance for the germ theory of disease.



Casimir Davaine (France) discovers the microorganism that causes anthrax, which is the first linkage of a disease with a specific microorganism.



Claude Bernard (France), in Introduction à l’étude de la médecine expérimentale, provides the foundations of medicine as an empirical science.



H. H. Robert Koch (Germany) introduces steam sterilisation.



William Halsted (USA) conducts the first known human blood transfusion.



H. H. Robert Koch (Germany) isolates the tubercle bacillus.



Rickman J. Godlee (England) surgically removes a brain tumour.



Augustus D. Waller (France) records the electrical activity of the heart.



Daniel H. Williams (USA) conducts the first successful heart surgery on a human patient.



Hermann Strauss (Germany) introduces X-rays for diagnostic purposes.



Christiaan Eijkman (Netherlands) discovers that beriberi is caused by a dietary deficiency.


Ronald Ross (England) discovers the malaria parasite in the anopheles mosquito.



Willem Einthoven (Netherlands) invents the first practical electrocardiograph.



Frederick G. Hopkins (England) discovers that food contains ingredients essential to life that are neither proteins nor carbohydrates, leading to the discovery of vitamins.



Frank Woodbury (USA) introduces iodine as a disinfectant for wounds.



Elmer V. McCollum and Marguerite Davis (USA) discover and isolate vitamin A.



Katsusaburo Yamagiwa and Koichi Ichikawa (Japan) identify the first carcinogen by exposing rabbits to coal tar.



Frederick G. Banting, Charles H. Best and James B. Collip (Canada) invent a method for isolating insulin and injecting it into patients.



Elmer V. McCollum and Edward Mellanby (USA) discover and isolate vitamin D.



Alexander Fleming (England) discovers penicillin, the first antibiotic.



Philip Wiles (England) replaces a hip with an implant of stainless steel.



Selman A. Waksman, William Feldman, and Corwin Hinshaw (USA) discover streptomycin, the first antibiotic effective in treating tuberculosis.



Willem J. Kolff (USA) invents the dialysis machine.



John Frisch and Francis Bull (USA) initiate the fluoridation of water.



Benjamin M. Duggar and Albert Dornbush (USA) discover the tetracycline group of antibiotics.



Richard Lawler (USA) conducts a successful kidney transplant between two living human patients.



Jonas E. Salk (USA) starts trials of a vaccine against poliomyelitis (polio).



Alick Isaacs (England) discovers interferon, a compound produced by the human body to protect against viral infections.



Ian Donald (Scotland) performs experiments with the use of ultrasound for studying patients.



Maurice R. Hilleman (USA) develops the first successful vaccine against the virus that causes measles.



Godfrey N. Hounsfield (England) and Allan Cormack (USA) invent computed tomography for studying patients.



Louise J. Brown (England) is the first person born after conception by in vitro fertilisation.


Luc A. Montagnier (France) and Robert Charles Gallo (USA) independently claim to have identified the human immunodeficiency virus (HIV) that causes the acquired immune deficiency syndrome (AIDS).



Harald zur Hausen (Germany) with international collaborators develop the human papillomavirus (HPV) vaccine to protect women against cervical cancer.



Four institutes (USA and Germany) with two pharmaceutical companies announce in Nature the discovery of Teixobactin, the first new antibiotic for  years.

Marchand, “An Overview of the Psychology of Wisdom”, 2002, http://www.wisdompage.com/ AnOverviewOfThePsychologyOfWisdom.html [2016-08-19].

474

References

W. Marciszewski, Dictionary of Logic as Applied in the Study of Language: Concepts/Methods/ Theories, Springer, Dordrecht 1981. L. Mari, V. Lazzarotti, R. Manzini, “Measurement in Soft Systems: Epistemological Framework and a Case Study”, Measurement, 2009, Vol. 42, pp. 241–253. D. Marian, “The Correspondence Theory of Truth”, [in] The Stanford Encyclopedia of Philosophy (Ed. E. N. Zalta), Fall 2016 Edition, https://plato.stanford.edu/archives/fall2016/entries/truthcorrespondence [2018-07-14]. I. Markovsky, S. van Huffel, “Overview of Total Least-Squares Methods”, Signal Processing, 2007, Vol. 87, pp. 2283–2302. B. Martin, “Against Intellectual Property”, Philophy and Social Action, 1995, Vol. 21, No. 3, pp. 7–22. M. W. Martin, R. Schinzinger, Introduction to Engineering Ethics, McGraw Hill, New York 2010 (2nd edition). A. H. Maslow, “A Theory of Human Motivation”, Psychological Review, 1943, Vol. 50, No. 4, pp. 370–396. G. R. Mayes, “Theories of Explanation”, Internet Encyclopedia of Philosophy (Eds. J. Fieser, B. Dowden), http://www.iep.utm.edu/explanat/ [2017-03-03]. T. D. Mays, “Ownership of Data and Intellectual Property”, [in] Scientific Integrity (Ed. F. L. Macrina), ASM Press, Washington D.C. 2005 (3rd edition), pp. 211–245. P. Mazurek, A. Miękina, R. Z. Morawski, “Comparative Study of Three Algorithms for Estimation of Echo Parameters in UWB Radar Module for Monitoring of Human Movements”, Measurement, 2016, Vol. 88, pp. 45–57. P. Mazurek, J. Wagner, R. Z. Morawski, “Use of kinematic and Mel-Cepstrum-Related Features for Fall Detection Based on Data from Infrared Depth Sensors”, Biomedical Signal Processing and Control, 2018, Vol. 40, pp. 102–110. W. H. B. McAuliffe, “How did Abduction Get Confused with Inference to the Best Explanation?”, Transactions of the Charles S. Peirce Society, 2015, Vol. 51, No. 3, pp. 300–319. R. J. McNally, “Is the Pseudoscience Concept Useful For Clinical Psychology?”, The Scientific Review of Mental Health Practice, 2003, Vol. 2, No. 2, http://www.srmhp.org/0202/pseudoscience. html [2018-07-23]. T. Meissner, Moses, hol die Tafeln ab!, Kreuz Verlag, Stuttgart 1993. R. P. Merges, Justifying Intellectual Property, Harvard University Press, Cambridge (MA) & London 2011. K. Merten, “Public Relations – die Lizenz zu Täuschen?”, PR Journal, November 2004, http://www. pr-journal.de/redaktion-archiv/6396-klaus-merten-public-relations-die-lizenz-zu-tchen.html [2010-07-07]. K. Merten, S. Risse, “Ethik der PR: Ethik oder PR für PR”, September 2009, http://www.pr-journal. de/images/stories/downloads/merten%20ethik.pr.09.03.2009.pdf [2018-07-07]. R. K. Merton, “The Normative Structure of Science”, [in] The Sociology of Science: Theoretical and Empirical Investigations (Ed. R. K. Merton), University of Chicago Press, Chicago 1973 (first published in 1942). M. Meyer, Qu’est-ce que le questionnement, Editions Vrin, Paris 2017. S. Milgram, Obedience to Authority: An Experimental View, Harper & Row, New York 1974. S. Miller, “Collective Responsibility and Information and Communication Technology”, [in] Information Technology and Moral Philosophy (Eds. J. van den Hoven, J. Weckert), Cambridge University Press, Cambridge (UK) 2008. T. M. Mitchell, “The Discipline of Machine Learning”, Tom Mitchell’s website, 2006, http://www.cs. cmu.edu/~tom/pubs/MachineLearning.pdf [2017-06-02]. J. H. Moor, “What is Computer Ethics?”, Metaphilosophy, 1985, Vol. 16, No. 4, pp. 266–275.

References

475

A. D. Moore, “Personality-based, Rule-utilitarian, and Lockean Justifications of Intellectual Property”, [in] The Handbook of Information and Computer Ethics (Eds. K. E. Himma, H. T. Tavani), Wiley & Sons, Hoboken (USA) 2008, pp. 105–130. J. A. Morales, J. F. Cassidy, “Model of the Response of an Optical Sensor Based on Absorbance Measurement to HCl”, Sensors and Actuators B: Chemical, 2003, Vol. B 92, No. 3, pp. 345–350. R. Z. Morawski, “An Application-Oriented Mathematical Meta-Model of Measurement”, Measurement, 2013, Vol. 46, pp. 3753–3765. R. Z. Morawski, Badanie zależności górnej częstotliwości granicznej pierścieniowej piątki liczącej od pobieranej mocy zasilania, Instytut Radioelektroniki PW, Warszawa 1972. R. Z. Morawski, “Ethical Aspects of Empirical Research”, Proc. 5th Congress of Metrology (Łódź, Poland, September 6–8, 2010), pp. 28–33. R. Z. Morawski, A. Barwicz, M. Ben Slima, A. Miękina, Method of Interpreting Spectrometric Data, US Patent Office Patent #5,991,023, issued on November 23, 1999. R. Z. Morawski, C. Niedziński, “Application of a Bayesian Estimator for Identification of Edible Oils on the Basis of Spectrophotometric Data”, Metrology and Measurement Systems, 2008, Vol. 15, No. 3, pp. 247–266. R. Z. Morawski, A. Podgórski, “Choosing the Criterion for Dynamic Calibration of a Measuring System”, Proc. 5th IMEKO TC4 Int. Symposium (Vienna, Austria, April 8–10, 1992), pp. III.43–50. R. Z. Morawski, A. Podgórski, M. Urban, “Variational Algorithms of Dynamic Calibration Based on Criteria Defined in Measurand Domain”, Proc. IMEKO TC1-TC7 Colloquium (London, UK, September 8–10, 1993), pp. 311–316. T. Morris, Hans Jonas’s Ethic of Responsibility: From Ontology to Ecology, State University of New York Press, Albany 2013. “Most Common Reasons for Journal Rejection”, Editage Insights, November 12, 2013, https://www. editage.com/insights/most-common-reasons-for-journal-rejection [2018-05-05]. C. L. Munro, “Genetic Technology and Scientific Integrity”, [in] Scientific Integrity (Ed. F.L. Macrina), ASM Press, Washington D.C. 2005 (3rd edition), pp. 247–268. B. Munroe, “Borrowed Rules”, IEEE – The Institute, 2006, Vol. 30, No. 4, p. 5. P. Murphy, “Coherentism in Epistemology”, [in] Internet Encyclopedia of Philosophy (Eds. J. Fieser, B. Dowden), http://www.iep.utm.edu/coherent/ [2017-09-17]. A. B. Murray, “Contrasting the Goals, Strategies, and Predictions Associated With Simplified Numerical Models and Detailed Simulations”, Geophysical Monograph ‘Prediction in Geomorphology’, 2003, Vol. 135, pp. 1–15. C. Murray, Human Accomplishment, HarperCollins, New York 2003. H. Mustajoki, A. Mustajoki, A New Approach to Research Ethics: Using Guided Dialogue to Strengthen Research Communities, Routledge, Abingdon (UK) 2017. D. Narvaez, “Wisdom as Mature Moral Functioning: Insights From Developmental Psychology and Neurobiology”, [in] Character, Practical Wisdom and Professional Formation across the Disciplines (Eds. M. Jones, P. Lewis, K. Reffitt), Mercer University Press, Macon (USA) 2013. S. Nathanson, “Act and Rule Utilitarianism”, [in] Internet Encyclopedia of Philosophy (Eds. J. Fieser, B. Dowden), https://www.iep.utm.edu/util-a-r/ [2018-06-06]. O. Nelles, Nonlinear System Identification: From Classical Approaches to Neural Networks and Fuzzy Models, Springer, Berlin – Heidelberg 2001. P. Nemo, Qu’est-ce que l’Occident ?, Presses universitaires de France, Paris 2013. H. M. Ngô, Y. Zhou, H. Lorenzi, K. Wang, T.-K. Kim, Y. Zhou, K. E. Bissati, E. Mui, L. Fraczek, S. V. Rajagopala, C. W. 
Roberts, F. L. Henriquez, A. Montpetit, J. M. Blackwell, S. E. Jamieson, K. Wheeler, I. J. Begeman, C. Naranjo-Galvis, N. Alliey-Rodriguez, R. G. Davis, L. Soroceanu, C. Cobbs, D. A. Steindler, K. Boyer, A. G. Noble, C. N. Swisher, P. T. Heydemann, P. Rabiah, S. Withers, P. Soteropoulos, L. Hood, R. McLeod, “Toxoplasma Modulates Signature Pathways

476

References

of Human Epilepsy, Neurodegeneration & Cancer”, Scientific Reports, 2017, Vol. 7, No. 1, pp. 1–32 of Art #11496,https://doi.org/10.1038/s41598-017-10675-6 [2018-01-14]. T. Nickles, “The Problem of Demarcation: History and Future”, [in] Philosophy of Pseudoscience: Reconsidering the Demarcation Problem (Eds. M. Pigliucci, M. Boudry), University of Chicago Press, Chicago 2013, pp. 101–120. C. Niedziński, R. Z. Morawski, “Estimation of low Concentrations in Presence of High Concentrations Using Bayesian Algorithms for Interpretation of Spectrophotometric Data”, Journal of Chemometrics, 2004, Vol. 18, pp. 217–230. I. Niiniluoto, “Abduction, Tomography, and Other Inverse Problems”, Studies in History and Philosophy of Science, 2011, Vol. 42, pp. 135–139. I. Niiniluoto, Critical Scientific Realism, Clarendon Press, Oxford 1999. I. Niiniluoto, “Optimistic Realism About Scientific Progress”, Synthese, 2017, Vol. 194, No. 9, pp. 3291–3309. I. Niiniluoto, “Peirce, Abduction and Scientific Realism”, [in] Ideas in Action: Proceedings of the Applying Peirce Conference, Nordic Studies in Pragmatism 1 (Eds. M. Bergman, S. Paavola, A.-V. Pietarinen, H. Rydenfelt), Nordic Pragmatism Network, Helsinki 2010, pp. 252–263. “The Nuremberg Code”, Trials of War Criminals before the Nuremberg Military Tribunals under Control Council Law, 1949, Vol. 2, No. 10, pp. 181–182. S. Y. Nussbeck, P. Weil, J. Menzel, B. Marzec, K. Lorberg, B. Schwappach, “The Laboratory Notebook in the 21st Century”, EMBO Reports, 2014, Vol. 15, No. 6, pp. 631–634. C. H. O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, Crown Pub., New York 2016. O. O’Neill, A Question of Trust: The BBC Reith Lectures 2002, Cambridge University Press, Cambridge (UK) 2002. “Federal Research Misconduct Policy”, Federal Register, Office of Science and Technology Policy, December 6, 2000, Vol. 65, No. 235, pp. 76260–76264. E. Olsson, “Coherentist Theories of Epistemic Justification”, [in] The Stanford Encyclopedia of Philosophy (Ed. E. N. Zalta), Spring 2017 Edition, https://plato.stanford.edu/archives/ spr2017/entries/justep-coherence/ [2017-09-18]. On Being a Scientist: Responsible Conduct in Research, Committee on Science, Engineering, and Public Policy (appointed by National Academy of Sciences, National Academy of Engineering, and Institute of Medicine), Washington D.C. 2009, http://www.nap.edu/catalog/12192.html [2010-05-12]. Open Science Collaboration, “Estimating the Reproducibility of Psychological Science”, Science, 2015, Vol. 349, No. 6251, pp. 943–951. M. Oppezzo, D. L. Schwartz, “Give Your Ideas Some Legs: The Positive Effect of Walking on Creative Thinking”, Journal of Experimental Psychology: Learning, Memory, and Cognition, 2014, Vol. 40, No. 4, pp. 1142–1152. M. Pacer, J. Williams, Xi Chen, T. Lombrozo, T. L. Griffiths, “Evaluating Computational Models of Explanation Using Human Judgments”, Proc. Conference on Uncertainty in Artificial Intelligence (Bellevue, Washington, USA, July 11–15, 2013). A. Parlog, D. Schlüter, I. R. Dunay, “Toxoplasma Gondii-Induced Neuronal Alterations”, Parasite Immunology, 2015, Vol. 37, pp. 159–170. C. S. Peirce, “Deduction, Induction, And Hypothesis”, The Popular Science Monthly, August 1878, pp. 470–482. R. Perrin, Max Scheler’s Concept of the Person: An Ethics Of Humanism, Palgrave Macmillan, New York 1991. P. Pettit, “Trust, Reliance, and the Internet “, [in] Information Technology and Moral Philosophy (Eds. J. van den Hoven, J. 
Weckert), Cambridge University Press, Cambridge (UK) 2008.

References

477

M. Pexton, Non-Causal Explanation in Science Models and Modalities: a Manipulationist Account, Ph.D. Thesis, School of Philosophy, Religion & the History of Science, University of Leeds, Leeds 2013. J. Pienaar, “Viewpoint: Causality in the Quantum World”, Physics, 2017, Vol. 10, No. 86, https:// physics.aps.org/articles/v10/86 [2018-07-22]. M. Pigliucci, “Must Science be Testable?”, Website ‘Aeon’, August 10, 2016, https://aeon.co/ essays/the-string-theory-wars-show-us-how-science-needs-philosophy [2018-05-20]. M. Pigliucci, M. Boudry (Eds.), Philosophy of Pseudoscience: Reconsidering the Demarcation Problem, University of Chicago Press, Chicago – London 2013. S. Pinker, Enlightenment Now: The Case for Reason, Science, Humanism, and Progress, Viking Pub., New York 2018. R. Pintelon, J. Schoukens, System Identification: A Frequency Domain Approach, Wiley & Sons, Hoboken (USA) 1991. A. Plakias, “Experimental Philosophy”, Oxford Handbooks Online, 2015, http://www. oxfordhandbooks.com/view/10.1093/oxfordhb/9780199935314.001.0001/oxfordhb9780199935314-e-17 [2017-03-27]. K. R. Popper, In Search of a Better World: Lectures and Essays from Thirty Years, Routledge, London 1994. N. Postman, Technopoly: The Surrender of Culture to Technology, Vintage Books, New York 1993. A. Potochnik, “Causal Patterns and Adequate Explanations”, Philosophical Studies, 2015, Vol. 172, No. 5, pp. 1163–1182. D. Pritchard, What is This Thing Called Knowledge?, Taylor & Francis, London – New York 2010. P. Pruzan, Research Methodology: The Aims, Practices and Ethics of Science, Springer, Cham (Switzerland) 2016. S. Psillos, Philosophy of Science A–Z, Edinburgh University Press, Edinburgh 2007. R. Q. Quiroga, The Forgetting Machine: Memory, Perception, and the Jennifer Aniston Neuron, BenBella Books, Dallas (USA) 2017. Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals, International Committee of Medical Journal Editors (ICMJE), December 2017 (update), http://www.icmje.org/icmje-recommendations.pdf [2018-04-03]. “Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on The Free Movement of Such Data”, Official Journal of the European Union, 2016, No. L119, pp. 1–88. H. Reichenbach, Wahrscheinlichkeitslehre. Eine Untersuchung über die Logischen und Mathematischen Grundlagen der Wahrscheinlichkeitsrechnung, Uitgeverij Sijthoff Leiden 1935. Report of the Investigation Committee on the Possibility of Scientific Misconduct in The Work of Hendrik Schön and Coauthors, Bell Labs, September 2002, https://media-bell-labs-com.s3. amazonaws.com/pages/20170403_1709/misconduct-revew-report-lucent.pdf [2018-06-15]. Report on Intellectual and Industrial Property, Economic Council of Canada, Information Canada, Ottawa January 1971. D. B. Resnik, The Price of Truth – How Money Affects the Norms of Science, Oxford University Press, Oxford 2007. D. B. Resnik, T. Neal, A. Raymond, G. E. Kissling, “Research Misconduct Definitions Adopted by U.S. Research Institutions: Introduction”, Accountability in Research, 2015, Vol. 22, No. 1, pp. 14–21. Revised Field of Science and Technology (FOS) Classification in the Frascati Manual, OECD Working Party of National Experts on Science and Technology Indicators, February 26, 2007, http:// www.oecd.org/sti/inno/38235147.pdf [2018-05-24]. A. Rey, “Defining Definition”, [in] Essays on Definition (Ed. J.C. 
Sager), John Benjamins Pub., Amsterdam – Philadelphia 2000, pp. 1–14.

478

References

A. H. Rinaldi, “The Net User Guidelines and Netiquette”, September 3, 1992, http://www.shentel. net/general/tiquette.html [2018-05-12]. S. Robertson, “Reasons, Values, and Morality”, [in] The Routledge Companion to Ethics (Ed. J. Skorupski), Routledge, London – New York 2010. K. Rogers, “Scientific Modeling”, [in] Encyclopædia Britannica, Encyclopædia Britannica, Inc., Chicago 2017, https://www.britannica.com/science/scientific-modeling [2017-06-01]. A. Rosenberg, Philosophy of Science – A Contemporary Introduction, Routledge, New York – Abingdon (UK) 2005. H. Ruh, T. Gröbly, Die Zukunft ist ethisch – oder gar nich, Waldgut Verlag, Frauenfeld 2008. H. Ruonavaara, “Deconstructing Explanation by Mechanism”, Sociological Research Online, 2012, Vol. 17, No. 2, pp. 7.1–7.16. S. Ryan, “Wisdom”, [in] The Stanford Encyclopedia of Philosophy (Ed. E.N. Zalta), Summer 2018 Edition, https://plato.stanford.edu/archives/sum2018/entries/wisdom/ [2018-08-02]. J. Rydell, Advanced MRI Data Processing, Vol. 1140 (of the series Dissertations), Linköping Studies in Science and Technology, Linköpings Universitet, Department of Biomedical Engineering, Linköping 2007. J. C. Sager (Ed.), Essays on Definition, John Benjamins Pub., Amsterdam & Philadelphia 2000. W. C. Salmon, “Scientific Explanation”, [in] Introduction to the Philosophy of Science: A Text by Members of the Department of the History and Philosophy of Science of the University of Pittsburgh (Eds. W. H. Salmon, et al.), Prentice Hall, Englewood Cliffs (USA) 1992, pp. 7–41. W. C. Salmon, Four Decades of Scientific Explanation, University of Minnesota Press, Minneapolis 1989, http://mcps.umn.edu/philosophy/13_0Salmon.pdf [2017-02-28]. L. Sandu, J. Oksman, F. G.A., “Information Criteria for the Choice of Parametric Functions for Measurement”, IEEE Transactions on Instrumentation and Measurement, 1998, Vol. 47, No. 4, pp. 920–924. J. Savulescu, T. Hope, “The Ethics of Research”, [in] The Routledge Companion to Ethics (Ed. J. Skorupski), Routledge, London – New York 2010. C. A. Scharf, “In Defense of Metaphors in Science Writing”, Scientific American, July 9, 2013. J. Schickore, “Scientific Discovery”, [in] The Stanford Encyclopedia of Philosophy (Ed. E. N. Zalta), Spring 2014 Edition, https://plato.stanford.edu/archives/spr2014/entries/scientificdiscovery/ [2017-09-30]. A. Schopenhauer, The Art of Being Right (translated from German by T. B. Saunders in 1896), https://en.wikisource.org/wiki/Author:Thomas_Bailey_Saunders [2018-03-20]. A. Schopenhauer, Eristische Dialektik oder Die Kunst, Recht zu behalten, Megaphone eBooks, 2008, http://www.wendelberger.com/downloads/Schopenhauer_DE.pdf [2018-10-04]. E. Schwitzgebel, “Belief”, [in] The Stanford Encyclopedia of Philosophy (Ed. E. N. Zalta), Summer 2015 Edition, https://plato.stanford.edu/archives/sum2015/entries/belief [2018-07-15]. S. Scott, “Martin Buber (1878–1965)”, [in] Internet Encyclopedia of Philosophy (Eds. J. Fieser, B. Dowden), https://www.iep.utm.edu/buber/. M. Scriven, “Causation as Explanation”, Noûs, 1975, Vol. 9, No. 1, pp. 3–16. L. A. Seneca, Epistulae Morales (Letters), Harvard University Press, London 1917 (translated from Latin by R. M. Gummere), https://ryanfb.github.io/loebolus-data/L075.pdf [2017-12-27]. L. A. Seneca, Moral Essays, Harvard University Press, Cambridge (USA) 1928 (translated from Latin by J. W. Basore), https://ryanfb.github.io/loebolus-data/L214.pdf [2017-12-27]. M. Serban, Towards Explanatory Pluralism in Cognitive Science, Ph.D. 
Thesis, School of Philosophy, University of East Anglia, Norwich 2014. A. E. Shamoo, D. B. Resnik, Responsible Conduct of Research, Oxford University Press, New York 2015 (3rd edition).

References

479

C. E. Shannon, “A Mathematical Theory of Communication”, The Bell System Technical Journal, 1948, Vol. 27, pp. 379–423 and 623–656. V. Shea, Netiquette, Albion Books 2004, http://www.albion.com/netiquette/book/index.html [2018-07-07]. R. Silva, “Causality”, [in] Encyclopedia of Machine Learning and Data Mining (Eds. C. Sammut, G. I. Webb), Springer, New York 2017. P. J. Silvia, J. C. Kaufman, “Creativity and Mental Illness”, [in] The Cambridge Handbook of Creativity (Eds. J. C. Kaufman, R. J. Sternberg), Cambridge University Press, New York 2010, pp. 381–394. P. Singer, Animal Liberation: A New Ethics for Our Treatment of Animals, HarperCollins, New York 1975. “Six Causal Patterns”, [in] Causal Patterns in Science, Harvard Graduate School of Education, 2008, https://www.cfa.harvard.edu/smg/Website/UCP/pdfs/SixCausalPatterns.pdf [2017-07-08]. N. G. Skene, M. Roy, S. G. N. Grant, “A Genomic Lifespan Program That Reorganises the Young Adult Brain is Targeted in Schizophrenia”, eLife, 2017, Vol. 6, No. e17915, pp. 1–30. G. R. Skoll, Globalization of American Fear Culture, Palgrave Macmillan, London 2016. B. Skow, “Scientific Explanation “, [in] The Oxford Handbook of Philosophy of Science (Ed. P. Humphreys), Oxford University Press, Oxford (UK) – New York (USA) 2016, Chapter 25. Slow Science Manifesto, Slow Science Academy, Berlin, Germany 2010, http://slow-science.org/ [2018-05-20]. P. E. Smaldino, R. McElreath, “The Natural Selection of Bad Science”, Royal Society Open Science, September 21, 2016, http://rsos.royalsocietypublishing.org/content/3/9/160384 [2018-04-16]. A. Sokal, J. Bricmont, Fashionable Nonsense: Postmodern Intellectuals’ Abuse of Science, Picador Pub., New York 1998. A. D. Sokal, “Transgressing the Boundaries: Toward a Transformative Hermeneutics of Quantum Gravity”, Social Text, 1996, Vol. 46/47, pp. 217–252. O. Solon, O. Laugbland, “Cambridge Analytica Closing After Facebook Data Harvesting Scandal”, The Guardian, May 2, 2018, https://www.theguardian.com/uk-news/2018/may/02/ cambridge-analytica-closing-down-after-facebook-row-reports-say [2018-05-03]. P. Sorokowski, E. Kulczycki, A. Sorokowska, K. Pisanski, “Predatory Journals Recruit Fake Editor”, Nature, March 2017, Vol. 543, pp. 481–483. E. H. Spence, “A Universal Model for the Normative Evaluation of Internet Information”, Ethics and Information Technology, 2009, Vol. 11, No. 4, pp. 243–253. B. C. Stahl, Information Systems – Critical Perspectives, Routledge, London – New York 2008. D. A. Stapel, S. Lindenberg, “Coping with Chaos: How Disordered Contexts Promote Stereotyping and Discrimination”, Science, 2011, Vol. 332, No. 6026, pp. 251–253. U. M. Staudinger, J. Glueck, “Psychological Wisdom Research: Commonalities and Differences in a Growing Field”, Annual Review of Psychology, 2011, Vol. 62, pp. 215–241. I. Stengers, Another Science is Possible: A Manifesto for Slow Science, Polity Press, Cambridge – Medford (UK) 2018 (translated from French by S. Muecke). R. J. Sternberg, “The Nature of Creativity”, Creativity Research Journal, 2006, Vol. 18, No. 1, pp. 87–98. C. N. Stewart, Research Ethics for Scientists: A Companion for Students, Wiley & Sons, Chichester (UK) 2011. C. E. Stinson, Cognitive Mechanisms and Computational Models: Explanation in Cognitive Neuroscience, Ph.D. Thesis, School of Arts and Sciences, University of Pittsburgh, Pittsburgh 2013. J. Stöckle, “Improvement of the Dynamical Properties of Checkweighers Using Digital Signal Processing”, Proc. Int. 
IMEKO TC7 Symposium on Measurement & Estimation (Bressanone, Italy, May 8–12, 1984), pp. 103–108.

480

References

M. Strevens, “No Understanding Without Explanation”, Studies in History and Philosophy of Science, 2013, Vol. 44, pp. 510–515. A. Strugarek, P. Beaudoin, P. Charbonneau, A. S. Brun, J.-D. do Nascimento, “Reconciling Solar and Stellar Magnetic Cycles with Nonlinear Dynamo Simulations”, Science, 2017, Vol. 357, No. 6347, pp. 185–187. T. Styczeń, “Czy istnieje etyka dla naukowca?”, Ethos, 1998, No. 44, pp. 75–83. P. Suppes, “A Set of Independent Axioms for Extensive Quantities”, Portugaliae Mathematica, 1951, Vol. 10, pp. 163–172. P. Suppes, D. M. Krantz, R. D. Luce, A. Tversky, Foundations of Measurement, Vol. 2 (Geometrical, Threshold, and Probabilistic Representations), Academic Press, New York 1989. L. M. Surhone, M. T. Tennoe, S. F. Henssonow (Eds.), Eristic: Dialogue, Eris (mythology), Argument, Truth, Betascript Pub., Riga 2010. W. H. Swanson, Swanson’s Unwritten Rules of Management, Raytheon Co., Waltham (USA) 2004. P. H. Sydenham, R. Thorn (Eds.), Handbook of Measuring System Design, Wiley & Sons, Hoboken (USA) 2005. T. Szafranski, P. Sprzeczak, R. Z. Morawski, “An Algorithm for Spectrmetric Data Correction with Built-in Estimation of Uncertainty”, XVIth IMEKO World Congress (Vienna, Austria, September 25–28, 2000). T. Szafrański, R. Z. Morawski, “Accuracy of Measurand Reconstruction – Comparison of Four Methods of Analysis”, Proc. IEEE Instrumentation and Measurement Technology Conference – IMTC98 (St. Paul, MN, USA, May 18–21,1998), pp. 32–35. T. Szafrański, R. Z. Morawski, “Dealing with Overestimation of Uncertainty in Algorithms of Mesurand Reconstruction”, Proc. XVth IMEKO World Congress (Osaka, Japan, 13–18 June 1999). T. Szafrański, R. Z. Morawski, “Efficient Estimation of Uncertainty in Weakly Non-Linear Algorithms for Measurand Reconstruction”, Measurement, 2001, Vol. 29, pp. 77–85. Z. Szawarski, “Moral Uncertainty and Teaching Ethics”, Proc. COMEST Third Session (Rio de Janeiro, Brazil, December 1–4, 2003), pp. 80–84. K. Szymanek, Sztuka argumentacji – Słownik terminologiczny, Wyd. Nauk. PWN, Warszawa 2004. E. Tal, “Measurement in Science”, [in] The Stanford Encyclopedia of Philosophy (Ed. E. N. Zalta), Spring 2017 Edition, https://plato.stanford.edu/archives/spr2017/entries/measurementscience/ [2017-06-07]. L. Tang, Y. Zeng, H. Du, M. Gong, J. Peng, B. Zhang, M. Lei, F. Zhao, W. Wang, X. Li, J. Liu, “CRISPR/ Cas9-Mediated Gene Editing in Human Zygotes Using Cas9 Protein”, Molecular Genetics and Genomics, 2017, Vol. 292, No. 3, pp. 525–533. W. Tatarkiewicz, O doskonałości, Państwowe Wydawnictwo Naukowe, Warszawa 1976. H. T. Tavani (Ed.), Ethical Issues in an Age of Information and Communication Technology, Wiley & Sons, Hoboken (USA) 2007. C. Tenopir, R. Mays, L. Wu, “Journal Article Growth and Reading Patterns”, New Review of Information Networking, 2011, Vol. 16, No. 1, pp. 4–22. P. R. Thagard, “Why Astrology is a Pseudoscience”, Proceedings of the Biennial Meeting of the Philosophy of Science Association, 1978, Vol. 1, pp. 223–234. The Charter of Fundamental Rights of the European Union, European Communities, 2000, http:// www.europarl.europa.eu/charter/default_en.htm [2018-02-14]. W. Thomson, Popular Lectures and Addresses – Vol. I., MacMillan & Co., London – New York 1889, https://archive.org/details/popularlecturesa01kelvuoft [2017-06-25]. C. W. Tindale, Fallacies and Argument Appraisal, Cambridge University Press, Cambridge (UK) 2007. R. Turner, “A Model Explanation System”, Proc. 2016 IEEE Int. 
Workshop on Machine Learning for Signal Processing (Salerno, Italy, September 13–16, 2016), pp. 1–6. J. Turri, P. D. Klein (Eds.), Ad Infinitum: New Essays on Epistemological Infinitism, OUP Oxford, 2014.

References

481

G. Tuzet, “Legal Abduction”, Cognitio, 2005, Vol. 8, No. 2, pp. 265–284. Understanding Industrial Property, World Intellectual Property Organization, Geneva 2016 (2nd edition), http://www.wipo.int/edocs/pubdocs/en/wipo_pub_895_2016.pdf [2018-04-09]. S. Uniacke, “Responsibility – Intention and Consequence”, [in] The Routledge Companion to Ethics (Ed. J. Skorupski), Routledge, London – New York 2010. Universal Declaration of Human Rights (Resolution #217), United Nations General Assembly, 2015 (adopted in 1948), http://www.un.org/en/udhrbook/pdf/udhr_booklet_en_web.pdf [2018-0514]. J. van den Hoven, “Moral Methodology and Information Technology”, [in] The Handbook of Information and Computer Ethics (Eds. K.E. Himma, H.T. Tavani), Wiley & Sons, Hoboken (USA) 2008, pp. 49–67. J. van den Hoven, E. Rooksby, “Distributive Justice and the Value of Information – A (Broadly) Rawlsian Approach”, [in] Information Technology and Moral Philosophy (Eds. J. van den Hoven, J. Weckert), Cambridge University Press, Cambridge 2008. F. H. van Eemeren, E. C. W. Krabbe, A. F. S. Henkemans, J. H. M. Wagemans, Handbook of Argumentation Theory, Springer, Dordrecht 2014. B. C. van Fraassen, The Scientific Image, Oxford University Press, Oxford (UK) 1980. B. C. van Fraassen, Scientific Representation: Paradoxes of Perspective, Oxford University Press, Oxford 2008. A. Vedder, “Responsibilities for Information on the Internet”, [in] The Handbook of Information and Computer Ethics (Eds. K. E. Himma, H. T. Tavani), John Wiley & Sons, New Jersey 2008, pp. 339–359. S. VenuKapalli, “The Genesis of a Hypothesis: Did Hanson Win the Battle and Lost the War?”, IOSR Journal of Research & Method in Education, 2013, Vol. 1, No. 2, pp. 48–51. W. Verrusio, F. Moscucci, M. Cacciafesta, N. Gueli, “Mozart Effect and its Clinical Applications: A Review”, British Journal of Medicine & Medical Research, 2015, Vol. 8, No. 8, pp. 639–650. P. Vignal, “Un professeur de chimie jugé pour explosion à Mulhouse”, l’express.fr, No. 2010.09.15, http://www.lexpress.fr/actualites/2/un-professeur-de-chimie-juge-pour-explosion-amulhouse_919637.html [2011-03-03]. J. Virapen, Side Effects: Confessions of a Pharma-Insider: Death, Virtualbookworm.com Pub., College Station (USA) 2010. “Vizualizing Change – The Annual Report on the Economic Status of the Profession, 2016-17”, Academe, March-April 2017, pp. 4–67, https://www.aaup.org/file/FCS_2016-17.pdf [2018-02-26]. H. von Helmholtz, “Zählen und Messen, erkenntnistheoretisch betrachtet”, [in] Philosophische Aufsätze, Eduard Zeller zu seinem fünfzigjährigen Doctorjubiläum gewidmet, Fues’ Verlag, Leipzig 1887, pp. 17-52. J. von Neumann, “Method in the Physical Sciences”, [in] The Unity of Knowledge (Ed. L. G. Leary), Doubleday & Co., New York 1955, pp. 157–164. G. H. von Wright, “IX: Deontic Logic: Hypothetical Norms”, [in] Norm and Action – A Logical Inquiry, Routledge & Kegan Paul, 1963, https://www.giffordlectures.org/books/norm-and-action/ixdeontic-logic-hypothetical-norms [2018-07-27]. J. Wagner, P. Mazurek, A. Miekina, R. Z. Morawski, F. F. Jacobsen, T. Therkildsen Sudmann, I. Træland Børsheim, K. Øvsthus, T. Ciamulski, “Comparison of two Techniques For Monitoring of Human Movements”, Measurement, 2017, Vol. 111, pp. 420–431. D. Walton, Methods of Argumentation, Cambridge University Press, New York 2013. T. B. Weber, “The Moral Dilemmas Debate, Deontic Logic, and the Impotence of Argument”, Argumentation, 2002, Vol. 16, No. 4, pp. 459–472.

482

References

J. P. Webster, M. Kaushik, G. C. Bristow, G. A. McConkey, “Toxoplasma Gondii Infection, from Predation to Schizophrenia: Can Animal Behaviour Help us Understand Human Behaviour? “, Journal of Experimental Biology, 2013, Vol. 216, No. 1, pp. 99–112. F. Wegener, Der Alchemist Franz Tausend: Alchemie und Nationalsozialismus, Kulturförderverein Ruhegebiet, BRD 2013. C. Whitbeck, Ethics in Engineering Practice and Research, Cambridge University Press, New York 2011 (2nd edition). N. Wiener, The Human Use of Human Beings: Cybernetics and Society, Free Association Books, London 1989 (first published in 1950). M. P. Wisniewski, R. Z. Morawski, A. Barwicz, “Modeling the Spectrometric Microtransducer”, IEEE Transactions on Instrumentation and Measurement, 1999, Vol. 48, No. 3, pp. 747–752. T. Witkowski, “Lucyfer, którego wymyślił Zimbardo”, W obronie rozumu, 2 lipca 2018 r., https://wp. me/p48IO4-qj [2018-07-06]. L. Wittgenstein, Tractatus logico-philosophicus, Kegan Paul Pub., London 1922. N. Wolchover, “A Fight for the Soul of Science”, Quanta Magazine, December 16, 2015, https:// www.quantamagazine.org/20151216-physicists-and-philosophers-debate-the-boundaries-ofscience/ [2017-05-26]. J. Woleński, “The History of Epistemology”, [in] Handbook of Epistemology (Eds. I. Niiniluoto, M. Sintonen, J. Woleński), Springer, Dordrecht 2004. J. Worrall, “Philosophy of Science: Classic Debates, Standard Problems, Future Prospects”, [in] The Blackwell Guide to the Philosophy of Science (Eds. P. Machamer, M. Silberstein), Blackwell Publishers Ltd, Malden (USA) – Oxford (UK) 2002, pp. 18–36. F. M. Wuketits, Der freie Wille, S. Hirzel Verlag, Berlin 2008. H. Zandvoort, “Requirements for the Social Acceptability of Risk-Generating Technological Activities”, [in] The Ethics of Technological Risk (Eds. L. Asveled, S. Roeser), Earthscan-Sterling, London 2009. J. Ziman, “Why Must Scientists Become More Ethically Sensitive Than They Used To Be?”, Science, 1998, Vol. 282, No. 5395, pp. 1813–1814, http://www.sciencemag.org/cgi/content/full/282/ 5395/1813 [2010-04-23]. J. M. Ziman, “Information, Communication, Knowledge”, Nature, 1969, Vol. 224, No. 5220, pp. 318–324. P. G. Zimbardo, The Lucifer Effect: Understanding How Good People Turn Evil, Random House, New York 2007. P. G. Zimbardo, R. L. Johnson, V. McCann, Psychology: Core Concepts, Books a la Carte Edition, Prentice Hall PTR, USA 2016 (8th edition).

Index of Names

Addison, Thomas 459 Aelius Galenus 28, 30, 32, 146, 455, 458 Agassiz, Jean L. R. 454 Agazzi, Evandro 263, 276, 284 Aggarwal, Charu C. 215 Agricola, Georgius 453 Ajdukiewicz, Kazimierz 271, 353 Alcmaeon 455 Alexander the Great 28 Algoritmi 29 Alhazen 30 Alvarez, Luis 455 Alvarez, Walter 455 Ampère, André-Marie 446 Anders, Günther 435 Anderson, Herbert 449 Anscombe, G. Elizabeth M. 228 Apel, Karl-Otto 258 Appleton, Edward 454 Archimedes 27, 28, 30, 444 Aristophanes 246 Aristotle 3, 18, 19, 27, 28, 29, 30, 41, 115, 187, 230, 235, 248, 251, 264, 267, 351, 455 Arrhenius, Svante A. 451 Asher, Ralph 444 Audi, Robert 229 Aurelius, Marcus 249, 250 Avicenna 30, 45 Avogadro, Amadeo C. 451 Ayer, Alfred J. 166, 229 Baade, Walter 444 Babbage, Charles 220, 333 Bachelard, Gaston 39 Bacon, Francis 42, 43, 59, 264, 269 Bacon, Roger 41 Balmer, Johann J. 451 Banting, Frederick G. 460 Bardeen, John 449 Bartholin, Erasmus 445 Becquerel, Antoine H. 448 Bell, S. Jocelyn 444 Bentham, Jeremy 255, 256, 287 Berger, Hans 457 Berger, Lee R. 455


Berkeley, George 46 Bernard, Claude 45, 459 Bernoulli, Daniel 445 Best, Charles H. 460 Bethe, Hans A. 444 Betzig, Robert. E. 452 Bevan, Michael W. 458 Biot, Jean-Baptiste 35, 443 Bird, Alexander 175 Biruni 30 Black, Joseph 33 Blackburn, Simon 17, 234 Bloch, Felix 452 Bloch, Jonathan 455 Boehm, Hanns-Peter 452 Bohannon, John 373 Bohm, David J. 55 Bohr, Niels H. D. 36, 55, 448, 449 Bokulich, Alisa 87 Boltzmann, Ludwig E. 155, 451 BonJour, Laurence 200 Boring, Edwin G. 92 Born, Max 36 Bosanquet, Bernard 200 Bose, Satyendra N. 36 Boyd, Richard N. 58, 175, 228 Boyle, Robert 42, 450 Bradley, Francis H. 200 Brahe, Tyge O. 31, 171, 443 Brattain, Walter H. 449 Brenner, Anastasios 263 Bridges, Calvin 457 Bridgman, Percy W. 91 Brink, David O. 228 Broad, William J. 220 Broca, Pierre P. 456 Brongniart, Alexandre 453 Brown, Louise J. 460 Brown, Michael E. 444 Brown, Robert 446, 456 Buber, Martin 258 Buckland, William 453 Bull, Francis 460 Bunge, Mario A. 417 Bunsen, Robert W. E. 443, 451



Buridan, Jean 195 Bynum, Terrell W. 418 Campbell, Keith 458 Carlisle, Anthony 451 Carnap, Rudolf 47, 60 Carnot, Nicolas L. S. 446 Carothers, Wallace H. 452 Cartwright, Nancy 147 Cavendish, Henry 446, 450 Cello, Paul 458 Celsius, Anders 445 Celsus 458 Cesalpino, Andrea 456 Chadwick, James 449 Chakrabarty, Ananda M. 392 Chang, Hasok 203 Chapman Andrews, Roy 457 Cherenkov, Pavel A. 449 Chilton, Mary-Dell 458 Chyrowicz, Barbara 232 Clausius, Rudolf J. E. 447 Clerk Maxwell, James 24, 35, 59, 130, 155, 286, 447, 451 Cleve, Per T. 451 Cockroft, John D. 449 Code, Lorraine 231 Cohen, Martin 235 Cohen, Morris R. 55 Collip, James B. 460 Columbus, Christopher 156 Compton, Arthur H. 36, 448 Comte, I. M. Auguste F. X. 46 Conan Doyle, Arthur I. 197 Conklin, Edwin G. 219, 234 Cope, Edward D. 454 Copernicus, Nicolaus 31, 443 Cornell, Eric A. 452 Couper, Archibald S. 451 Cowan, Clyde L. 449 Crick, Francis H. C. 334, 457 Crowfoot Hodgkin, Dorothy M. 457 Curie, Pierre 448 Curl, Robert F. 452 Cuvier, Georges L. N. F. 453

da Vinci, Leonardo 437 d’Alembert, Jean Le Rond 33 Dalton, John 35, 220, 451 Damer, T. Edward 346 Dart, Raymond A. 454 Darwin, Charles R. 24, 28, 35, 52, 122, 456 Davaine, Casimir 459 Davis, Marguerite 460 Dawid, Richard 192 de Broglie, Louis V. P. R. 36 de Coulomb, Charles-Augustin 33 de Fermat, Pierre 124 De George, Richard T. 400 de Lamarck, Jean-Baptiste 33, 35, 456 de Laplace, Pierre-Simon 33 de Lavoisier, Antoine-Laurent 24, 33, 450, 451 de Maupertuis, Pierre L. M. 124 de Mondeville, Henri 458 de Saussure, Horace-Bénédict 453 Democritus 450 Descartes, René 42, 43, 187, 235 Dewar, James 451 Dewey, John 54, 288 Dioscorides 458 Dirac, Paul 36, 449 Disraeli, Benjamin 335 Dobson, Matthew 459 Domagk, Gerhard J. P. 457 Doppler, Christian A. 447 Dornbush, Albert 460 Dubislav, Walter 47 Dubois, M. Eugène F. T. 454 Duggar, Benjamin M. 460 Duhem, Pierre M. M. 159, 171 Ebert, Edward S. 152 Eijkman, Christiaan 459 Einstein, Albert 5, 24, 36, 37, 46, 122, 155, 157, 306, 448, 450 Einthoven, Willem 460 Elbakyan, Alexandra 412 Ellis, George 192 Ellul, Jacques 435 Epimenides 195 Eratosthenes 453


Euclid 6, 27, 28 Euler, Leonhard 33 Ewing, William M. 455 Fabry, Maurice P. A. C. 454 Fahrenheit, Daniel G. 445 Fanning, Shawn 411 Faraday, Michael 35, 446 Feigl, Herbert 47 Feldman, William 460 Fermi, Enrico 36, 449 Ferris, Timothy L. J. 89 Festinger, Leon 273 Feyerabend, Paul K. 53, 159, 190 Feynman, Richard P. 5, 312, 449 Field, Hartry H. 58 Flavell, Richard B. 458 Fleischman, Martin 405 Fleming, Alexander 460 Floridi, Luciano 417 Foucault, Jean B. L. 447 Fourier, Jean-Baptiste J. 446 Fracastero, Girolamo 458 Franck, James 157 Frank, Philipp 47 Franklin, Benjamin 446, 453 Franklin, Rosalind E. 334, 457 Frege, F. L. Gottlob 21, 47, 196 Fresnel, Augustin-Jean 35 Frisch, John 460 Fromm, Erich S. 273 Fuhlrott, Johann C. 454 Galileo Galilei 31, 42, 187, 443, 444, 445 Gallo, Robert C. 221, 461 Galston, Arthur W. 316 Galton, Francis 456 Gamow, George 444 Garfield, Eugene E. 438 Geber 29, 30 Gehlen, Arnold 239 Gell-Mann, Murray 450 George, Alexander 190 Gettier, Edmund L. 23 Gezari, Suvi 444 Giere, Ronald 56 Gilbert, William 445

485

Glashow, Sheldon L. 450 Glennan, Stuart S. 127, 134 Gödel, Kurt F. 47, 194, 199 Godlee, Rickman J. 459 Goldman, Alvin 56 Goldstein, Eugen 447 Golgi, Camillo 456 Goodman, Henry N. 50 Goodwin, Elizabeth 221 Grabski, Maciej W. 372 Gray, Tom 455 Greco, John 231 Grelling, Kurt 47 Gross, David J. 193 Grosseteste, Robert 30 Guettard, Jean-Etienne 453 Gurdon, John B. 457 Gusella, James F. 457 Guthrie, Samuel 459 Haack, Susan 202 Habermas, Jürgen 258, 428 Hahn, Otto 157, 449 Hall, Edwin H. 447 Halley, Edmond 443 Halsted, William 459 Hamilton, William R. 124 Hammurabi 246 Hanks, Thomas C. 455 Hanson, Norwood R. 50, 160 Hansson, Sven O. 191, 192, 303 Hare, Richard M. 229 Harman, Gilbert 173, 174, 200 Harvey, William 32, 42, 146, 456 Haslam, S. Alexander 273 Haworth, W. Norman. 452 Heaviside, Oliver 454 Heezen, Bruce C. 455 Hegel, Georg W. F. 235, 378, 399 Heidegger, Martin 435 Heisenberg, Werner K. 36, 55, 157, 449 Hell, Stefan W. 452 Helmholtz, Hermann L. F. 35, 50, 90, 447, 459 Hempel, Carl G. 47, 48, 49, 120, 123, 124, 125, 127, 144, 200 Henry, Joseph 446 Herophilos 27, 28, 455



Herschel, F. William 443, 446 Hertz, Heinrich R. 447 Hess, Victor F. 448 Hickey, Thomas J. 54 Hilbert, David 36, 47, 196, 198 Hildebrand, Dietrich R. A. 258 Hilleman, Maurice R. 460 Hillier, James 449 Hinshaw, Corwin 460 Hipparchus 443 Hippocrates 27, 28, 293, 458 Hobbes, Thomas 228 Hofmann, Albert 452 Hollingsworth, Rogers 153, 154, 157 Honderich, Ted W. 235 Hook, Sidney 55 Hooke, Robert 445, 453, 456 Hopkins, Frederick G. 460 Hottois, Gilbert 39 Houdry, Eugene J. 452 Hounsfield, Godfrey N. 460 Hubble, Edwin P. 37, 443 Hume, David 44, 50, 126, 166, 194, 242, 253, 255 Humphreys, Paul 143 Hunter, Rick 455 Hutcheson, Francis 253 Hutton, James 453 Huygens, Christiaan 445 Hwang Woo-Suk 332 Ichikawa, Koichi 460 Ingenhousz, Jan 456 Innocent II, Pope 437 Isaacs, Alick 460 Jackson, Frank C. 228 James, William 54 Jansen, Zacharias 443, 445 Jansky, Karl G. 444 Jenner, Edward 459 Johannsen, Wilhelm 457 Johanson, Donald C. 455 Johns, Richard 127 Joliot-Curie, Irene and Frederic 452 Jonas, Hans 294, 435, 436

Joule, James P. 35, 447 Justinian the Great 249 Kamerlingh Onnes, Heike 448 Kanamori, Hiroo 455 Kanare, Howard M. 322 Kant, Immanuel 44, 45, 232, 235, 237, 253, 254, 255, 258, 259, 287, 378, 443 Kekulé, Friedrich A. 451 Kennelly, Arthur 454 Kepler, Johannes 31, 121, 171, 197 Kiikeri, Mika 197 King, W. J. 389 Kinsella, Norman S. 399 Kipping, Frederic S. 451 Kirchhoff, Gustav R. 36, 75, 121, 443, 451 Kitcher, Philip S. 144, 266 Klein, Peter D. 201 Kołakowski, Leszek 441 Koch, H. H. Robert 36, 459 Kolff, Willem J. 460 Kothari, C. R. 183 Krebs, Hans A. 157 Kroto, Harold W. 452 Kuhn, Thomas S. 50, 51, 52, 159 Kuipers, Theodore A. F. 126, 182 Küng, Hans 261 Laennec, René-Théophile-Hyacinthe 459 Lagrange, Joseph-Louis 33 Lakatos, Imre 52, 191 Landsteiner, Karl 457 Latchman, David 307 Laudan, Larry 52, 53, 56, 59, 190 Lawler, Richard 460 Le Texier, Thibault 274 Lehrer, Keith 200 Leibniz, Gottfried W. 43 Leidy, Joseph M. 454 Lemaître, Georges H. J. E. 37, 444 Leucippus 450 Lewin, Kurt 155 Libavius 450 Libby, Willard F. 452 Lightbody, Brian 203 Linares, Jorge E. 436


Lind, James 458 Linnaeus, Carl 456 Lippershey, Hans 443, 445 Lipton, Peter 161 Locke, John 252, 378, 398, 399 Lockhart, Ted W. 300, 412 Løgstrup, Knud E. C. 273 Lomonosov, Mikhail V. 443 Lorentz, Hendrik A. 447 Lower, Richard 458 Lycan, William G. 200 Lyell, Charles 454 MacDonald, Ian 460 MacLaurin, Colin 43 Madey, John 395 Magnus, Albertus 41 Malus, Etienne-Louis 446 Mantell, Gideon A. 454 Mantzavinos, Chrysostomos 144, 145, 146 Marx, Karl 235, 378 Maskelyne, Nevil 446 Maslow, Abraham 154 Matthews, Drummond 455 Mayer, J. Robert 35, 447 McAuliffe, William H. B. 160 McCollum, Elmer V. 460 McDowell, John H. 57 McKay, David S. 444 McMillan, Edwin M. 449 Meitner, Lise 157 Melchiorri, Julian 458 Mellanby, Edward 460 Mendel, Gregor J. 122, 221, 456 Mendeleev, Dmitri I. 35, 451 Merton, Robert K. 269, 270, 271 Meyerhof, Otto F. 157 Michell, John 453 Michelson, Albert A. 447 Miescher, Johann F. 456 Milgram, Stanley 302 Mill, John S. 44, 45, 256 Miller, Seumas 237, 238 Miller, Stanley L. 457 Millikan, Robert A. 221, 333, 448 Milne, John 454 Mitchell, Tom M. 76 Moerner, William E. 452

487

Mohs, Carl F. C. 90 Montagnier, Luc A. 221, 461 Montmarquet, James A. 231 Moor, James H. 420 Moore, George E. 225, 229, 256 Morgan, Thomas 457 Morley, Edward W. 447 Morton, William T. G. 459 Müller, Erwin W. 449 Müller, Hermann 457 Murray, Charles 153, 155, 222 Nagel, Ernest 55 Nagel, Thomas 235 Nemo, Philippe 221, 222 Nernst, W. Hermann 448 Nero 249 Neurath, Otto 47, 200 Newton, Isaac 31, 33, 36, 43, 120, 121, 122, 171, 187, 220, 355, 443, 445 Nicholson, William 451 Nickles, Thomas 193 Nicol, Eduardo 435 Nietzsche, Friedrich W. 203 Niiniluoto, Ilkka M. O. 58 Nirenberg, Marshall W. 457 Norris, Kenneth S. 457 Nosek, Brian 373 Ohm, Georg S. 70, 75, 121, 446 Oldenburg, Henry 358 O'Neil, Catherine H. 421 O'Neil, Onora S. 273 Oort, Jan H. 444 Oparin, Aleksandr I. 457 Oppenheim, Paul 120 Oppenheimer, Julius R. 306 Ørsted, Hans C., 34, 446 Owen, Richard 454 Paracelsus 458 Parker, Sean 411 Pascal, Blaise 445 Pasteur, Louis 36, 221, 459 Pauli, Wolfgang E. 36, 448 Peirce, Benjamin 215 Peirce, Charles S. 46, 50, 54, 160, 161 Peregrinus, Peter 444



Pexton, Mark 136 Pinker, Steven A. 442 Planck, Max K. E. L. 36, 157, 448 Plato 22, 41, 165, 246, 247, 248, 264, 351 Plessner, Helmuth 239 Pliny the Elder 455 Pons, Stanley 405 Pople, John A. 452 Popovic, Mikulas 221 Popper, Karl R. 49, 52, 190, 191, 194, 313 Pravaz, Charles G. 459 Prebus, Albert 449 Priestley, Joseph 450 Psillos, Stathis 116 Ptolemaeus, Claudius 29, 443 Purcell, Edward M. 452 Putnam, Hilary W. 58 Pythagoras 27, 443, 453 Pytheas 453 Quine, Willard V. O. 171, 202, 277 Quiroga, Rodrigo Q. 7 Raman, Chandrasekhara V. 36 Ramon y Cajal, Santiago 456 Ramsay, William 451 Raup, David M. 455 Rawls, John B. 242, 259, 260, 378, 416 Reber, Grote 444 Reichenbach, Hans 47, 59, 149, 194 Reicher, Stephen D. 273 Reichstein, Tadeus 452 Reines, Frederick 449 Resnik, David B. 267, 270 Richter, Charles D. 454 Rinaldi, Arlena H. 428 Ritter, Johann W. 446 Rivera, Thomas 457 Rogers, Kara 63 Röntgen, Wilhelm C. 36, 447 Rosenberg, Alexander 55 Ross, Ronald 460 Rousseau, Jean-Jacques 228 Rüdenberg, Reinhold 449 Ruska, Ernst A. F. 449 Russell, Bertrand A. W. 23, 47, 202 Rutherford, Ernest 36, 448

Saint Augustine 250 Saint Thomas Aquinas 20, 30, 41, 236, 250, 251 Salam, M. Abdus 450 Salk, Jonas E. 460 Salmon, Wesley C. 115, 126, 135, 144 Sanger, Frederick 457 Sauveur, Joseph 445 Savart, Félix 35 Schaaffhausen, Hermann 454 Scheele, Carl 450 Scheler, Max F. 239, 257, 258 Schleiden, Matthias J. 35 Schlick, F. A. Moritz 47 Schön, Jan H. 332 Schönbein, Christian F. 451 Schopenhauer, Arthur 351 Schrödinger, Erwin R. J. A. 36, 127, 449 Schwann, Theodor 35, 456 Schweigger, Johann S. C. 446 Scriven, Michael J. 126 Searle, John R. 57 Seneca, Lucius A. 249, 250 Sepkoski, Joseph J. 455 Serban, Maria 147 Setton, Ralph 452 Shannon, Claude E. 24, 92 Shapley, Harlow 443 Shockley, William B. 449 Sidgwick, Henry 229 Silk, Joseph 192 Singer, Peter A. D. 330, 331 Skinner, Burrhus F. 92 Skłodowska-Curie, Marie 36, 448 Smalley, Richard E. 452 Smith, Adam 242, 253 Smith, William 453 Snow, John 459 Socrates 246, 248, 398 Sommerfeld, Arnold J. 36 Sosa, Ernest 231 Soubeiran, Eugene 459 Spencer, Herbert 241 Spinoza, Baruch 6, 235 Stapel, Diederik A. 332 Steensen, Niels 453 Stefan, Josef 447


Stengers, Isabelle 438 Sterrenberg Barghoorn, Elso B 457 Stevens, Stanley S. 92 Stevenson, Charles L. 229 Stevin, Simon 444 Stokes, George G. 447 Stoll, Arthur 452 Strassman, Friedrich W. 449 Strauss, Hermann 459 Strawson, Galen J. 57 Strevens, Michael 119, 145 Strunz, Karl H. 454 Stumpp, Eberhard 452 Sturgeon, William 446 Sturtevant, Alfred 457 Styczeń, Tadeusz 277 Summerlin, William 221 Suppes, Patrick C. 90 Swanson, William 389 Tarski, Alfred 47 Taseer Hussain, Sayed 455 Tatarkiewicz, Władysław 287 Tausend, Franz S. 40 Thagard, Paul R. 56, 191 Thales 27 Theorell, Axel H. T. 157 Thewissen, Johannes G. M. 455 Thompson, Benjamin 446 Thomson, Joseph J. 448 Thomson, William 81, 447 Tindale, Christopher W. 348 Tischner, Józef 258 Tombaugh, Clyde W. 444 Torricelli, Evangelista 445 Torvalds, Linus 407 Tsiolkovsky, Konstantin E. 447 Tsvet, Mikhail S. 452 Tucker, Steven 455 Turri, John 201 Urey, Harold C. 452, 457 van den Hoven, Jeroen 419 van Fraassen, Bastiaan C. 53, 54, 59, 84, 144 van Helmont, Jan 450 van Leeuwenhoek, Antonie P. 456 van Musschenbroek, Pieter 445 van Wesel, Andries 32

Veksler, Vladimir I. 449 Vening Meinesz, Felix A. 454 Vesalius, Andreas 41, 456 Vine, Frederick 455 von Auenbrugger, Leopold 458 von Fraunhofer, Joseph 443, 451 von Guericke, Otto 445 von Kleist, Ewald G. 445 von Klitzing, Klaus 450 von Laue, Max T. F. 36 von Liebig, Justus 459 von Linné, Carl 33 von Mises, Richard 47 von Neumann, John 36, 66 von Weizsäcker, Carl F. 444 Wade, Nicholas 220 Waksman, Selman A. 460 Wallas, Graham 151 Waller, Augustus D. 459 Walsh, Alan 452 Walzer, Michael 416 Watson, James D. 334, 457 Watt, James 130 Weber, K. E. Max 271, 272 Wegener, Alfred L. 454 Weinberg, Steven 450 Whewell, William 44, 45 Whipple, Fred L. 444 Whitehead, Alfred N. 47 Wieman, Carl E. 452 Wiener, Norbert 130, 417 Wiles, Philip 460 Wilkins, Maurice H. F. 334 William of Ockham 41, 46, 162 Williams, Bernard A. O. 225 Williams, Daniel H. 459 Wilmut, Ian 458 Wimmer, Eckard 458 Withering, William 459 Wittgenstein, Ludwig J. J. 6, 66, 85 Wöhler, Friedrich 451 Wojtyła, Karol J. 258 Wood, Alexander 459 Woodbridge, Frederick J. E. 55 Woodbury, Frank 460 Xenophanes 453 Xenophon 246

489



Yamagiwa, Katsusaburo 460 Young, Thomas 446 Yukawa, Hideki 449 Zabarella, Giacomo 41 Zeeman, Pieter 36 Zeno 249

Zilsel, Edgar 47 Ziman, John M. 344, 362 Zimbardo, Philip G. 273, 274 Zinn, Walter H. 449 zu Guttenberg, Karl-Theodor (Freiherr) 382 zur Hausen, Harald 461 Zwicky, Fritz 444

Index of Terms

a priori information 80, 101, 110, 137, 140, 161, 163, 168, 170 abductive reasoning 16, 17, 27, 65, 76, 128, 137, 141, 196, 197, 198 academic freedom 281 academic science 265, 281 academic values 281 accuracy 69, 71, 74, 79, 82, 84, 93, 94, 130, 138, 142, 155, 165, 168, 169, 174, 183, 185, 267, 294, 300, 303, 353, 361, 372, 436, 438 accuracy of measurement 93, 94, 308 act utilitarianism 256, 257 aesthetics 19, 28, 239 agathic eudaimonism 247 alchemy 30, 39, 190 algebra 29, 170, 208 algebraic equations 71, 77, 81, 162, 170 algorithm 29, 77, 119, 136, 137, 140, 141, 161, 181, 215, 297, 299, 316, 324, 384, 391, 405, 421, 437 allocentric attitude 288 ambition 249, 310 analogue-to-digital converter 99, 112 anatomy 32, 196, 456 anonymity 418, 419 anthropology 2, 147, 152 antirealism 56 applied ethics 227, 339 applied sciences 2, 115, 157, 179, 265, 277, 280, 305, 315, 318 argument 6, 12, 18, 30, 48, 50, 58, 84, 116, 120, 130, 160, 175, 200, 272, 291, 293, 350, 352, 381, 400 argumentum ad hominem 351 arrogance 152 artificial intelligence 76, 136, 147, 160, 194, 438 artificial neural network 77, 138, 170 astrology 29, 39, 52, 190, 191 astronomy 2, 3, 26, 28, 29, 32, 39, 121, 137, 152, 197, 443 astrophysics 4, 135 atom 36, 63, 96, 99, 448, 452 authorship 309, 358, 359, 360, 361, 366, 382 automation of experiments 422 autonomy 154, 155, 417, 436

axiological neutrality of science 265, 266, 306 axiology 19, 238, 239, 240, 263, 266 background knowledge 133, 174, 178, 196, 197 backward causation 128 base quantity 95, 202, 203 basic science 2, 277, 318 basis function 162 Bayesian classifier 77 Bayesian network 140, 143 Bayesianism 59, 60 Beer-Lambert’s law 122 bibliometric indicator 218, 312, 356, 362, 365, 366, 409 bidirectional dilemma 13 binomial nomenclature 33 biochemistry 36, 73, 152, 321 biocybernetics 321 biology v, 2, 3, 28, 35, 36, 37, 38, 45, 51, 52, 61, 119, 122, 123, 133, 135, 137, 147, 152, 153, 202, 241, 334, 408, 455 biomedical engineering 321, 330 biophysics 73, 321 Biot-Savart law 35 blackbody problem 36 black-box model 71, 73, 74, 75, 137, 138 botany 33 Boyle’s law 121 calibration 98, 101, 103, 104, 106, 108, 110, 111, 112, 113, 142, 202, 205, 211 canonical measuring system 98, 99, 102, 104, 106 care and respect 264 case-based approach 289 casuistry 291 categorical imperative 254, 259, 289, 294 categorical norm 226 causal explanation 41, 123, 126, 127, 136, 142 causal mechanistic scheme 144 causality 45, 47, 127, 128, 129, 136, 145, 199, 234, 238, 243, 349 central limit theorem 210 chain deduction 13 chain letters 428 charity 251



chastity 251 chemistry v, 2, 32, 33, 34, 35, 37, 38, 64, 119, 123, 135, 147, 152, 153, 202, 241, 313, 321, 336, 450, 451, 452, 458 Cherenkov effect 449 chromatography 217, 452 citation 8, 355, 356, 358, 359, 360, 362, 365, 366, 388 citizen science 39 classical science 25, 39, 59, 121, 311 classificatory evidence 173 clinical trial 217, 220, 317, 330 co-authorship 362, 363 code of good academic practice 270, 439 code of research ethics 270, 439 cognitive process 41, 47, 225 cognitive science 4, 56, 135, 137, 147, 241, 344 coherence 56, 95, 98, 165, 180, 200, 202, 203, 231, 263, 290 coherence definition of truth 20 coherence measure 180 coherentism 199, 200, 201, 202 combined standard uncertainty 209 commandment of love 250, 261 common wisdom 247 common-sense realism 57 communalism 269 communication channel 92, 93 communication system 92, 93 commutation 13 compatibilism 234 completeness 263, 369 composition 13, 217, 301, 353, 362, 369, 381, 390, 443, 444, 450, 454 compulsory licence 395, 396 computational explanation 136, 141 computer ethics 417, 418 computer science 2, 66, 147, 371 computer-aided explanation 136 conceit 249, 375 conditional probability 60, 106, 178 confidence 152, 209, 250, 268, 279, 361 confidentiality 270, 329, 367, 418, 419, 429, 434, 441 conflict of conscience 282 conflict of interest 263, 268, 278, 279, 280, 281, 311, 329, 367, 368, 370, 376 conflict of obligation 281 conjunction 12, 13, 99, 172, 190, 201, 261

conscience 243, 251, 252, 259, 292, 302, 339 consensus 91, 165, 289 consensus definition of truth 20 consequentialist ethics 230, 232, 233, 236, 239, 286, 293, 299 conservation laws 123 consistency 146, 193, 195, 198, 199, 230, 290 constitutive rules 146 constrained optimisation 298, 303 constructive dilemma 13 constructive empiricism 53, 84 contract 228, 364, 377, 383, 393, 396, 402, 403 conventionalism (metaethical) 228 cookies 419 cooking data 333 copyright 355, 356, 359, 360, 378, 380, 382, 383, 384, 385, 387, 388, 397, 398, 399, 401, 402, 403, 407, 408, 410, 411, 412, 429 corpuscular theory of light 35 correspondence definition of truth 19, 20 corresponding author 360 corroboration 165, 166, 176 cosmology 37, 43, 152 Coulomb’s inverse-square law 33 counterfactual dependence 132 courage 216, 231, 247, 248, 251, 284, 433 creative thought 152 criterion of truth 20 critical rationalism 49, 52 cum hoc, ergo propter hoc 349 curiosity-driven research 438 Cuvier’s law of correlation 122 cyclic causality 128 Darwin’s theory of evolution 52, 122, 456 data mining 76, 215, 364 de Morgan’s theorem 13 deception 191, 309, 310 decision theory 241, 294, 303, 304 deductive-nomological explanation 48, 124, 125 definiendum 10, 11 definiens 10, 11, 22 demarcation problem 189, 190, 193, 265 deontological ethics 230, 231, 232, 233, 239, 253, 286, 287, 299, 433 dependence approach 132


dependent patent 396 derived quantity 95, 97, 202 descriptive ethics 227, 240 descriptive rationalism 268 descriptive realism 268 destructive dilemma 13 determinism 234, 235 developmental psychology 241 diagnosis 27, 119, 425, 458 dialectical materialism 322 difference measure 180 differentia 11 dilemma vi, 13, 233, 283, 284, 285, 286, 287, 303, 321, 329, 339, 363, 366, 375, 377, 434, 441 diligence 182, 221, 251, 279, 325, 341, 344, 369 Dirac delta function 198 Dirac’s razor 122 dishonesty 433 disinterestedness 269 disjunction 12 disjunctive syllogism 13 dominance 152 domino causality 128 double negation 13 droit d’auteur 379 duplicate publication 356, 365 dura lex, sed lex 377 duty 11, 188, 227, 231, 232, 242, 243, 250, 251, 254, 255, 261, 284, 286, 293, 336, 377, 426, 433 dynamical system 81, 198 economic author’s rights 382, 383, 384 economics 2, 41, 63, 66, 232, 241, 242, 382 editorial guidelines 429 education 7, 8, 108, 152, 155, 156, 158, 187, 197, 220, 221, 252, 270, 280, 281, 319, 329, 337, 416, 424, 427, 435, 439, 440 egocentric attitude 288 egoistic values 311 electric dynamo 34 electricity 33, 34, 35, 121, 446 electromechanical devices 339 electron 234, 333, 448, 449 elegance 267 embarrassment 310

emotivism (metaethical) 228, 229 empirical adequacy 54 empiricism 42, 44, 46, 47, 53, 84 engineering v, vi, 7, 17, 28, 34, 73, 89, 123, 127, 131, 138, 140, 150, 181, 198, 202, 217, 251, 283, 288, 294, 311, 317, 318, 321, 330, 338, 339, 340, 371, 435, 439 entitlement 310 entomology 39 enumerative induction 173, 174 epistemic integrity 264 epistemic values 56, 264, 277 epistemological anarchism 53 epistemological naturalism 56, 188 epistemological realism 58 epistemological relativism 53 epistemology 19, 41, 45, 56, 143, 180, 202, 238, 239 epistemology of measurement 204 ergonomics 321, 341 eristic 351, 352, 353 error of measurement 94, 95, 111 ethical intellectualism 246, 248 ethical values 229, 264, 288 ethics of consequences 224, 230 ethics of discourse 220, 258, 259 ethics of duty 224, 230, 254 ethics of justice 259, 260, 292 ethnology of morality 240 ethology 152, 227 eudaimonia 247, 248 eudaimonism 247 eugenic programme 327 euphemistic language 358 euthanasia 327 evidence 22, 26, 37, 43, 46, 49, 50, 51, 53, 54, 59, 60, 122, 126, 133, 151, 158, 165, 166, 167, 171, 172, 173, 175, 176, 178, 179, 180, 181, 190, 192, 194, 196, 197, 203, 217, 230, 250, 320, 323, 354, 365, 373, 421, 444, 454 evidence definition of truth 20 evolutionary psychology 241 existential generalisation 15, 16, 166, 167 existential instantiation 15, 16 existential quantifier 15 expanded uncertainty 209, 210, 211 experimental philosophy 18

experimentation 34, 88, 171, 193, 204, 217, 272, 320, 325, 326, 327, 329, 330, 331, 336, 338, 340, 358, 369, 423, 426 explanandum 48, 116, 120, 124, 125, 145, 146 explanans 48, 116, 120, 124, 125, 145, 196 explanation by specification 126 explanatory games 146 explanatory pluralism 144, 147 explanatory power 56, 115, 135, 162, 174 exportation 13 Extended Liar’s Paradox 195 extended model 69, 86, 111, 112, 142 extensional definition 11 fabrication 279, 308, 332, 333, 335, 339, 344, 354, 374 faith 250, 309, 395 fallacious argument 348, 351 falsification 45, 49, 52, 161, 165, 167, 174, 176, 180, 194, 221, 279, 308, 309, 332, 333, 335, 340, 344, 354, 368, 369, 374 Fat-man Dilemma 284 Fermat’s principle 124 field theory 34, 35 fine arts 28, 378 First law of thermodynamics 447 forging data 333 formal ethics 232, 253, 254, 257, 258, 259, 276 formal logic 12, 43, 46 formal sciences 2, 6 fortitude 230, 251 forward-model-based approach 110, 111 foundationalism 199, 201, 202, 203 foundherentism 202, 203 Free Software movement 406, 407 free will 234, 235, 238, 254 freedom 19, 47, 222, 234, 249, 252, 259, 265, 270, 271, 272, 276, 277, 281, 287, 299, 302, 306, 314, 320, 335, 338, 381, 399, 400, 406, 407, 412, 413, 416, 418, 427, 432, 434, 437, 441 freedom of speech 252, 271, 432 freedom of thought 270, 271 freedom to choose research methods 271 freedom to choose research topics 271, 272

friendliness 249 fruitfulness 51, 263 functional magnetic resonance 240 game theory 66, 241, 294 Gay-Lussac’s law 121 general ethics 7, 223, 224, 227, 245, 326 General Public License 406 general theory of relativity 37, 448 generality 56, 267, 270, 289, 290, 441 generosity 248 genetic fallacy 350 genus 11, 116 geographical indication 377 geology 2, 3, 123, 453 geometrical style 6 geophysics 152 ghost authorship 363 gift authorship 362 golden rule 261, 292 good temper 249 graph theory 33 grey-box model 71, 73, 74 Hall effect 447, 450 Hammerstein-Wiener equation 198 happiness 225, 246, 247, 249, 251, 254, 256, 257, 293, 299, 350 heat 33, 35, 446, 447, 459 hedonistic utilitarianism 256 Heisenberg’s principle of indeterminacy 122 heliocentric model 171 historiosophy 19 hoaxing data 333 Homo sapiens 37, 38, 196, 219, 458 homomorphism 64, 65, 90 honest mistake 310 honesty 3, 216, 223, 227, 263, 291, 325, 347, 352, 426 honorary authorship 362 hope 34, 250, 358, 438 hostility 152, 228, 431 human creativity 151, 155 human enhancement 317 human rights 252, 261, 271, 437 humanities 2, 53, 189, 226, 238, 239, 373, 438 humility 231, 251, 375, 433, 434 hydrostatics 28

hypothesis 35, 37, 49, 50, 59, 60, 150, 151, 153, 155, 156, 159, 160, 161, 162, 163, 166, 167, 171, 172, 173, 174, 176, 177, 178, 179, 181, 183, 193, 196, 199, 216, 222, 230, 235, 238, 307, 309, 333, 354, 364, 366, 444, 447, 450, 451, 455, 457 hypothetical norm 226 hypothetical syllogism 13 hypothetico-deductivism 42, 44 illumination 152 impartiality 258, 433 implication 12, 13, 51, 60, 347 importation 13 incommensurable values 286, 287 incompatible values 286, 287 incrementalism 310 incubation 151, 152, 158 independence 5, 152, 216 indeterminism 234 indirect measurement 102, 205 individual ethics 227 inductive reasoning 16, 17, 42, 43, 44, 45, 49, 50, 137, 166, 173, 193, 194, 196, 197 inductive-statistical explanation 124 inductivism 43, 44 industrial design 377, 381 industrial property 377, 379 industrial science 265, 311 infectious disease 36, 118, 306, 458 inference to the best explanation 160, 173, 174, 175, 176 infinite regression 30 infinitesimal calculus 33 infinitism 201, 202 influence quantity 103, 104, 107 information vi, 2, 3, 4, 5, 6, 8, 11, 17, 24, 32, 45, 50, 56, 67, 70, 71, 77, 80, 81, 84, 91, 92, 93, 94, 98, 99, 101, 104, 107, 108, 110, 119, 138, 140, 141, 142, 145, 155, 158, 159, 160, 161, 163, 168, 170, 173, 177, 178, 180, 181, 182, 184, 187, 192, 193, 198, 200, 204, 205, 207, 209, 211, 212, 218, 221, 223, 224, 237, 239, 242, 252, 257, 265, 266, 269, 275, 276, 277, 288, 289, 292, 297, 298, 303, 305, 306, 307, 309, 310, 317, 321, 324, 325, 328, 335, 338, 339, 343, 344, 345, 346, 350, 352, 355, 356, 357, 360, 362, 367, 374, 376, 377, 378, 381, 382, 385, 387, 388, 391, 394, 396, 397,

399, 408, 409, 411, 415, 416, 417, 418, 419, 422, 423, 424, 425, 426, 427, 428, 429, 432, 433, 434, 437, 453 information ethics 417 information process 343, 344, 345 information sciences 3, 4 information technology 415, 417, 422 information theory 2, 6, 24, 92 information-theoretic approach 92, 93 informativeness 69, 70 informed consent 328, 329, 436 infosphere 325, 337, 367, 418, 429, 430 initial condition 116, 120 injustice 420, 433 input quantity 68, 71, 73, 99, 131 institutional plagiarism 403 instrumentalism 56, 83, 84, 87, 88 integrated circuits 340 integrity 192, 231, 263, 264, 268, 279, 308, 310, 325, 332, 434 intellectual competition 280, 372, 375, 439 intellectual courage 231 intellectual humility 231 intellectual intuition 230 intellectual property 270, 279, 291, 311, 322, 364, 367, 377, 378, 379, 380, 389, 396, 397, 398, 399, 400, 401, 402, 406, 412, 413, 418, 422 intellectual responsibility 231 intellectual virtue 230, 231, 248, 267, 284, 299 intensional definition 11, 66, 225 internet ethics and cyberethics 417 interpolation 80, 161, 162, 167, 168, 169, 170 intersubjective verifiability 216 intimacy 417 introversion 152 intuitionism (metaethical) 228, 229 INUS condition 132 invention 30, 31, 41, 149, 150, 157, 159, 221, 269, 317, 364, 377, 387, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 400, 401, 404, 405, 427, 430 inventive thought 152 inverse problem 71, 163, 179 inverse-model-based approach 110, 111 isomorphism 64, 65 joint method of agreement and difference 45 Joule’s first law 447

jurisprudence 19, 260, 391, 397 justice 227, 230, 247, 250, 251, 257, 260, 286, 292, 299, 302, 378, 416, 420, 433, 436 kairetic scheme 145 Kepler’s law 31, 121 kindness 241, 251, 288 Kirchhoff’s law 75, 121 knowledge v, vi, 1, 2, 3, 8, 9, 18, 19, 21, 22, 23, 24, 25, 26, 30, 32, 34, 39, 41, 42, 43, 45, 46, 47, 48, 54, 55, 56, 57, 58, 69, 84, 85, 91, 115, 116, 118, 119, 121, 122, 128, 133, 138, 148, 149, 150, 151, 155, 156, 158, 165, 170, 172, 173, 174, 176, 177, 178, 179, 180, 181, 187, 188, 189, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 211, 215, 216, 217, 218, 223, 224, 231, 237, 238, 239, 240, 242, 245, 246, 247, 253, 256, 257, 265, 266, 268, 269, 274, 275, 277, 281, 285, 301, 305, 313, 316, 326, 340, 343, 344, 346, 348, 353, 355, 368, 369, 386, 387, 390, 391, 406, 409, 412, 418, 424, 425, 437, 441, 442, 455 knowledge society 265, 406 laboratory 39, 215, 221, 292, 309, 310, 322, 323, 333, 336, 337, 338, 340, 341, 362, 398, 405, 447 laboratory equipment 341 laboratory notebook 322, 323, 333, 337 laboratory personnel 341 Lamarck’s law of evolution 122 law of excluded middle 13 law of non-contradiction 13 law of refraction 122 laws of conservation 121, 123, 136 laws of motion 31, 355, 445 lexical definition 10 life sciences 4, 126, 239, 321, 373 light 17, 30, 33, 35, 43, 64, 96, 102, 103, 109, 118, 123, 129, 135, 156, 157, 171, 172, 188, 221, 240, 250, 391, 394, 443, 444, 445, 446, 447, 448, 449, 451, 452, 458 linear causality 128 linguistics 28, 147, 344 logic 2, 7, 9, 12, 28, 43, 46, 47, 48, 146, 160, 166, 183, 201, 254, 347 logical empiricism 46, 47

logical equations 71 logical positivism 46, 47, 49, 53, 165 log-likelihood ratio measure 180 log-ratio measure 180 love 18, 155, 231, 248, 250, 251, 256, 258, 261 low-pass electrical filter 75, 128, 138 machine learning 76, 137, 138, 140, 141, 142, 194, 215, 294, 344 magnanimity 248, 258 magnetic resonance imaging 119 magnetism 33, 34, 35, 446 manipulationist account 144 market competition 372, 375 Markov network brains 140 mass media 433 material equivalence 13 material ethics of value 257, 258 material implication 13 material property 225, 377, 398, 399, 400, 401 mathematical model 31, 33, 35, 66, 68, 70, 71, 72, 75, 80, 81, 84, 85, 87, 90, 100, 101, 106, 109, 110, 117, 118, 128, 130, 138, 143, 163, 212, 213, 296 mathematical model of conversion 100, 101, 110, 212, 213 mathematical model of reconstruction 100, 101, 110 mathematical modelling v, 8, 10, 33, 42, 63, 65, 66, 67, 69, 70, 71, 73, 79, 80, 81, 83, 84, 85, 108, 113, 134, 143, 148, 163, 197, 203, 204, 215, 294, 298 mathematics vi, 2, 3, 6, 26, 28, 29, 31, 33, 42, 46, 47, 64, 121, 152, 153, 190, 193, 196, 198, 201, 378 Maupertuis’ principle of least action 121, 124 measurand 94, 95, 100, 101, 102, 104, 105, 106, 107, 108, 109, 110, 111, 205, 209, 212, 213 measure of simplicity 163 measurement accuracy 94 measurement bias 94 measurement data 72, 73, 74, 81, 85, 128, 129, 139, 172, 189, 211, 215, 267, 297, 335, 354, 391 measurement error 94, 95, 161, 169, 172, 184 measurement process 92, 105, 212, 213

measurement science v, 81, 88, 89, 93, 108, 163, 202, 203, 321 measurement science and technology 81, 88, 202, 203 measurement uncertainty 94, 95, 98, 111, 113, 172, 184, 202, 204, 205, 208, 211, 212, 217, 321 measuring system 92, 93, 94, 98, 99, 102, 104, 106, 112, 113, 204, 205, 209, 212, 213, 214, 215, 341 mechanics 28, 30, 31, 32, 33, 36, 37, 43, 46, 50, 51, 57, 64, 121, 122, 124, 133, 155, 157, 171, 445, 449 mechanism 35, 116, 119, 127, 133, 134, 135, 136, 142, 144, 178, 179, 216, 419, 456 media and communication technoethics 417 medicine v, 21, 26, 27, 28, 32, 45, 137, 152, 153, 241, 290, 299, 301, 307, 314, 329, 330, 378, 395, 408, 437, 457, 458, 459 Mendel’s laws of inheritance 122 metacognitive thought 152 metaethics 224, 227, 228, 229, 283 meta-invention 221 metallurgy 34 meta-model of measurement 98, 99, 100, 106, 107, 108, 205, 212, 213 metaphysics 18, 19, 28, 46 method of agreement 45 method of concomitant variation 45 method of curves 45 method of difference 45 method of least squares 45 method of means 45 method of residues 45 methodological naturalism 56 metrological traceability 98, 202, 203 microscope 35, 89, 204, 445, 449 minimum canonical measuring system 99 model overfitting 79 modelling-based approach 94 moderate indeterminism 234 moderation 26, 230, 248, 429, 433, 434 modus ponens 13, 14, 16, 120 modus tollens 13, 14, 167 mono-criterial optimisation 296, 297, 298, 299 Monte Carlo method 207, 208, 209 moral author’s rights 382, 384 moral confusion 417

moral dilemma 283, 284, 285, 286, 303, 321, 366, 375, 377 moral good 225, 227, 228, 229, 292, 433 moral history 240 moral integrity 264 moral judgment 226, 227, 228, 229, 253, 305 moral norm 155, 226, 227, 228, 232, 233, 236, 239, 242, 243, 258, 259, 261, 288, 291, 292 moral philosophy 19, 228, 303, 441 moral psychology 240 moral relativism 53, 288 moral responsibility 231, 233, 234, 235, 237, 238, 240, 246, 318, 328 moral sociology 240 moral virtue 230, 231, 248, 284 morality 219, 225, 226, 227, 231, 235, 239, 241, 242, 243, 245, 248, 253, 254, 276, 288, 302, 391 motor 34, 446 multi-criterial optimisation 294, 298 multi-objective programming 294 multivariate adaptive regression splines 140 mutual causality 128, 129 mutual information 93 nanotechnology 220, 318, 416, 452 natural language 10, 15, 48, 66, 197 natural philosophy 32, 38 natural sciences 2, 3, 6, 19, 56, 106, 226, 438 natural selection 24, 35, 51, 435, 456 naturalism 55, 56, 226 naturalism (metaethical) 228 naturalistic fallacy 242 naïve realism 57, 88 necessary and sufficient condition 10, 11, 15, 23, 50, 65, 132, 190 necessity and possibility 238 negation 12, 13 negative rights 252 negligence mistake 309, 313, 333, 350 neo-pragmatism 54 netiquette 427, 428, 430, 431, 432 neuroethics 242 neurology 241 neurophysiology 241 neurosciences 76 new mechanical philosophy 133, 135

new mechanism 133 New Riddle of Induction 50 Newton’s law of gravitation 121 Newton’s law of motion 120, 122, 355 nominal definition 10 nominal definition by description 10 nominal definition by etymology 10 nominal definition by example 10 nominal definition by synonym 10 nominalism 46 nomological dependence 132 nomological explanation 120 normal distribution 206, 210, 211 normative ethics 227, 229 normative naturalism 56 normative rationalism 268 normative realism 268 notoriety 311 novelty 315, 316, 319, 356, 369 nuclear physics 37 number theory 33 numerical methods 33, 163 objectivist 228 objectivity 122, 223, 263, 267, 268, 277, 278, 279, 291, 311, 352, 354, 372, 374, 433 observation v, 16, 17, 18, 24, 26, 27, 28, 31, 34, 37, 41, 42, 43, 44, 48, 49, 50, 57, 59, 60, 68, 85, 89, 145, 156, 157, 159, 163, 166, 171, 172, 173, 174, 176, 177, 178, 180, 181, 185, 188, 189, 190, 192, 197, 203, 204, 215, 255, 261, 267, 286, 287, 290, 293, 312, 316, 320, 323, 350, 362, 401, 420, 421, 426, 435, 443, 450 observation language 48, 50 Ockham’s razor 162 Ohm’s law 70, 75, 121, 446 ontological naturalism 56 ontology 19, 238 Open Access movement 407, 408, 416 Open Source movement 407, 424 open-access journal 371, 373, 408, 409 openness 152, 158, 223, 269, 278, 281, 311, 337, 354, 364, 375, 399, 408, 410 operationalist approach 91 operator of measurand reconstruction 102, 104 optics 30, 32, 43, 122 ordinal scale 90

ordinary differential equations 33, 70, 71, 109, 138, 163 organised criticism 49, 216, 314, 367 organised scepticism 269 originality 8, 267, 315, 323, 368, 369, 381 ornithology 5, 39 ostensive definition 7, 10, 137 outlier 212, 215, 216, 333, 334 output quantity 68, 71, 73, 93 oxygen 24, 33, 73, 125, 134, 450, 458 paleontology 453 paradigm 4, 36, 50, 51, 52, 83, 135, 136, 147, 159, 166, 199, 215, 261, 322, 349, 374, 406 Paradox of the Ravens 49, 50, 181 parametric identification 68, 71, 73, 74, 77, 81, 82, 90, 110, 198 Pareto optimisation 294 partial differential equations 35, 71, 72, 73 particle physics 37, 450 patent 150, 280, 281, 367, 377, 378, 380, 381, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 400, 401, 402, 403, 404, 405, 406, 412 patent application 390, 391, 393, 394, 405 patience 251 peer review 191, 273, 309, 367, 371, 372, 373, 374, 409 perceptual thought 152 perfectionist eudaimonism 248 performance thought 152 periodic table of elements 35 Phariseeism 288 phenomenalism 46 phenomenological approach 420 philosophical anthropology 19, 238, 239 philosophy of education 19 philosophy of history 19 philosophy of information 19 philosophy of language 19 philosophy of law 19, 239, 240 philosophy of medicine 19 philosophy of mind 19 philosophy of politics 19 philosophy of religion 19 philosophy of science v, 1, 2, 3, 4, 5, 7, 8, 19, 41, 42, 46, 49, 50, 53, 54, 55, 58, 61, 89, 115, 116, 143, 149, 152, 156, 159, 176, 182, 184, 187, 190, 219, 224, 238, 240, 263, 276, 435, 441

phlogiston 33, 450 photoelectric effect 36 physics v, 2, 3, 4, 5, 10, 18, 19, 28, 32, 33, 36, 37, 38, 41, 43, 46, 50, 51, 53, 55, 57, 61, 64, 66, 91, 115, 119, 121, 122, 123, 127, 128, 133, 136, 137, 147, 152, 153, 154, 163, 166, 192, 196, 202, 217, 227, 234, 298, 313, 321, 444, 450 physiology 26, 30, 32, 36, 73, 134, 152, 153, 301, 455, 458 plagiarism 268, 279, 308, 309, 333, 340, 354, 355, 356, 357, 362, 369, 371, 381, 387, 388, 389, 403 pluralistic utilitarianism 256 political philosophy 19 positive rights 252 post hoc, ergo propter hoc 349 pragmatic account 144, 145 pragmatic definition of truth 20, 165 pragmatic model of scientific inquiry 46 precision 32, 70, 163, 174, 263, 267, 297, 336, 424 predicate 12, 15, 47, 87, 228 prediction 1, 24, 29, 52, 55, 84, 86, 115, 117, 118, 119, 135, 137, 138, 140, 142, 143, 166, 171, 172, 178, 188, 191, 192, 224, 323, 433, 443, 451 prestige authorship 362 pride 267, 375 primary good 260, 378, 416 primum non nocere 293, 316, 339 principle of autonomy and informed consent 436 principle of benevolence 346, 347 principle of citation 381 principle of consensus 346, 347 principle of distributive and retributive justice 436 principle of double effect 236, 241 principle of equal rights 346 principle of fair equality of opportunities 260 principle of honesty 346, 347 principle of justice 259, 260, 286, 416, 437 principle of least time 124 principle of liberty 259 principle of minimum protection 379 principle of most-favoured-nation treatment 379 principle of national treatment 379

principle of parsimony 43, 162 principle of prudence 436, 437 principle of relevance 346, 347 principle of responsibility 346, 347, 436 principle of stationary action 124 principle of sufficient reason 43 principle-based approach 289 privacy 252, 290, 399, 412, 413, 417, 418, 419, 429 probabilistic dependence 132 probability density function 206, 209, 210, 211 probability distribution 106, 142, 209 problem of infinite regress 199, 201, 202 production approach 133 professional standards 274 progressive coherentism 203 proof 6, 18, 28, 53, 165, 166, 168, 176, 194, 307, 351, 352, 353, 355, 431, 444, 457 proper ambition 249 proper shame 249 propriété industrielle 379 prospective responsibility 235 protoscience 25, 26, 27, 28, 39, 41 prudence 230, 248, 250, 251, 433, 436, 437 pseudoscience 189, 191, 192 psychoanalysis 41, 52 psychology 2, 3, 4, 28, 30, 51, 76, 106, 132, 133, 147, 149, 181, 226, 239, 240, 241, 273, 321, 322, 332, 338, 344, 373, 426 pure sciences 2 qualitative evidence 173 quality of information 181, 424 quantitative evidence 173 quantophrenia 181, 438 quantum mechanics 36, 37, 46, 57, 64, 122, 124, 157 radiation 36, 57, 67, 96, 144, 309, 321, 339, 341, 446, 447, 448, 454 radioactivity 36, 149, 448 random error of measurement 94 random measurement error 94 rational discussion 216, 224, 346, 348, 353 rationalisation 310, 319 rationalism 42, 44, 49, 52, 268 rationality 27, 230, 300, 347, 353 raw result of measurement 100, 101, 102, 104, 106, 111, 212, 213

real definition 10, 11 real definition by cause 11 real definition by description 11 real definition by genus and specific difference 10, 11 real-world data 224, 325 relational causality 128, 129 relational system 64, 67, 90 relativist 228 reliability 2, 3, 79, 83, 108, 182, 184, 185, 305, 321, 338, 354, 407, 418, 421, 424 replicability crisis 373 replicability of research results 216 representational realism 57 representational theory of measurement 90, 91 research ethics 8, 189, 219, 220, 223, 224, 229, 231, 238, 240, 245, 263, 270, 275, 291, 292, 300, 301, 303, 305, 308, 313, 326, 329, 333, 335, 339, 340, 435, 439 research fraud 307 research integrity 231, 263, 264, 308, 310 research methodology v, 16, 42, 45, 84, 88, 147, 175, 182, 183, 184, 188, 216, 217, 219, 260, 268, 305, 313, 320, 321, 331, 337, 340, 354, 366, 369, 374, 422, 435, 439 research misconduct 307, 308, 309, 310, 313, 325, 332, 340 research programme 47, 52, 154, 191, 265, 320 research rituals 5, 325 research tradition 52, 53 responsibility v, 222, 231, 233, 234, 235, 236, 237, 238, 241, 242, 265, 274, 287, 306, 313, 318, 328, 330, 346, 347, 357, 359, 360, 367, 372, 373, 418, 424, 425, 431, 432, 435, 436, 438, 441 result of measurement 89, 98, 100, 101, 102, 104, 105, 106, 108, 109, 110, 111, 120, 211, 212, 213, 215, 217, 300, 323, 333 retaliation principle 246 retrocausality 127 retrospective responsibility 235 reverse plagiarism 388 review 8, 87, 151, 191, 216, 217, 268, 270, 273, 309, 310, 344, 360, 361, 367, 368, 369, 370, 371, 372, 373, 374, 404, 409, 422 reviewer 279, 292, 344, 354, 367, 368, 370, 371, 372, 373, 376, 409 rhetoric 28, 53, 346

rhetorical discussion 346 righteous indignation 249 rigour 92, 263 risk 6, 79, 88, 187, 224, 257, 272, 276, 278, 300, 302, 303, 314, 315, 316, 317, 319, 321, 326, 329, 330, 337, 339, 340, 399, 402, 410, 420, 424, 425, 429, 430, 436 robot ethics 304 rule of conditioning 179 rule of contrapositive 13 rule of detachment 13, 14 rule utilitarianism 256, 257 rules of deductive inference 13, 15, 16, 120 rules of inference 15, 16, 146 rules of representation 146 rules of scope 146, 147 science studies 1 scientific creativity 150, 151, 153, 154, 155, 156, 158 scientific criticism 312, 371, 372 scientific evidence 53, 176, 320 scientific explanation 45, 48, 50, 55, 87, 115, 116, 117, 119, 120, 121, 125, 126, 127, 136, 144, 145, 147, 148, 177, 349 scientific jargon 358 scientific knowledge 1, 2, 8, 21, 23, 24, 39, 41, 42, 47, 49, 55, 57, 58, 115, 118, 149, 150, 151, 172, 181, 187, 188, 189, 192, 193, 194, 196, 198, 199, 203, 215, 216, 218, 240, 269, 275, 277, 305, 313, 326, 343, 344, 353, 391, 412, 437, 442 scientific law 23, 121 scientific method 1, 5, 6, 41, 44, 45, 61, 150, 182, 184, 187, 188, 189, 190, 192, 215, 223, 238, 239, 267, 268, 275 scientific publication 333, 353, 357, 358, 359, 362, 365, 367, 387, 388 scientific realism 54, 56, 57, 58, 83 scientific revolution 31, 37, 38, 50, 51, 52 scientific theory 23, 43, 48, 50, 171, 176, 191 scientist 3, 4, 5, 20, 21, 29, 34, 35, 39, 42, 46, 47, 51, 53, 57, 58, 66, 76, 84, 124, 147, 150, 152, 153, 154, 155, 156, 158, 183, 221, 264, 265, 267, 268, 270, 274, 275, 279, 292, 306, 307, 310, 316, 317, 321, 323, 327, 358, 412, 425, 440 second law of thermodynamics 447 secrecy of correspondence 429

security 38, 155, 271, 287, 303, 317, 338, 341, 363, 395, 418, 427, 429, 430, 434, 437, 441 seismology 39, 453 self-confidence 249 self-education 427 self-expression 399 self-plagiarism 356, 357, 387 self-promotion 273, 311, 348, 426 semantic modelling 66, 68 semantic naturalism 56 sensor 73, 92, 99, 103, 131, 132, 142, 214 serendipity 156 significance 5, 30, 43, 58, 141, 177, 180, 184, 185, 224, 254, 260, 267, 268, 278, 282, 283, 315, 318, 319, 325, 336, 337, 340, 345, 353, 358, 368, 369, 374, 389, 393, 396, 440 simplicity 10, 69, 70, 71, 163, 198, 203, 208, 263, 267 simplification 13 sinus cardinalis 162 slippery slope 293 Slow Science movement 438 smoothing approximation 77 social ethics 227 social philosophy 19 social sciences 2, 4, 6, 45, 56, 106, 226, 373 social values 56, 264, 265, 269, 413, 418 sociology 2, 4, 46, 76, 106, 135, 149, 239, 240, 241, 260, 321, 322, 338, 426 soft determinism 234 soft inheritance 33, 35 spam 137, 409, 430 sparse concept 225 special theory of relativity 37, 123, 448 spectrometry 217 spectrophotometry 217 standard model 37, 193, 450 standard uncertainty 208, 209, 217 Stanford Prison Experiment 273, 274 statistical significance 177, 278, 325 Stefan’s law 447 step function 214 sterilisation 327, 459 stipulative definition 12 stoicism 249 stoics 235, 249, 250, 251 string theory 192, 193 strongly defined measurement 106 structural identification 68, 73, 85, 163, 198

stupid systems 310, 311 substantiation 2, 165, 166, 176, 399 support vector machine 77, 140 symmetrical dilemma 285, 286 syndrome “publish or perish” 312, 366 synthetic data 224, 325 system of standards 95 system under measurement 92, 99, 100, 107, 212, 213, 215, 341 system under modelling 66 systematic error of measurement 94 systematic measurement error 94 systems theory 2, 6 tautology 13 taxonomy 3, 33, 215, 238 technical guidelines 429, 431 technocratic approach 420, 437 technoethics 417 technology v, 1, 2, 4, 19, 21, 25, 26, 35, 39, 81, 88, 118, 130, 146, 152, 173, 202, 203, 218, 245, 266, 277, 303, 304, 319, 378, 389, 403, 415, 417, 418, 420, 422, 435, 436, 437 technoscience 1, 2, 9, 10, 25, 37, 39, 66, 68, 74, 77, 89, 115, 118, 149, 150, 156, 193, 202, 219, 222, 263, 264, 265, 266, 268, 272, 274, 275, 276, 305, 307, 311, 312, 316, 338, 345, 364, 367, 372, 374, 375, 376, 380, 385, 399, 409, 415, 435, 436, 437, 439, 440, 442 temperance 247, 248, 251 temptation 216, 268, 310, 434, 438 tertium non datur 13 theft of data 332, 334, 335 theory of evolution 24, 41, 52, 122, 456 theory of inheritance 33 theory-based approach 289 theory-ladenness of observation 159 thermodynamics 35, 447, 448 time and space 37, 46, 53, 68, 238 tobacco industry 320 topology 33 trade name 377 trademark 377, 412 transducer 74, 92, 99, 112 transparency 263, 409 transposition 13 trimming data 333 Trolley Dilemma 284

trust 180, 192, 204, 270, 271, 273, 274, 275, 276, 287, 337, 348, 364, 417, 428 truth 5, 18, 19, 20, 21, 22, 38, 41, 47, 51, 56, 57, 58, 84, 85, 165, 200, 222, 228, 231, 238, 247, 248, 250, 251, 255, 257, 258, 266, 267, 268, 271, 276, 277, 305, 306, 314, 320, 351, 433, 441 truthfulness 20, 223, 249, 255, 275 type A evaluation 211 type B evaluation 212 uncertainty of measurement 94, 217, 300, 321 unconstrained optimisation 298 undecidable statement 195, 196 underdetermination of theories by evidence 171 unification scheme 144 uniform distribution 210 universal generalisation 15, 16 universal gravitation 32, 120, 127, 171, 445 universal instantiation 15, 16, 17, 120 universal law 48, 124, 125, 254 universal quantifier 15 universalism 269 usability 277, 424 utilitarian values 264, 267, 277 utilitarianism 219, 232, 233, 256, 257, 259, 276, 287, 292, 299, 399 utility 79, 230, 253, 255, 256, 257, 276, 314, 391, 407 utility model 377

validation 77, 165, 166, 176, 218, 326, 337, 355 values of pleasure 257 values of the holy 257 values of the mind 257 values of utility 257 values of vitality 257 vanity 311 veil of ignorance 259 veracity 23, 60, 84, 166, 178, 195, 229, 239, 335, 343, 344, 352, 354, 424 verification 42, 47, 49, 86, 152, 165, 166, 167, 176, 177, 218, 326, 335, 354, 363, 425, 443 verification principle of meaning 166 virtue 20, 49, 133, 219, 225, 227, 230, 231, 246, 247, 248, 250, 251, 263, 267, 292, 299, 311, 426 virtue ethics 230, 231, 248, 292 virtue reliabilist 231 virtue responsibilist 231 Volterra series 74 wavelet network 140 weakly defined measurement 106 Weltethos 261 white-box model 71, 79, 138 Whittaker-Shannon formula 162, 170 wisdom 11, 18, 230, 245, 246, 247, 248, 249, 250, 267, 344, 375 wittiness 249 work safety 341 zoology 28, 33, 152, 456