Learning to transduce

Human beings at their most rational are able to generalise from examples. If you stand under the shower head before turning on the tap then it is likely you will be doused with cold water before it gets to a temperature you are happy with. That if-then rule is a generalisation born of a few shower experiences, probably at an early age. Learning a rule from examples in this way is known as “induction”, and is one of the major models philosophers use to explain empirical science. Under this model, from many observations scientists formulate and test rules, formulas or algorithms that enable them to predict what will happen when they make new but similar observations.

But in everyday life most of us bypass the formulation of such rules. We simply avoid standing under the shower before turning on the taps, a practice derived from experience, but without the mediation of a rule. When we behave in that apparently rule-less way we are operating with transductive inference. We also remain open to exceptions and other circumstantial factors that would complicate any rule, such as air temperature, whether you are at the beach, in public or in private, have just been in a sauna, are in a hurry, etc.

The mathematician Vladimir Vapnik pioneered the use of the term “transductive inference” from a statistical perspective.

The discovery of transductive inference and its advantages over inductive inference is not just a technical achievement, but a breakthrough in the philosophy of generalization. Until now, the traditional method of inference was the inductive-deductive method, where one first defines a general rule using the available information, and then deduces the answer using this rule. That is, one goes from particular to general and then from general to particular.
In transductive mode one provides direct inference from particular to particular, avoiding the ill-posed part of the inference problem (inference from particular to general) [465].

Transductive inference assumes no intermediate process by which rules are induced, and from which cases can then be derived. OpenAI’s ChatGPT-3 apparently deploys transductive inference in its repertoire of techniques to generate new sentences in response to input sentences. I asked it to comment on my example of the shower. I was pleased to see that it endorsed the example, and provided further elaboration. Here is how it responded to my input.

Your example illustrates both inductive and transductive inference. Inductive inference is involved in the process of learning a general rule from examples, as you describe. For example, after experiencing the phenomenon of being inundated with cold water when standing under the shower head before turning on the tap, a person might learn the general rule that it is best to avoid standing under the shower head before turning on the tap. This rule is a generalization based on past experiences, and is an example of inductive inference.
Transductive inference is also involved in this example, because the person is using their past experiences to make a prediction about a specific situation (i.e., whether it is safe to stand under the shower head before turning on the tap). In other words, the person is using their past experiences to make a prediction about the label (i.e., whether it is safe or not) of a previously unseen example (i.e., the specific situation of standing under the shower head before turning on the tap).

So, in this example, both inductive and transductive inference play a role in the person’s reasoning and decision-making. Inductive inference is involved in the process of learning a general rule from examples, while transductive inference is involved in the process of using past experiences to make a prediction about a specific situation.

The chatbot restated my example shower rule as “the general rule that it is best to avoid standing under the shower head before turning on the tap.” It also introduced the idea of a label “safe” or “not” as part of a classification schema. In the final paragraph I think ChatGPT-3 might be confusing the fact that the example features both inductive and transductive inference (trying to establish the difference between them) with the proposition that someone about to have a shower uses both inductive and transductive reasoning. I didn’t want to imply that.

I’ve referred to responses generated by automated neural network systems in previous blog posts, and in some of our earlier work, drawing on the research of D.E. Rumelhart and J.L. McClelland in the 1980s on language processing. I asked ChatGPT-3 if their seminal article “On learning the past tense of English verbs” adopted a transductive model.

The term “transductive learning” was not coined until later, so it is not used in Rumelhart and McClelland’s article “On learning the past tense of English verbs”. However, their model can be described as transductive because it involves learning to predict the past tense form of verbs based on a set of labeled examples of verb forms, and is able to generalize from the training examples to make predictions about the past tense form of verbs it has not seen before.

In general, transductive learning refers to a type of learning in which the learner is given a set of labeled training examples and uses them to make predictions about the labels of previously unseen examples. It is distinguished from inductive learning, which involves using a set of examples to make generalizations about the underlying patterns or rules in the data.
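The distinction the chatbot draws can be made concrete in a few lines of code. The sketch below uses hypothetical toy data for the shower example: the inductive route first extracts a general rule (a temperature-onset threshold) from the labelled experiences and then applies it; the transductive route labels the new case directly from its nearest labelled neighbour, with no intermediate rule. All data and function names here are illustrative assumptions, not anyone’s published method.

```python
# Labelled past experiences: (seconds since the tap was turned on, comfortable?)
training = [(0, False), (5, False), (10, False), (30, True), (40, True), (60, True)]

def induce_threshold(examples):
    """Inductive step: derive a general rule (a threshold) from the examples."""
    coldest_ok = min(t for t, ok in examples if ok)
    warmest_not = max(t for t, ok in examples if not ok)
    return (coldest_ok + warmest_not) / 2  # rule: comfortable once t exceeds this

def inductive_predict(rule, t):
    """Deductive step: apply the general rule to the new case."""
    return t > rule

def transductive_predict(examples, t):
    """Particular to particular: label the new case from its nearest neighbour,
    without ever formulating a rule."""
    nearest = min(examples, key=lambda ex: abs(ex[0] - t))
    return nearest[1]

rule = induce_threshold(training)          # the induced rule: 20.0 seconds
print(inductive_predict(rule, 25))         # True (25 exceeds the threshold)
print(transductive_predict(training, 25))  # True (nearest experience is 30s, comfortable)
```

On this tiny one-dimensional example the two routes agree, but the processes differ: the inductive predictor could discard the examples once the rule is formed, while the transductive predictor must keep the particulars and consult them afresh for each new case — which is also what leaves it open to the circumstantial exceptions mentioned above.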

Transduction (and the “transduction problem”) is just one of several aspects of the current generation of AI communications tools. Sequencing (recurrent neural networks), vector representations and encoder-decoder architectures are others — for later discussion.


  • Coyne, Richard. “Design reasoning without explanation.” AI Magazine 11, no. 4 (1990): 72-80. 
  • Coyne, Richard. “Modelling the emergence of design descriptions across schemata.” Environment and Planning B: Planning and Design 18 (1991): 427-458. 
  • Coyne, Richard, Sidney Newton, and Fay Sudweeks. “A connectionist view of creative design.” In Modeling Creativity and Knowledge-Based Creative Design, edited by John S. Gero, and Mary Lou Maher, 177-210. Hillsdale, NJ: Lawrence Erlbaum, 1993.
  • Dennett, Daniel C. Consciousness Explained. Boston, MA: Little, Brown & Co, 1991. 
  • Newton, Sidney, and Richard Coyne. “Impact of connectionist systems on design.” Knowledge-Based Systems 5, no. 1 (1992): 66-81. 
  • Rumelhart, D.E., and J.L. McClelland. “On learning the past tense of English verbs.” In Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Volume 2: Psychology and Biological Models, edited by J. L. McClelland, and D. E. Rumelhart, 216-271. Cambridge, MA: MIT Press, 1987.
  • Vapnik, Vladimir. Estimation of Dependences Based on Empirical Data. Trans. Samuel Kotz. New York: Springer Science+Business Media, 1982. 
  • Vaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. “Attention Is All You Need.” In 31st Conference on Neural Information Processing Systems, 1-15. Long Beach, CA, USA, 2017.


  • The shower example probably came to me after a session at my local leisure centre.
  • I recall reading how a scholar reviewing the famous shower scene in Hitchcock’s Psycho remarked how unlikely it was that Marion Crane would stand under the shower before she turned on the taps.
  • Featured image is from MidJourney, prompted by “showerhead beach transducer.”
  • The dictionary definition of “to transduce” is to transfer energy or information from one medium to another, which is different to the use of the term by Vapnik.
  • As far as I can tell, Vapnik makes no reference to C.S. Peirce, but transduction fits nicely as a fourth mode of logical inference, adding to Peirce’s deduction, induction and abduction. See post: on being a detective.
  • Also see Kahneman, Daniel. Thinking, Fast and Slow. London: Penguin, 2011 and my explanation at Why experts are better than algorithms.


  1. Jon Awbrey says:

    Inference from particulars to particulars was classically known as analogy.
    Aristotle analyzed analogy as a mixed syllogism combining an inductive step with a deductive step.

    See Aristotle’s “Paradigm”

  2. Jon Awbrey says:

    As it happens, I just opened a discussion about related issues on the Academia.edu site here.
