Bottom-up processing refers to an approach to cognition in which incoming stimuli are interpreted solely from their basic elements, without using prior knowledge or context to make meaning.
It is the opposite of top-down processing, which utilizes prior knowledge and expectations to ‘fill in the gaps’ and infer meaning of our environments (Goldstein, 2019).
Bottom-up processing can help us interpret information with an open mind, unaffected by confirmation bias, and allows for close scrutiny of the data.
The cognitive process starts from the stimulus and works its way up. You’re essentially processing details about what you’re currently experiencing, without connecting it to your past experiences or future predictions.
|  | Top-Down Processing | Bottom-Up Processing |
|---|---|---|
| Definition | “[Top-down processing] depends on the context and higher level knowledge, in addition to relying on the input of sensory stimuli” (Macmillan, 2012). | “[Bottom-up processing] is defined as processing, where input from the environment is used to build a more significant perception or understanding” (Macmillan, 2012). |
| Example in Visual Processing | If you see a vague, ambiguous image but expect it to be a dog, you will likely interpret it as a dog. | You first notice individual elements of a scene (colors, lines, shadows), which then come together to form a meaningful image. |
| Strengths | Helps us develop quick understandings of concepts based on past experiences. | Less likely to produce errors based on assumptions or predictions, as it is based strictly on sensory input. |
| Weaknesses | While it can be quick, it can also lead to errors if predictions or assumptions are incorrect. | Extremely time-consuming, often unrealistic, and can miss the ‘bigger picture’. |
Bottom-up Processing Examples
1. Solving a Jigsaw Puzzle
When you look at a jigsaw puzzle piece, bottom-up processing involves focusing explicitly on the details of the piece – the colors, the shape, and the patterns. You’ll then try to find other pieces it could fit with. A top-down approach, on the other hand, would use a picture of the finished puzzle, allowing you to narrow down exactly where the piece should go based on a pre-conceived idea of the final image.
2. Reading a Book
When you are reading a text for the first time, especially as a new reader, you tend to use bottom-up processing. This involves focusing on the individual words and grammar rules rather than scanning the full sentence and predicting meaning from context cues. Once you identify each word, in order, your brain starts to piece together the meaning of the sentence (Rayner & Reichle, 2010). For instance, in reading the sentence “Cats love their humans more than their fishy treats”, you would initially process each word individually (‘Cats’, ‘love’, ‘their’, ‘humans’, ‘more’, ‘than’, ‘their’, ‘fishy’, ‘treats’).
3. Listening to a New Language
Imagine you are listening to a conversation in a foreign language, such as Spanish. With no prior knowledge of Spanish, you perceive speech sounds (phonemes), the basic units of sound in the language. Your understanding wouldn’t be based on your previous knowledge or expectations of Spanish or ability to predict the next words, but rather on the simple, raw auditory information you’re receiving (Golestani, Rosen, & Scott, 2009). Your comprehension in this scenario is purely a result of the sensory input you’re receiving.
4. Identifying an Unfamiliar Object
Suppose you are hiking and you find an unusual object you have never seen before. Bottom-up processing would start with the visual input of the object. You might look at its shape, its color, and any other detail unique to the object. Then, you start to integrate these features to form a coherent image. This process is primarily data-driven, since your understanding is directed by the incoming data alone, as opposed to your pre-existing knowledge (Firestone & Scholl, 2016). However, if you expect to see a bear and then spot a shape on the horizon, top-down processing might lead you to decide it’s a bear, because that’s what your brain expects to see.
5. Deciphering a Word from Scrambled Letters
When encountering a series of scrambled letters like “ETVEN”, you would start by looking at the individual letters and their arrangement. By analyzing each letter and its position in relation to the others, you might connect specific letters together to form a coherent word. Using bottom-up processing, you may finally decipher the word “EVENT” (Anderson, Fincham, Schneider, & Yang, 2012). The cognitive process relies exclusively on the raw sensory input and doesn’t predict the word based on prior experience or knowledge.
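This letter-by-letter strategy can be sketched as a toy program. The word list and function name below are illustrative assumptions, not a model from the cited research: the sketch simply builds every ordering of the raw letters and keeps only those orderings that match a known word, with no guessing from context.

```python
from itertools import permutations

# Toy stand-in for a reader's mental lexicon (assumption: a real
# lexicon would be vastly larger).
LEXICON = {"EVENT", "NEVER", "TENSE"}

def decipher(scrambled: str) -> list[str]:
    """Bottom-up sketch: generate every arrangement of the raw
    letters, then keep only the arrangements that are real words."""
    candidates = {"".join(p) for p in permutations(scrambled)}
    return sorted(candidates & LEXICON)

print(decipher("ETVEN"))  # -> ['EVENT']
```

Note the brute-force character of the search: because nothing is predicted from prior context, every arrangement must be checked, which mirrors why bottom-up processing is described as slow and laborious.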
6. The Coke vs Pepsi Test
In a famous social experiment, a YouTuber blindfolded people, asked them to sip two colas – one Coke, the other Pepsi – and had them guess which was which. Because the participants were blindfolded, they couldn’t rely on top-down processing (i.e., if they could see they were drinking from the Coke can, they might say ‘oh yeah, see, I can taste the extra sweetness in this Coke’). Instead, they had to focus on the base sensory experiences to build up their determination of which was which.
7. Recognizing a Face in a Crowd
With bottom-up processing, the identification of a familiar face in a crowd starts with the perception of individual elements of that face: the eyes, the nose, the mouth, and any other unique features. Your initial recognition isn’t dependent on a familiar feeling or any inherent expectation, but rather on the detection of their individual facial features amid a sea of faces (Sormaz, Watson, Smith, Young, & Andrews, 2013).
8. Detecting a Smell from Cooking Food
As you enter a room where someone is cooking, you might detect a range of individual aromas. You’ll recognize each smell separately: the aromatic garlic, the tangy citrus, the spicy pepper. Processing those smells individually, without automatically putting them into the context of a known dish, is bottom-up processing in action (Stevenson, 2010). You’re breaking down sensory information into its fundamental parts in order to process it. By contrast, if you smell oregano and instantly think “Pizza!”, you’re using top-down processing, where you fill in the other smells and make an assumption about what’s being cooked based on prior knowledge.
9. Hearing a Specific Note in Music
When listening to a new musical piece, you might focus on the pitch, duration, loudness, and timbre of individual notes. Deconstructing a whole piece into individual sounds without invoking the melody or rhythm that you expect involves bottom-up processing. For instance, hearing a B# note and recognizing it as an individually distinct sound from the composition exhibits the bottom-up mechanism in play (Krumhansl, 2010).
10. Identifying Color in a Painting
When examining a painting, you would begin noticing individual colors before seeing the entire image. You would identify basic colors like blue, red, yellow, or green separately. The whole image comes into being from these small, individual parts – a prototypical example of bottom-up processing (Fairhall & Ishai, 2007). For instance, before recognizing Van Gogh’s “Starry Night”, you might simply see the distinct blue background and swirling yellows.
11. Recognizing a Voice in Noise
Amidst a noisy cocktail party, you might distinctly recognize a familiar voice. This happens through bottom-up processing, where auditory receptors single out individual sounds amidst environmental noise, and only then do you recognize the familiar voice (Best, Marrone, Mason, & Kidd, 2012). For instance, picking out your friend’s voice amidst the murmur of party chatter demonstrates this cognitive process.
12. Locating a Constellation in the Night Sky
When stargazing, you might find familiar constellations by first identifying individual stars and then piecing them together into recognizable patterns. Bottom-up processing helps you to see individual stars first and their connections to each other, and then to integrate that information into the form of a constellation. For example, before recognizing The Big Dipper, you might start by identifying individual stars like Alioth or Mizar.
13. Seeing a Pattern in Wallpaper
While looking at an intricately patterned wallpaper, you would initially notice the subtle variations in color, shape, and contrast. Each of these individual elements forms the basis of your deciphering of the pattern via bottom-up processing. Prior to recognizing a recurring floral pattern, you might first observe the individual flowers depicted in the design.
14. Tasting Sweetness in a Fruit
Each bite of a fresh strawberry engages your taste receptors to identify basic taste qualities: sweet, sour, bitter, salty, and umami. Bottom-up processing enables you to register the natural sweetness before identifying the fruit itself. You’re recognizing discrete sensory information before ascribing it to a familiar experience – in this case, the taste of a strawberry (Spence, 2015).
15. Identifying Individual Instruments in an Orchestra
While listening to a symphony, you might single out the melodious trill of a flute or the rich resonance of a cello. You pick out these individual instruments from the medley of music through bottom-up processing – focusing on one small component of the total sensory input at a time (Krumhansl, 2010). For instance, amidst a symphony, you discern the soft rhythm of the bass drum before recognizing its part in the overall composition.
16. Spotting a Bird in a Tree
Noticing a small finch sitting in a cherry tree demonstrates bottom-up processing. Your eyes first register the individual elements of color and shape distinct to the bird before identifying it as a finch amid the complex backdrop of the tree. The recognition is based on the raw visual input from the sight of the bird (Firestone & Scholl, 2016). For instance, you might recognize the finch’s striking yellow plumage amid the tree’s green foliage.
17. Locating a Book in a Library
In a library, you could find a specific book by first identifying sections marked by genres, then finding the shelf tagged with the author’s name, and finally locating the book by its title on the book spine. This step-by-step organization of sensory data to reach your goal demonstrates bottom-up processing (Anderson, Fincham, Schneider, & Yang, 2012). For example, before finding “Moby Dick” by Herman Melville, you would first find the Classics section, then the shelf labeled “Melville.” If, on the other hand, you’re familiar with the library and have an intuitive sense of where a book might be, you’d be relying on top-down processing, using your prior knowledge and experience to guide your behaviors.
18. Discerning Different Voices in a Conversation
When listening to a radio talk show with multiple speakers, you would distinguish separate voices from the multitude. This separation of voices, each with its unique tone and pitch, is a classic example of bottom-up processing (Best, Marrone, Mason, & Kidd, 2012). You might first identify the soothing timbre of the host’s voice amidst the cacophony of the collective discussion.
19. Academic Research
As an academic researcher, you let the stimuli and data shape your interpretations, and try to avoid allowing prior knowledge or experience to bias them. Bottom-up processing is therefore essential for academics, and can help them reach surprising conclusions, ‘guided by the data’. This is perhaps best exemplified in grounded theory research, where not even a hypothesis is presented – rather, you look at the data and let it help you generate a theory over time.
According to cognitive psychology, bottom-up processing is essential in everyday life to come to well-reasoned empirical conclusions. However, it has its weaknesses – namely, it’s slow and laborious. So, we also use top-down processing on a regular basis to make predictions and speed up our thinking. In such instances, we’re sacrificing precision for speed.
Anderson, J. R., Fincham, J. M., Schneider, D. W., & Yang, J. (2012). Using brain imaging to track problem solving in a complex state space. NeuroImage, 60(1), 633-643. doi: https://doi.org/10.1016/j.neuroimage.2011.12.025
Best, V., Marrone, N., Mason, C. R., & Kidd, G. (2012). The influence of non-spatial factors on measures of spatial release from masking. The Journal of the Acoustical Society of America, 131(4), 3103-3110. doi: https://doi.org/10.1121/1.3693656
Fairhall, S. L., & Ishai, A. (2007). Effective connectivity within the distributed cortical network for face perception. Cerebral Cortex, 17(10), 2400-2406. doi: https://doi.org/10.1093/cercor/bhl148
Firestone, C., & Scholl, B. J. (2016). Cognition does not affect perception: Evaluating the evidence for “top-down” effects. Behavioral and Brain Sciences, 39, e229. doi: https://doi.org/10.1017/S0140525X15000965
Goldstein, B. (2019). Cognitive psychology: Connecting mind, research, and everyday experience. Australia: Cengage Learning.
Goldstein, E. B. (2019). Sensation and Perception. London: Cengage.
Golestani, N., Rosen, S., & Scott, S. K. (2009). Native-language benefit for understanding speech-in-noise: The contribution of semantics. Bilingualism: Language and Cognition, 12(3), 385-392. doi: https://doi.org/10.1017/S1366728909990150
Krumhansl, C. L. (2010). Plink: “Thin slices” of music. Music Perception: An Interdisciplinary Journal, 27(5), 337-354. doi: https://doi.org/10.1525/mp.2010.27.5.337
Macmillan, N. A. (2012). Detection theory: A user’s guide. Cambridge: Cambridge University Press.
Rayner, K., & Reichle, E. D. (2010). Models of the reading process. Wiley Interdisciplinary Reviews: Cognitive Science, 1(6), 787-799. doi: https://doi.org/10.1002/wcs.68
Sormaz, M., Watson, D. M., Smith, W. A. P., Young, A. W., & Andrews, T. J. (2013). Genetic contributions to face recognition ability and its component processes. Psychological Science, 31(10), 1422-1431.
Spence, C. (2015). Multisensory flavor perception. Cell, 161(1), 24-35.
Stevenson, R. J. (2010). An initial evaluation of the functions of human olfaction. Chemical Senses, 35(1), 3-20.
Dr. Chris Drew is the founder of the Helpful Professor. He holds a PhD in education and has published over 20 articles in scholarly journals. He is the former editor of the Journal of Learning Development in Higher Education.