Communication through smell / Pheromones
http://pubs.acs.org/cen/critter/plantsbugs.html
Plants to Bugs: Buzz Off! Plants Use Volatile Signaling Compounds to Fend Off Attack and Possibly Warn Nearby Plants
by Sophie Wilkinson, Chemical & Engineering News, June 30, 2001.
Plants may seem passive in the face of an attack by insects, but they aren't. In fact, plants can marshal elegant defenses in order to do battle with their enemies. And they just might be able to inform their neighbors that they're in danger.
http://discovermagazine.com/2002/apr/featplants
Talking Plants: Plants have more than thorns and thistles to protect themselves—they can cry for help
by Sharman Apt Russell, Discover Magazine, 04.01.2002
Baldwin, a biologist and the director of the Molecular Ecology Department at the Max Planck Institute for Chemical Ecology in Jena, Germany, has stationed his equipment here to launch a new study of how plants defend themselves—a question he has pursued for 20 years. He and his colleagues are using chemical sensors to investigate plant communications: cries for help, invitations, even warnings, each in the form of odor molecules that float past human noses unnoticed. The harder biologists look for these signals, the more they find. They have already discovered that plants can send chemical cues to repel insect enemies, as well as signals that attract allies—other insects that are pleased to eat the insects eating the plant. But that is only the start of a more complex scenario, for Baldwin and others have also found that nearby plants can listen in to this conversation and gear up their own defenses.
Communication through behaviour
http://www.sciencedirect.com
Lateralization of Olfaction in the Honeybee Apis mellifera
Pinar Letzkus, Willi A. Ribi, Jeff T. Wood, Hong Zhu, Shao-Wu Zhang and Mandyam V. Srinivasan;
Current Biology, Vol.16 (14) 25 July 2006, pp.1471-1476
Summary: Lateralization of function is a well-known phenomenon in humans. The two hemispheres of the human brain are functionally specialized such that certain cognitive skills, such as language or musical ability, conspecific recognition, and even emotional responses, are mediated by one hemisphere more than the other. Studies over the past 30 years suggest that lateralization occurs in other vertebrate species as well. In general, lateralization is observed in different sensory modalities in humans as well as vertebrates, and there are interesting parallels. However, little is known about functional asymmetry in invertebrates and there is only one investigation in insects. Here we show, for the first time, that the honeybee Apis mellifera displays a clear laterality in responding to learned odors. By training honeybees on two different versions of the well-known proboscis extension reflex, we demonstrate that bees respond to odors better when they are trained through their right antenna. To our knowledge, this is the first demonstration of asymmetrical learning performance in an insect.
http://www.sciencedirect.com
Do honeybees have two discrete dances to advertise food sources?
by Kathryn E. Gardner, Thomas D. Seeley and Nicholas W. Calderone; Animal Behaviour, 17 Sept 2007
The honeybee, Apis mellifera, dance language, used to communicate the location of profitable food resources, is one of the most versatile forms of nonprimate communication. Karl von Frisch described this communication system in terms of two distinct dances: (1) the round dance, which indicates the presence of a desirable food source close to the hive but does not provide information about its direction and (2) the waggle dance, which indicates the presence of a desirable food source more than 100 m from the hive and provides information about both its distance and its direction. The view that honeybees have two discrete recruitment dances has been widely accepted since its inception in the 1920s. However, there are few detailed examinations of the behavioural parameters of dances over the range of food-source distances represented by round dances and waggle dances. Here, we show that both the round dance and the waggle dance convey information about distance and direction and that there is no clear switch between the two. We conclude that it is most meaningful to view the round and waggle dances as the ends of a continuum and that honeybees have just one adjustable recruitment signal: the waggle dance.
http://www.sciencedirect.com
Social Learning in Insects — From Miniature Brains to Consensus Building
by Ellouise Leadbeater and Lars Chittka; Current Biology, Vol. 17 (16), 21 Aug 2007, pp. R703-R713
Communication and learning from each other are part of the success of insect societies. Here, we review a spectrum of social information usage in insects — from inadvertently provided cues to signals shaped by selection specifically for information transfer. We pinpoint the sensory modalities involved and, in some cases, quantify the adaptive benefits. Well-substantiated cases of social learning among the insects include learning about predation threat and floral rewards, the transfer of route information using a symbolic ‘language’ (the honeybee dance) and the rapid spread of chemosensory preferences through honeybee colonies via classical conditioning procedures. More controversial examples include the acquisition of motor memories by observation, teaching in ants and behavioural traditions in honeybees. In many cases, simple mechanistic explanations can be identified for such complex behaviour patterns.
Communication by body language
http://journals.royalsociety.org/content/lxv587332871j3g7/
Rapid facial mimicry in orangutan play.
by Marina Davila Ross, Susanne Menzler, Elke Zimmermann; Biology Letters, Vol 4, No 1, Feb. 23, 2008, pp. 27-30, DOI 10.1098/rsbl.2007.0535
Abstract: Emotional contagion enables individuals to experience emotions of others. This important empathic phenomenon is closely linked to facial mimicry, where facial displays evoke the same facial expressions in social partners. In humans, facial mimicry can be voluntary or involuntary, whereby the latter mode can occur as rapidly as within 1 s. Thus far, studies have not provided evidence of rapid involuntary facial mimicry in animals. This study assessed whether rapid involuntary facial mimicry is present in orangutans (Pongo pygmaeus; N=25) for their open-mouth faces (OMFs) during everyday dyadic play. Results clearly indicated that orangutans rapidly mimicked OMFs of their playmates within 1 s. Our study revealed the first evidence of rapid involuntary facial mimicry in non-human mammals. This finding suggests that fundamental building blocks of positive emotional contagion and empathy that link to rapid involuntary facial mimicry in humans have homologues in non-human primates. Keywords: orangutan, rapid facial mimicry, involuntary responses, emotional contagion, empathy.
Vocal Communication
King's Psychology Network: Animal Learning, Language, and Cognition, Current Research Projects; http://www.psyking.net/id31.htm
Jane Goodall Institute; http://www.janegoodall.org/
Sue Savage-Rumbaugh at the Language Research Center - GA State University; http://www.gsu.edu/~wwwlrc
Lynn Miles and the Chantek Foundation; http://www.chantek.org/
Rob Shumaker and the Orang-utan Language Project at the National Zoo; http://natzoo.si.edu/News/shumaker.htm
Francine "Penny" Patterson, The Gorilla Foundation and Project Koko; http://www.koko.org/
Roger and Deborah Fouts and the Chimpanzee and Human Communication Institute (CHCI) at Central Washington University; http://www.cwu.edu/~cwuchci/welcome.html
Matsuzawa Tetsuro and the Primate Research Institute, Kyoto University, Japan; http://www.pri.kyoto-u.ac.jp/index.html, http://www.pri.kyoto-u.ac.jp/ai/index-E.htm
Irene Pepperberg and the Alex Foundation; http://www.alexfoundation.org/
The N'Kisi Project; http://www.sheldrake.org/nkisi
The Language and Culture of Crows; http://www.crows.net/
Dr. Ken Marten and Project Delphis - Dolphin Cognition Research; http://www.earthtrust.org/delphis.html
John C. Lilly and Interspecies Communication Between Man and Dolphin; http://deoxy.org/lilly.htm
Rupert Sheldrake; http://www.sheldrake.org/
Stimulus control and auditory discrimination learning sets in the bottlenose dolphin. Herman LM, Arbeit WR. J Exp Anal Behav. 1973 May;19(3):379-394.
The learning efficiency of an Atlantic bottlenose dolphin was evaluated using auditory discrimination learning-set tasks. Efficiency, as measured by the probability of a correct response on Trial 2 of a new discrete-trial, two-choice auditory discrimination problem, reached levels comparable to those attained by advanced species of nonhuman primates. Runs of errorless problems in some cases rivaled those reported for individual rhesus monkeys in visual discrimination learning-set tasks. This level of stimulus control of responses to new auditory discriminanda was attained through (a) the development of a sequential within-trial method for presentation of a pair of auditory discriminanda; (b) the extensive use of fading methods to train initial discriminations, followed by the fadeout of the use of fading; (c) the development of listening behavior through control of the animal's responses during projection of the auditory discriminanda; and (d) the use of highly discriminable auditory stimuli, by applying results of a parametric evaluation of discriminability of selected acoustic variables. Learning efficiency was tested using a cueing method on Trial 1 of each new discrimination, to allow the animal to identify the positive stimulus before its response. Efficiency was also tested with the more common blind baiting method, in which the Trial 1 response was reinforced on only a random half of the problems. Efficiency was high for both methods. The overall results were generally in keeping with expectations of learning capacity based on the large size and high degree of cortical complexity of the brain of the bottlenose dolphin.
PMID: 16811670 [PubMed - as supplied by publisher]
Auditory delayed matching in the bottlenose dolphin. Herman LM, Gordon JA. J Exp Anal Behav. 1974 Jan;21(1):19-26.
A bottlenose dolphin, already highly proficient in two-choice auditory discriminations, was trained over a nine-day period on auditory delayed matching-to-sample and then tested on 346 unique matching problems, as a function of the delay between the sample and test sounds. Each problem used new sounds and was from five to 10 trials long, with the same sound used as the sample for all trials of a problem. At each trial, the sample was projected underwater for 2.5 sec, followed by a delay and then by a sequence of two 2.5-sec duration test sounds. One of the test sounds matched the sample and was randomly first or second in the sequence, and randomly appeared at either a left or right speaker. Responses to the locus of the matching test sound were reinforced. Over nine, varying-sized blocks of problems, the longest delay of a set of delays in a block was progressively increased from 15 sec initially to a final value of 120 sec. There was a progressive increase across the early blocks in the percentage of correct Trial 1 responses. A ceiling-level of 100% correct responses was then attained over the final six blocks, during which there were 169 successive Trial 1 responses bracketed by two Trial 1 errors (at 24- and 120-sec delays). Performance on trials beyond the first followed a similar trend. Finally, when the sample duration was decreased to 0.2 sec or less, matching performance on Trial 1 of new problems dropped to chance levels.
PMID: 4204143 [PubMed - indexed for MEDLINE]
Discrimination of auditory temporal differences by the bottlenose dolphin and by the human. Yunker MP, Herman LM. J Acoust Soc Am. 1974 Dec;56(6):1870-5.
PMID: 4443487 [PubMed - indexed for MEDLINE]
Underwater frequency discrimination in the bottlenosed dolphin (1-140 kHz) and the human (1-8 kHz). Thompson RK, Herman LM. J Acoust Soc Am. 1975 Apr;57(4):943-8.
PMID: 1133262 [PubMed - indexed for MEDLINE]
Bottle-nosed dolphin: double-slit pupil yields equivalent aerial and underwater diurnal acuity. Herman LM, Peacock MF, Yunker MP, Madsen CJ. Science. 1975 Aug 22;189(4203):650-2.
In bright daylight, and at best viewing distances, the bottlenosed dolphin resolves visual gratings approximately equally well in air and in water. Aerial resolution improves with increased viewing distance, while underwater resolution improves with decreased viewing distance. The double-slit pupil overcomes the gross myopia in air measured by ophthalmoscope and produces the indicated effects of viewing distance.
PMID: 1162351 [PubMed - indexed for MEDLINE]
Memory for lists of sounds by the bottle-nosed dolphin: convergence of memory processes with humans? Thompson RK, Herman LM. Science. 1977 Feb 4;195(4277):501-3.
After listening to a list of as many as six discriminably different 2-second sounds, a bottle-nosed dolphin classified a subsequent probe sound as either "old" (from the list) or "new." The probability of recognizing an old probe was close to 1.0 if it matched the most recent sound in the list and decreased sigmoidally for successively earlier list sounds. Memory span was estimated to be at least four sounds. Overall probabilities of correctly classifying old and new probes corresponded closely, as if recognition decisions were made according to an optimum maximum likelihood criterion. The data bore many similarities to data obtained from humans tested on probe recognition tasks.
PMID: 835012 [PubMed - indexed for MEDLINE]
Comprehension of sentences by bottlenosed dolphins. Herman LM, Richards DG, Wolz JP. Cognition. 1984 Mar;16(2):129-219.
PMID: 6540652 [PubMed - indexed for MEDLINE]
Vocal mimicry of computer-generated sounds and vocal labeling of objects by a bottlenosed dolphin, Tursiops truncatus. Richards DG, Wolz JP, Herman LM. J Comp Psychol. 1984 Mar;98(1):10-28.
A bottlenosed dolphin (Tursiops truncatus) was trained to mimic computer-generated "model" sounds, using a whistle mode of vocalization. Prior to training, the whistle sounds of this dolphin were limited to a few stereotyped forms, none of which resembled the model sounds. After training, high-fidelity imitations were obtained of model sounds having (a) moderately or widely swept, slow-rate frequency modulation (1-2 Hz), (b) narrowly or moderately swept frequency modulation at moderate to rapid rates (3-11 Hz), (c) square-wave frequency transitions, and (d) unmodulated (pure-tone) waveforms. New models, not heard previously, could be mimicked immediately, often with good fidelity, including mimicry of amplitude variation that had not been explicitly reinforced during training. Subsets of familiar models were mimicked with high reliability in repeated tests. In additional training, control of the mimic response was transferred from the acoustic model to objects shown the dolphin (e.g., a ball or a hoop) so that, in effect, the dolphin gave unique vocal labels to those objects. In a test of accuracy and reliability of labeling, correct vocal labels were given on 91% of 167 trials comprised of five different objects presented in random order. The dolphin's ability for vocal mimicry compared favorably with that of the more versatile mimic birds, and it contrasted sharply with the apparent lack of vocal mimicry ability in terrestrial mammals other than humans. The ability to label objects vocally was similar to abilities shown for some birds and similar, in principle, to abilities of great apes trained in visual languages to label objects through gestures or other visual symbols.
PMID: 6705501 [PubMed - indexed for MEDLINE]
Reporting presence or absence of named objects by a language-trained dolphin. Herman LM, Forestell PH. Neurosci Biobehav Rev. 1985 Winter;9(4):667-81.
Referential "reporting" was defined as the transmission of information about the presence or absence of symbolically-referenced real-world objects. In Experiment 1 two bottlenosed dolphins (Tursiops truncatus), trained in earlier studies to carry out instructions conveyed by imperative sentences expressed in artificial gestural or acoustic languages, each gave spontaneous indications that an object referenced in an imperative was absent from their tank. In Experiment 2 the dolphin tutored in the gestural language was taught to make explicit reports of object absence by pressing a "No" paddle in response to imperatives referencing an absent object. Absence was reported correctly on 84% of 97 missing-object probes inserted at random intervals among 598 sentences referring to objects that were present. Reports were typically made after active search of the tank for an average of 15.0 sec. False reports, that objects present were absent, were few (7.5%). In Experiment 3, the dolphin was taught an interrogative sentence form that enabled us to ask direct questions about the presence or absence of specific objects. Responses by the dolphin on the No paddle indicated absence, while responses on a "Yes" paddle indicated presence. From one to three objects were shown the dolphin and then placed in the tank in a discrete-trial procedure. In response to the interrogative, reports of object presence or absence were better than 91% correct with a single object in the tank and either that object or some other object referenced; accuracy declined to 72-78% correct with three objects present, but was still well above chance. Several lines of evidence suggested that the dolphin was attempting to remember which objects it had been shown, rather than conducting an active environmental search as in Experiment 2. The memory strategy became less efficient as the number of objects to be remembered increased.
Overall, the results evidenced the language-trained dolphin's understanding of references to present or absent objects, its ability to inventory its environment to seek information about those objects, and its ability to report its obtained knowledge to others.
PMID: 4080284 [PubMed - indexed for MEDLINE]
Bottlenosed dolphin and human recognition of veridical and degraded video displays of an artificial gestural language. Herman LM, Morrel-Samuels P, Pack AA. J Exp Psychol Gen. 1990 Jun;119(2):215-30.
Kewalo Basin Marine Mammal Laboratory, University of Hawaii, Honolulu 96814.
2 bottlenosed dolphins proficient in interpreting gesture language signs viewed veridical and degraded gestures via TV without explicit training. In Exp. 1, dolphins immediately understood most gestures: Performance was high throughout degradations successively obscuring the head, torso, arms, and fingers, though deficits occurred for gestures degraded to a point-light display (PLD) of the signer's hands. In Exp. 2, humans of varying gestural fluency saw the PLD and veridical gestures from Exp. 1. Again, performance declined in the PLD condition. Though the dolphin recognized gestures as accurately as fluent humans, effects of the gesture's formational properties were not identical for humans and dolphin. Results suggest that the dolphin uses a network of semantic and gestural representations, that bottom-up processing predominates when the dolphin's short-term memory is taxed, and that recognition is affected by variables germane to grammatical category, short-term memory, and visual perception.
PMID: 2141354 [PubMed - indexed for MEDLINE]
Responses to anomalous gestural sequences by a language-trained dolphin: evidence for processing of semantic relations and syntactic information. Herman LM, Kuczaj SA 2nd, Holder MD. J Exp Psychol Gen. 1993 Jun;122(2):184-94.
Department of Psychology, University of Hawaii, Manoa, Honolulu 96814.
This study examined the responses of a bottlenosed dolphin (Tursiops truncatus) to "normal" (semantically and syntactically correct) sequences of gestures and to anomalous sequences given within an artificial gestural language highly familiar to the animal. Anomalous sequences violated the semantic rules or syntactic constraints of the language. The dolphin discriminated anomalous from normal sequences in that rejections (refusals to respond) occurred to some anomalous sequences but never to normal sequences. Rejections rarely occurred, however, if the anomalous sequence contained a subset of gestures that would comprise a normal unit if joined together. Such units were typically perceived by the dolphin and responded to even if they consisted of gestures that were not sequentially adjacent. All semantic elements of a sequence were processed by the dolphin in relation to other elements before the dolphin organized its final response. The results show the importance of both semantic properties and semantic relations of the referents of the gestures and of syntactic (ordering) constraints in the dolphin's interpretations of the anomalies.
PMID: 8315399 [PubMed - indexed for MEDLINE]
Sensory integration in the bottlenosed dolphin: immediate recognition of complex shapes across the senses of echolocation and vision. Pack AA, Herman LM. J Acoust Soc Am. 1995 Aug;98(2 Pt 1):722-33.
Kewalo Basin Marine Mammal Laboratory, University of Hawaii, Honolulu 96814, USA.
In matching-to-sample tests, a bottlenosed dolphin (Tursiops truncatus) was found capable of immediately recognizing a variety of complexly shaped objects both within the senses of vision or echolocation and, also, across these two senses. The immediacy of recognition indicated that shape information registers directly in the dolphin's perception of objects through either vision or echolocation, and that these percepts are readily shared or integrated across the senses. Accuracy of intersensory recognition was nearly errorless regardless of whether the sample objects were presented to the echolocation sense and the alternatives to the visual sense (E-V matching) or the reverse, with samples presented to the visual sense and alternatives to the echolocation sense (V-E matching). Furthermore, during V-E matching, the dolphin was equally facile at recognition whether the sample objects exposed to vision were "live," presented in air in the real world, or were images displayed on a television screen placed behind an underwater window. Overall, the results suggested that what a dolphin "sees" through echolocation is functionally similar to what it sees through vision.
PMID: 7642811 [PubMed - indexed for MEDLINE]
Seeing through sound: dolphins (Tursiops truncatus) perceive the spatial structure of objects through echolocation. Herman LM, Pack AA, Hoffmann-Kuhnt M. J Comp Psychol. 1998 Sep;112(3):292-305.
Psychology Department, University of Hawaii, Honolulu, USA. lherman@hawaii.edu
Experiment 1 tested a dolphin (Tursiops truncatus) for cross-modal recognition of 25 unique pairings of 8 familiar, complexly shaped objects, using the senses of echolocation and vision. Cross-modal recognition was errorless or nearly so for 24 of the 25 pairings under both visual to echoic matching (V-E) and echoic to visual matching (E-V). First-trial recognition occurred for 20 pairings under V-E and for 24 under E-V. Echoic decision time under V-E averaged only 1.88 s. Experiment 2 tested 4 new pairs of objects for 24 trials of V-E and 24 trials of E-V without any prior exposure of these objects. Two pairs yielded performance significantly above chance in both V-E and E-V. Also, the dolphin matched correctly on 7 of 8 1st trials with these pairs. The results support a capacity for direct echoic perception of object shape by this species and demonstrate that prior object exposure is not required for spontaneous cross-modal recognition.
PMID: 9770316 [PubMed - indexed for MEDLINE]
Dolphins (Tursiops truncatus) comprehend the referential character of the human pointing gesture. Herman LM, Abichandani SL, Elhajj AN, Herman EY, Sanchez JL, Pack AA. J Comp Psychol. 1999 Dec;113(4):347-64.
Department of Psychology, University of Hawaii at Manoa, Honolulu, USA
The authors tested a dolphin's (Tursiops truncatus) understanding of human manual pointing gestures to 3 distal objects located to the left of, to the right of, or behind the dolphin. The human referred to an object through a direct point (Pd), a cross-body point (Px), or a familiar symbolic gesture (S). In Experiment 1, the dolphin responded correctly to 80% of Pds toward laterally placed objects but to only 40% of Pds to the object behind. Responding to objects behind improved to 88% in Experiment 2 after exaggerated pointing was briefly instituted. Spontaneous comprehension of Pxs also was demonstrated. In Experiment 3, the human produced a sequence of 2 Pds, 2 Pxs, 2 Ss, or all 2-way combinations of these 3 to direct the dolphin to take the object referenced second to the object referenced first. Accuracy ranged from 68% to 77% correct (chance = 17%). These results established that the dolphin understood the referential character of the human manual pointing gesture.
PMID: 10608559 [PubMed - indexed for MEDLINE]
Generalization of 'same-different' classification abilities in bottlenosed dolphins. Mercado E, Killebrew DA, Pack AA, Mácha IV, Herman LM. Behav Processes. 2000 Aug 17;50(2-3):79-94.
Kewalo Basin Marine Mammal Laboratory, 1129 Ala Moana Boulevard, 96814, Honolulu, HI, USA
Two bottlenosed dolphins taught to classify pairs of three-dimensional objects as either same or different were tested with novel stimulus sets to determine how well their classification abilities would generalize. Both dolphins were immediately able to classify novel pairs of planar objects, differing only in shape, as same or different. When tested on sets of three objects consisting of either all different objects or of two identical objects and one different object, both dolphins proved to be able to classify 'all different' sets as different and 'not all different' sets as same, at levels significantly above chance. These data suggest that dolphins can use knowledge about similarity-based classification strategies gained from previous training to perform successfully in a variety of novel same-different classification tasks. Visual classificatory abilities of dolphins appear to be comparable to those that have been demonstrated in primates.
PMID: 10969185 [PubMed - as supplied by publisher]
The object behind the echo: dolphins (Tursiops truncatus) perceive object shape globally through echolocation. Pack AA, Herman LM, Hoffmann-Kuhnt M, Branstetter BK. Behav Processes. 2002 May 28;58(1-2):1-26.
Kewalo Basin Marine Mammal Laboratory, 1129 Ala Moana Boulevard, 96814, Honolulu, HI, USA
Two experiments tested a bottlenosed dolphin's ability to match objects across echolocation and vision. Matching was tested from echolocation sample to visual alternatives (E-V) and from visual sample to echolocation alternatives (V-E). In Experiment 1, the dolphin chose a match from among three alternative objects that differed in overall (global) shape, but shared several 'local' features with the sample. The dolphin conducted a right-to-left serial nonexhaustive search among the alternatives, stopping when a match was encountered. It matched correctly on 93% of V-E trials and on 99% of E-V trials with completely novel combinations of objects despite the presence of many overlapping features. In Experiment 2, a fourth alternative was added in the form of a paddle that the dolphin could press if it decided that none of the three alternatives matched the sample. When a match was present, the dolphin selected it on 94% of V-E trials and 95% of E-V trials. When a match was absent, the dolphin pressed the paddle on 74% and 76%, respectively, of V-E and E-V trials. The approximate 25% error rate, which consisted of a choice of one of the three non-matching alternatives in lieu of the paddle press, increased from right to center to left alternative object, reflecting successively later times in the dolphin's search path. A weakening in memory for the sample seemed the most likely cause of this error pattern. Overall, the results gave strong support to the hypothesis that the echolocating dolphin represents an object by its global appearance rather than by local features.
PMID: 11955768 [PubMed - as supplied by publisher]
Bottlenosed dolphins (Tursiops truncatus) comprehend the referent of both static and dynamic human gazing and pointing in an object-choice task. Pack AA, Herman LM. J Comp Psychol. 2004 Jun;118(2):160-71.
Kewalo Basin Marine Mammal Laboratory, Honolulu, HI 96814, USA. pack@hawaii.edu
The authors tested 2 bottlenosed dolphins (Tursiops truncatus) for their understanding of human-directed gazing or pointing in a 2-alternative object-choice task. A dolphin watched a human informant either gazing at or pointing toward 1 of 2 laterally placed objects and was required to perform a previously indicated action to that object. Both static and dynamic gaze, as well as static and dynamic direct points and cross-body points, yielded errorless or nearly errorless performance. Gaze with the informant's torso obscured (only the head was shown) produced no performance decrement, but gaze with eyes only resulted in chance performance. The results revealed spontaneous understanding of human gaze accomplished through head orientation, with or without the human informant's eyes obscured, and demonstrated that gaze-directed cues were as effective as point-directed cues in the object-choice task.
PMID: 15250803 [PubMed - indexed for MEDLINE]
Song copying by humpback whales: themes and variations. Mercado E 3rd, Herman LM, Pack AA. Anim Cogn. 2005 Apr;8(2):93-102. Epub 2004 Oct 15.
Department of Psychology, University at Buffalo, SUNY, Park Hall, Buffalo, NY 14260, USA. emiii@buffalo.edu
Male humpback whales (Megaptera novaeangliae) produce long, structured sequences of sound underwater, commonly called "songs." Humpbacks progressively modify their songs over time in ways that suggest that individuals are copying song elements that they hear being used by other singers. Little is known about the factors that determine how whales learn from their auditory experiences. Song learning in birds is better understood and appears to be constrained by stable core attributes such as species-specific sound repertoires and song syntax. To clarify whether similar constraints exist for song learning by humpbacks, we analyzed changes over 14 years in the sounds used by humpback whales singing in Hawaiian waters. We found that although the properties of individual sounds within songs are quite variable over time, the overall distribution of certain acoustic features within the repertoire appears to be stable. In particular, our findings suggest that species-specific constraints on temporal features of song sounds determine song form, whereas spectral variability allows whales to flexibly adapt song elements.
PMID: 15490289 [PubMed - indexed for MEDLINE]
Acoustic properties of humpback whale songs. Au WW, Pack AA, Lammers MO, Herman LM, Deakos MH, Andrews K. J Acoust Soc Am. 2006 Aug;120(2):1103-10.
Marine Mammal Research Program, Hawaii Institute of Marine Biology, University of Hawaii, P.O. Box 1106, Kailua, Hawaii 96734, USA. wau@hawaii.edu
A vertical array of five hydrophones was used to measure the acoustic field in the vertical plane of singing humpback whales. Once a singer was located, two swimmers with snorkel gear were deployed to determine the orientation of the whale and position the boat so that the array could be deployed in front of the whale at a standoff distance of at least 10 m. The spacing of the hydrophones was 7 m with the deepest hydrophone deployed at a depth of 35 m. An eight-channel TASCAM recorder with a bandwidth of 24 kHz was used to record the hydrophone signals. The location (distance and depth) of the singer was determined by computing the time of arrival differences between the hydrophone signals. The maximum source level varied between individual units in a song, with values between 151 and 173 dB re 1 microPa. One of the purposes of this study was to estimate potential sound exposure of nearby conspecifics. The acoustic field determined by considering the relative intensity of higher frequency harmonics in the signals indicated that the sounds are projected in the horizontal direction despite the singer being canted head downward anywhere from about 25 degrees to 90 degrees. High-frequency harmonics extended beyond 24 kHz, suggesting that humpback whales may have an upper frequency limit of hearing as high as 24 kHz.
PMID: 16938996 [PubMed - indexed for MEDLINE]
Dolphin (Tursiops truncatus) echoic angular discrimination: effects of object separation and complexity. Branstetter BK, Mevissen SJ, Pack AA, Herman LM, Roberts SR, Carsrud LK. J Acoust Soc Am. 2007 Jan;121(1):626-35.
Psychology Department, University of Hawaii, Manoa, Honolulu, Hawaii 96822-2294, USA. branstet@hawaii.edu
A bottlenose dolphin was tested on its ability to echoically discriminate horizontal angular differences between arrays of vertically oriented air-filled PVC rods. The blindfolded dolphin was required to station in a submerged hoop 2 m (radial distance) from the stimuli and indicate whether an array with two rods (S+) was to the right or the left of a single rod (S-). The angular separation between the two rods (θw) was held constant within each experiment while the angle between the S+ and S- stimuli (θb) varied to produce angular differences (Δθ = θb − θw) ranging from 0.25 to 4 degrees. In experiment I, θw was maintained at 2 degrees and in experiment II, θw was maintained at 4 degrees. Resulting 75% correct thresholds (method of constant stimuli) were 1.5 and 0.7 degrees, respectively. The two main findings of this study are: (1) decreasing the number of targets does not aid in localization, and (2) increasing the space between the rods enhances localization. Taken as a whole, the experiments suggest dolphins have a well-developed ability to resolve spatial information through sonar.
PMID: 17297816 [PubMed - indexed for MEDLINE]
The dolphin's (Tursiops truncatus) understanding of human gazing and pointing: knowing what and where. Pack AA, Herman LM. J Comp Psychol. 2007 Feb;121(1):34-45.
Dolphin Institute, Honolulu, HI 96814, USA. pack@hawaii.edu
The authors tested whether the understanding by dolphins (Tursiops truncatus) of human pointing and head-gazing cues extends to knowing the identity of an indicated object as well as its location. In Experiment 1, the dolphins Phoenix and Akeakamai processed the identity of a cued object (of 2 that were present), as shown by their success in selecting a matching object from among 2 alternatives remotely located. Phoenix was errorless on first trials in this task. In Experiment 2, Phoenix reliably responded to a cued object in alternate ways, either by matching it or by acting directly on it, with each type of response signaled by a distinct gestural command given after the indicative cue. She never confused matching and acting. In Experiment 3, Akeakamai was able to process the geometry of pointing cues (but not head-gazing cues), as revealed by her errorless responses to either a proximal or distal object simultaneously present, when each object was indicated only by the angle at which the informant pointed. The overall results establish that these dolphins could identify, through indicative cues alone, what a human is attending to as well as where.
PMID: 17324073 [PubMed - indexed for MEDLINE]
Cetaceans have complex brains for complex cognition. Marino L, Connor RC, Fordyce RE, Herman LM, Hof PR, Lefebvre L, Lusseau D, McCowan B, Nimchinsky EA, Pack AA, Rendell L, Reidenberg JS, Reiss D, Uhen MD, Van der Gucht E, Whitehead H. PLoS Biol. 2007 May;5(5):e139.
Neuroscience and Behavioral Biology Program, Emory University, Atlanta, Georgia, United States of America. lmarino@emory.edu
PMID: 17503965 [PubMed - indexed for MEDLINE]