The world appears three-dimensional (3D) even though the depth dimension is lost in the projection onto the retina. The visual system exploits multiple cues that carry information about the 3D structure of the world and combines the information they convey. Most research rests on three assumptions: cues specify depth, they are processed independently, and they are combined linearly to yield a single depth map. I present data showing that the visual system does not conform to these assumptions. Cues are informative about only some aspects of the 3D shape of objects: some specify depth, while others carry information about surface orientation, curvature, or local shape. My hypothesis is that cues are combined independently for each 3D property, and that the computation is not derived from any unified representation.

I asked participants to make judgments about monocularly viewed, computer-generated convex shapes. Participants compared two such shapes with respect to the magnitude of a single 3D property: depth, curvature, or orientation at a given point. One surface was kept constant, while the shape of the other was either varied between trials or dynamically adjusted by the participants.

The results indicate that even when shapes defined by motion, texture, or shading were perceived as having the same curvature, they were not necessarily perceived as having the same depth or orientation at the specified points. Three-dimensional shapes reconstructed from judgments of different shape properties differed significantly from one another. Because cues carry different information about these 3D properties, I conclude that the properties must be represented independently. Because properties estimated from single-cue stimuli predict the same property in cue-combined stimuli, cue combination must be independent for each property. I propose a new approach to cue combination that accounts for all of the observed differences.
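For concreteness, the linear-combination assumption criticized above is usually formalized as a reliability-weighted average of single-cue depth estimates, with weights inversely proportional to each cue's variance. The sketch below illustrates that standard model; the function name and the numeric values are illustrative assumptions, not data from these experiments.

```python
def combine_linear(estimates, variances):
    """Reliability-weighted linear combination of single-cue estimates.

    Each cue i contributes an estimate d_i with variance sigma_i^2;
    its weight is (1/sigma_i^2) normalized over all cues. This is the
    single-depth-map model that the reported results argue against.
    """
    reliabilities = [1.0 / v for v in variances]
    total = sum(reliabilities)
    weights = [r / total for r in reliabilities]
    combined = sum(w * d for w, d in zip(weights, estimates))
    return combined, weights

# Hypothetical example: a reliable cue (variance 1.0) estimating 10.0
# and a noisier cue (variance 4.0) estimating 14.0.
depth, weights = combine_linear([10.0, 14.0], [1.0, 4.0])
print(round(depth, 2))  # the reliable cue dominates the combined estimate
```

Under the hypothesis advanced here, such a combination would instead be carried out separately for each 3D property (depth, orientation, curvature) rather than once over a unified depth map.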