HK1165544B - 3D layering of map metadata
- Publication number: HK1165544B (application HK12106239.4A)
- Authority: HK (Hong Kong)
- Prior art keywords: map, metadata, view, space, altitude
Description
Technical Field
The invention relates to 3D layering of map metadata.
Background
Computer-aided map navigation tools have achieved wide acceptance. A user can find addresses or directions with map navigation tools available at various websites. Some software programs allow a user to navigate over a map, zoom in toward the ground or zoom out away from the ground, or move between different geographical positions. In automobiles, GPS devices have provided basic road navigation for many years. More recently, map navigation software for cell phones and other mobile computing devices has allowed users to zoom in, zoom out, and move around on maps that show details about geographic features, towns, cities, county and state locations, roads, and buildings.
Map navigation tools typically present metadata about map features as if "baked into" a flat, two-dimensional ("2D") view of the map. For example, in a top-down map view, text labels are written in place over road or image details, so that the text labels are effectively presented at the same ground level as the roads or image details. This can lead to excessive visual complexity due to the density of information to be displayed at any given view level of the map.
To reduce the density of detail, many map navigation tools hide or reveal metadata depending on the view level of the map. For example, if the view is close to a small-scale feature such as a building, text labels for the feature are revealed, but if the view is far from the small-scale feature, text labels for the feature are hidden. Conversely, text labels for large-scale features such as countries or states are shown at high-level views but hidden at views closer to ground level. At any given view level, however, the rendered text labels are still flattened into the planar 2D view of the map. Also, the transition between different view levels may be abrupt when one 2D view is replaced by the next 2D view showing different metadata. As a result, the viewer may lose context and become disoriented during the transition.
Disclosure of Invention
Techniques and tools are described for rendering a view of a map in which map metadata elements are layered in a three-dimensional ("3D") space through which a viewer navigates. The layering of map metadata elements in 3D space facilitates smooth motion effects of zoom-in, zoom-out, and scroll operations in map navigation. In many cases, these techniques and tools help the viewer maintain context throughout transitions between different view levels, which improves the overall experience of using the map navigation tool.
In accordance with one aspect of the techniques and tools described herein, a computing device determines a viewer position associated with a view altitude in 3D space. The computing device also determines one or more map metadata elements, such as text labels indicating a name, distance, or other details about a feature of the map. Each map metadata element has a metadata altitude in the 3D space and is associated with a feature of the map (e.g., a building, street, neighborhood, city, or state). The computing device renders a view of the map for display based at least in part on the viewer position and the layering of the map metadata elements at their different altitudes in the 3D space.
How a map view is rendered depends, at least in part, on how the view altitude (of the viewer position) relates to the different metadata altitudes of map metadata elements in the 3D space. For example, the computing device places text labels in the 3D space over the respective features associated with the labels, at the metadata altitudes indicated for the labels. The computing device creates the map view from those points of the map's surface layer and of the placed labels that are visible from the viewer position (e.g., rendered with one or more pixels, within a threshold distance from the viewer position, not occluded by another feature or label). In some cases, the placed labels are parallel to the surface layer in the 3D space and below the view altitude of the viewer position. In other cases, some placed labels are perpendicular to the surface layer in the 3D space, while other placed labels are parallel to the surface layer and above the view altitude of the viewer position.
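As a concrete (and purely illustrative) sketch of the data structures just described, the following Python fragment models a viewer position and an altitude-layered metadata element; all names and values here are assumptions for illustration, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class ViewerPosition:
    x: float          # geographic position over the surface layer
    y: float
    altitude: float   # view altitude in the 3D space

@dataclass
class MapMetadataElement:
    text: str         # e.g., a text label such as "Seattle"
    feature_x: float  # geographic position of the labeled feature
    feature_y: float
    altitude: float   # metadata altitude in the 3D space

# A label is placed in 3D space directly over its feature at its metadata
# altitude; the view altitude then determines how (and whether) it renders.
seattle = MapMetadataElement("Seattle", -122.33, 47.61, altitude=105600.0)
viewer = ViewerPosition(-122.33, 47.61, altitude=264000.0)  # 50 miles, in feet
```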
For navigation as the viewer position changes, the computing device repeatedly renders map views for viewer positions that may have different view altitudes and/or different geographic positions (i.e., positions over the surface layer) in the 3D space. Map metadata elements associated with different metadata altitudes in the 3D space may appear to shift by different distances between different views, providing a parallax effect that accounts for the change in geographic position between viewer positions in the 3D space.
When transitioning between a top-down view and a bird's-eye view of the map, the rendering may include placing some metadata elements parallel to a surface layer in the 3D space when rendering the top-down view, and placing those elements perpendicular to the surface layer in the 3D space when rendering the bird's-eye view. Meanwhile, when rendering the bird's-eye view, other metadata elements may be placed parallel to the surface layer and above the viewer. Between multiple bird's-eye views of the map, the metadata elements may appear to rotate and scale in context with features in the map to account for changes between viewer positions in the 3D space. After transitioning to a photo view of a particular feature (e.g., building, landmark) on the map, the rendering may present additional metadata text details about that feature.
To provide smooth transitions between map views, and thereby help the viewer maintain context, the computing device may repeatedly determine a new viewer position between an initial viewer position and a destination viewer position in the 3D space and render a new view of the map at the new viewer position. In particular, this can facilitate smooth motion effects for zoom-in, zoom-out, and scroll operations in navigation. For example, to provide a smooth zoom-in effect during a transition between multiple top-down views as the view altitude decreases in the 3D space, metadata text labels appear to become larger and darker as the view altitude approaches a target altitude or distance, but then larger and fainter as the view altitude continues to fall below the target altitude or distance, creating an effect in which the labels gradually fade or disappear. As another example, to provide a smooth transition between top-down and bird's-eye views of the map, a label appears to rotate away from the viewer as the view altitude decreases toward the label's metadata altitude, and then appears to flip over at the label's metadata altitude.
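The fading behavior described above can be sketched as a simple opacity curve over view altitude; the linear ramp and the fade-band constant here are illustrative assumptions, not the patent's method:

```python
def label_opacity(view_altitude: float, target_altitude: float,
                  fade_band: float = 4000.0) -> float:
    """Opacity of a label as the viewer zooms in past its target altitude.

    Well above the target altitude the label is fully dark; approaching
    the target it darkens, and below the target it fades toward
    transparency even as its rendered size keeps growing.
    """
    delta = view_altitude - target_altitude
    if delta >= fade_band:
        return 1.0                          # far above target: fully dark
    if delta <= -fade_band:
        return 0.0                          # far below target: faded out
    return 0.5 + 0.5 * (delta / fade_band)  # linear ramp through the target
```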
In accordance with another aspect of the techniques and tools described herein, a client computing device and a server computing device exchange information to facilitate map navigation. The client computing device sends a request for map information. In some scenarios, the request indicates one or more search terms. In other scenarios, the request simply indicates a viewer position associated with a view altitude in 3D space. In response, the client computing device receives map metadata elements, where individual map metadata elements are associated with individual features of the map. When the client computing device sends search terms, it may also receive back a viewer position associated with a view altitude in the 3D space. The client computing device then renders a map view, for example as described above.
Conversely, the server computing device receives a request for map information from the client computing device, where the request indicates, for example, one or more search terms or a viewer position associated with a view altitude in 3D space. The server computing device determines one or more map metadata elements having metadata altitudes in the 3D space, the map metadata elements being usable to render a map view that depends at least in part on how the view altitude of the viewer position relates to the different metadata altitudes of the map metadata elements layered in the 3D space. When the request indicates one or more search terms, the server computing device determines map metadata elements based at least in part on results of a search on those terms, and may also determine a viewer position based on the search terms. The server computing device sends the one or more map metadata elements (and, in some cases, the viewer position) to the client computing device.
To facilitate navigation when the viewer position changes, the server computing device may receive a second request for map information from the client computing device, where the second request indicates, for example, one or more other search terms or a second viewer position in the 3D space. The server computing device may then determine additional map metadata elements having different metadata altitudes in the 3D space and send the additional map metadata elements (along with the second viewer position, in some cases) to the client computing device. Typically, the first set of map metadata elements and the additional map metadata elements are sufficient for the client computing device to render new views of the map between the first viewer position (as an initial position) and the second viewer position (as a destination position), thereby providing a smooth transition between the two viewer positions.
The foregoing and other objects, features and advantages of the invention will be more apparent from the following detailed description taken in conjunction with the accompanying drawings.
Drawings
FIG. 1 is a diagram illustrating different metadata altitudes of map metadata text labels in a 3D space.
FIG. 2 is a flow diagram illustrating a general technique for rendering map views depending on viewer position and map metadata elements at different altitudes in 3D space.
FIG. 3 is a block diagram illustrating an example software architecture for a map navigation tool that renders map views depending on viewer position and map metadata elements at different altitudes in 3D space.
FIGS. 4a-4d are screenshots illustrating example map views during navigation through map metadata text labels at different metadata altitudes in 3D space as view altitude decreases.
FIGS. 5a-5d are screenshots illustrating example map views during navigation through map metadata text labels at different metadata altitudes in 3D space as view altitude increases.
FIGS. 6a-6d are screenshots illustrating example map views during navigation through map metadata text labels at different metadata altitudes in 3D space as geographic position changes.
FIGS. 7a-7g are screenshots illustrating example map views during navigation through map metadata text labels at different metadata altitudes in 3D space as view altitude and geographic position change.
FIG. 8 is a screenshot illustrating an example photo view of a feature in a map.
FIGS. 9a and 9b are flow diagrams illustrating general techniques for, respectively, requesting and delivering map metadata elements having different altitudes in 3D space.
FIG. 10 is a block diagram illustrating an example mobile computing device in conjunction with which the techniques and tools described herein may be implemented.
FIG. 11 is a diagram illustrating an example network environment in connection with which techniques and tools described herein may be implemented.
Detailed Description
Techniques and tools are described for rendering views of a map in which map metadata elements are separated from a surface layer of the map. The map metadata elements may be layered in 3D space above the surface layer of the map. For example, map metadata elements are associated with features such as buildings, roads, towns, cities, and states in the map, and the map metadata elements are placed above the features with which they are associated, at altitudes in the 3D space that depend on the scale of the features (lower altitudes for buildings and roads, higher altitudes for cities, etc.). In various scenarios, 3D layering of map metadata elements improves the overall experience of using map navigation tools.
Separating map metadata elements from the underlying layers of the map and using 3D layering of map metadata elements on the map simplifies the process of deciding which map metadata elements to reveal or hide for a given task. For example, depending on the search terms, different subsets of map metadata elements are selected for rendering. Furthermore, the altitude values provide a natural hierarchy of metadata elements when deciding which metadata elements to reveal or hide.
The 3D layering of map metadata elements over the map also facilitates smooth motion effects for zoom-in, zoom-out, and scroll operations as the viewer navigates through the 3D space. To help the viewer maintain context throughout transitions between different view levels, map metadata elements placed in the 3D space may appear to be scaled and/or rotated in context within the viewer's map view. To provide a parallax effect as the viewer moves over map metadata elements in the 3D space, the map metadata elements may shift by different amounts between different views, depending on their altitudes in the 3D space. These and other graphical transformations of map metadata placed in the 3D space add visual depth and give context to the viewer during navigation through the map.
Example map metadata elements with different altitudes in 3D space
In conventional map views presented by map navigation tools, the density of metadata presented on a map can create visual complexity. Selectively hiding and revealing map metadata in different views can help limit the density of metadata rendered, but transitioning between different views is problematic. The techniques and tools described herein layer map metadata elements according to altitude in 3D space. The 3D hierarchy provides a natural hierarchy of map metadata for rendering. It also provides a direct way to correlate viewer position with the selection of which map metadata elements to hide or reveal when rendering a map view.
FIG. 1 shows example map metadata (100) having different altitudes in 3D space. The map metadata (100) is text labels that appear above a surface layer containing road details and other geographic features. Different metadata text labels are associated with different altitudes in the 3D space. In FIG. 1, map metadata appears at six different altitudes: 50 miles, 20 miles, 36000 feet, 18000 feet, 6000 feet, and 2000 feet.
In general, a map metadata element is associated with an altitude that depends on the scale of the feature annotated by the metadata. Large-scale features such as countries and states are associated with higher altitudes. In FIG. 1, the highest-altitude map metadata is the text label for Washington, associated with an altitude of 50 miles. Small-scale features such as buildings and small streets are associated with lower altitudes. In FIG. 1, the lowest-altitude map metadata is the text label for Occidental, associated with an altitude of 2000 feet. The map metadata text labels for Jackson St and King St are associated with an altitude of 6000 feet, and the map metadata text label for Pioneer Square is associated with an altitude of 18000 feet. For progressively larger-scale features, International District and Seattle are associated with altitudes of 36000 feet and 20 miles, respectively. In general, the metadata altitude of a metadata element increases with the geographic scale of the feature annotated by the element.
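As a sketch, the altitude assignment of FIG. 1 might be captured in a simple lookup table keyed by feature scale; the altitude values are those given above, while the category names are assumptions for illustration:

```python
MILE = 5280.0  # feet

# Metadata altitude grows with the geographic scale of the labeled feature.
ALTITUDE_BY_FEATURE_SCALE = {
    "street": 2000.0,         # e.g., Occidental
    "major_street": 6000.0,   # e.g., Jackson St, King St
    "neighborhood": 18000.0,  # e.g., Pioneer Square
    "district": 36000.0,      # e.g., International District
    "city": 20 * MILE,        # e.g., Seattle
    "state": 50 * MILE,       # e.g., Washington
}
```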
For simplicity, FIG. 1 shows only one or two examples of map metadata (100) at each indicated altitude. Real-world examples of map metadata will typically include many more labels for map features. For example, labels for many other streets in the map may be associated with the 2000-foot and 6000-foot altitudes of FIG. 1. On the other hand, in some scenarios the map metadata rendered in a map view is deliberately limited, to focus on the metadata most likely to interest the viewer. This may be the case, for example, when map metadata is selected in response to a search or otherwise selected for a particular task.
As shown in FIG. 1, the map metadata (100) may be text labels for the names of states, cities, neighborhoods, and streets. Map metadata may also be text labels for buildings, stores within buildings, landmarks, or other features in the map. Aside from names, map metadata may be distances between features. Alternatively, map metadata may provide additional details for a given feature, such as contact information (e.g., phone number, web page, address), comments, ratings, other annotations, menus, photos, advertising promotions, or games (e.g., geocaching, geotagging). Links may be provided for web pages, to launch a web browser and navigate to information about the feature.
In terms of data organization that facilitates storage and transfer, individual elements of map metadata may have altitude as an attribute or property. Assigning each map metadata element its own altitude facilitates fine-grained specification of different altitudes for different features. Alternatively, different map metadata elements are organized as layers, such that all map metadata elements at 2000 feet are organized together, all map metadata elements at 6000 feet are organized together, and so on. Organizing map metadata in altitude layers may facilitate operations that affect the metadata of an entire layer. However the map metadata is organized for storage and transfer, the 3D layering of the map metadata elements can be part of the process of rendering the metadata elements in map views.
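The two organizations just described might look like the following sketch; the field names and the fade_layer helper are hypothetical, chosen only to contrast per-element with per-layer organization:

```python
# Per-element organization: each element carries its own altitude,
# allowing fine-grained, per-feature altitude assignment.
elements = [
    {"text": "King St", "altitude": 6000.0},
    {"text": "Pioneer Square", "altitude": 18000.0},
]

# Per-layer organization: elements grouped by altitude, so an operation
# can be applied to the metadata of a whole layer at once.
layers = {
    2000.0: [{"text": "Occidental"}],
    6000.0: [{"text": "Jackson St"}, {"text": "King St"}],
}

def fade_layer(layers, altitude, opacity):
    # Example of a whole-layer operation: set the opacity of every
    # metadata element in the layer at the given altitude.
    for element in layers.get(altitude, []):
        element["opacity"] = opacity
```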
The number of different altitudes for map metadata, and the values of those altitudes, depend on the implementation. To illustrate the concept of different map metadata altitudes, FIG. 1 shows six different altitudes that are spaced far apart, but there may be more or fewer different altitudes in the 3D space, and the altitudes may have values other than those shown in FIG. 1. In practical usage scenarios, the multiple layers of map metadata will typically have altitudes that are closer together, particularly when elements from multiple layers are rendered for concurrent display. The altitude of a map metadata element may be static, or it may be dynamically adjusted from an initially assigned value for purposes of presentation. For example, as a user navigates through the 3D space, the altitude values of map metadata elements may be brought closer together so that the parallax effect is more gradual. (Concurrent rendering of metadata elements with widely spaced altitudes may result in visual effects that are too abrupt when the viewer changes geographic position.)
The number of different altitudes for map metadata can affect the quality of the user experience. If too few different altitudes are used, transitions may become abrupt as many details are revealed or hidden simultaneously, and a given map view may become crowded with suddenly appearing metadata details. On the other hand, having too many different altitudes may create excessive computational complexity when the computing device determines appropriate scaling, sample-value adjustments, and so on for map metadata on an element-by-element basis between different views. Furthermore, if elements with similar altitude values are handled in markedly different ways in rendering decisions, the results may be unexpected or overwhelming for the user.
In fig. 1, the altitude of the map metadata (100) is given in feet and miles. The units used to represent the altitude values depend on the implementation; metric units or units in any other measurement system may be used. The altitude value may be a real world altitude above a surface layer of the map or above sea level. Alternatively, the altitude value is more abstract as a value along a third dimension in 3D space, where the viewer position has a value indicating placement along the third dimension.
Example rendering of map metadata elements having different altitudes in 3D space
FIG. 2 illustrates a general technique (200) for rendering map views depending on viewer position and map metadata elements at different altitudes in 3D space. A tool such as a mobile computing device or other client computing device can perform the technique (200).
Initially, the tool determines (210) a viewer position associated with a view altitude (that is, the altitude of the viewer position) in the 3D space. For example, the viewer position is initially a default viewer position, the last viewer position reached in a previous navigation session, a previously saved viewer position, or a viewer position indicated in search results. In subsequent iterations, the viewer position may be a destination viewer position in the 3D space, or an intermediate viewer position between the previous viewer position and the destination viewer position in the 3D space.
The tool renders (220) a view of the map for display. The tool renders (220) the view based on the view altitude of the viewer position and the layering of map metadata elements in the 3D space at metadata altitudes (i.e., the altitudes of the metadata elements) that are potentially different for different metadata elements. For example, the map metadata elements are text labels at different altitudes in the 3D space, and the rendered view depends on how the view altitude of the viewer position relates to the different metadata altitudes of the map metadata text labels in the 3D space. The tool may obtain the map metadata elements from the client computing device and/or from a server computing device (e.g., using the techniques described with reference to FIGS. 9a and 9b).
The exact operations performed as part of the rendering (220) depend on the implementation. In some implementations, the tool determines a field of view (e.g., a volume originating at the viewer position and extending toward the surface layer), taking into account the altitude, geographic position, and angle of the viewer position, and identifies map features in the field of view (or, for metadata, the space in the field of view above the features). Then, for those features, the tool selects map metadata elements. This may include all map metadata elements of the identified features that would be visible in the field of view (e.g., within a threshold distance from the viewer position). Alternatively, it may include the subset of those potentially visible map metadata elements that is relevant to the navigation scenario (e.g., a search). The tool places the selected map metadata elements in the 3D space over the features they label, at the respective altitudes of the elements, and assigns resolutions to the elements depending on metadata altitude and/or distance from the viewer position. For example, the assigned resolution indicates the size of an element and how faint or sharp the element is. To render a top-down view of the map, the tool places the elements in the 3D space parallel to the surface layer of the map. Alternatively, to render a bird's-eye view of the map, the tool places some of the elements in the 3D space perpendicular to the surface layer, at or near the surface layer. Finally, the tool creates the map view from those points of the surface layer and of the placed labels that are visible from the viewer position (e.g., not occluded by another feature or label, within a threshold distance from the viewer position, rendered with one or more pixels). For example, the tool starts with the surface layer and empty atmospheric space, then composites metadata elements moving up from the surface layer toward the viewer position, or outward away from the viewer position. This stage provides effects such as rotation, scaling, and shrinking toward a perspective vanishing point for elements and features when rendering a bird's-eye view. Alternatively, the tool implements the rendering (220) with a different ordering of acts, with additional acts, or with different acts.
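In outline, the rendering stage just described might be sketched as follows; every helper function here is a hypothetical placeholder for an implementation-specific operation, not an API from the patent:

```python
def render_map_view(viewer, surface_layer, all_elements, top_down=True):
    """Sketch of the rendering stage (220); all helpers are hypothetical."""
    # 1. Determine the field of view from the viewer position's altitude,
    #    geographic position, and viewing angle.
    fov = compute_field_of_view(viewer)

    # 2. Identify features in the field of view and select their metadata
    #    elements (optionally filtered by the navigation scenario).
    features = find_features_in_view(surface_layer, fov)
    elements = select_elements(all_elements, features)

    # 3. Place each element over its feature at its metadata altitude, with
    #    an orientation and a resolution that depend on the view type and
    #    on altitude / distance from the viewer.
    placed = []
    for el in elements:
        orientation = "parallel" if top_down else orientation_for(el, viewer)
        placed.append(place_element(el, el.altitude, orientation,
                                    resolution_for(el, viewer)))

    # 4. Composite the visible points of the surface layer and the placed
    #    elements, working up from the surface layer toward the viewer.
    return composite_visible(surface_layer, placed, viewer)
```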
While the tool primarily adjusts the rendering (220) of map metadata elements in ways that depend on metadata altitude, the tool may also adjust the metadata altitudes themselves before or during rendering. For example, the tool adjusts the metadata altitudes of map metadata elements to bring the altitudes closer together. If the rendered metadata elements are too far apart in altitude, a small change in the viewer's geographic position may result in a very significant shift of one map metadata element but a much less significant shift of another. To avoid such abrupt changes in the parallax effect, the metadata altitudes of the elements can be brought closer together while preserving the relative order of, and relative distances between, the elements. This results in more gradual changes in the size and apparent position of the metadata elements as the viewer navigates through the 3D space.
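A minimal sketch of such altitude compression pulls the altitude values toward their mean by a constant factor, which preserves relative order and relative spacing; the factor of 0.5 is an illustrative assumption:

```python
def compress_altitudes(altitudes, factor=0.5):
    """Pull metadata altitudes toward their mean by a constant factor.

    A factor of 1.0 leaves the altitudes unchanged; smaller factors bring
    them closer together while preserving relative order and the relative
    distances between elements.
    """
    mean = sum(altitudes) / len(altitudes)
    return [mean + factor * (a - mean) for a in altitudes]

# Widely spaced layers are drawn closer together for gentler parallax.
print(compress_altitudes([2000.0, 6000.0, 36000.0]))
# -> [8333.3..., 10333.3..., 25333.3...]
```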
Generally, for rendering, the tool performs various operations on map metadata elements layered in the 3D space, including rotation, scaling, adjustment of sample values, and selective suppression or addition of map metadata details. FIGS. 4a-4d, 5a-5d, 6a-6d, 7a-7g, and 8 illustrate examples of these and other operations on map metadata text labels. Adjusting how and when map metadata elements are rendered during navigation helps the viewer maintain orientation. FIGS. 4a-4d, 5a-5d, and 6a-6d illustrate examples of top-down views rendered during navigation through map metadata text labels layered in 3D space, where metadata altitude is the main factor determining which metadata details to render and how to render them. FIGS. 7a-7g show examples of top-down and bird's-eye views. In the top-down views and in the transition from top-down to bird's-eye view, metadata altitude is still considered a parameter that sets the level of detail and scale of map metadata text labels. In the bird's-eye views, the tool also considers the distance from the viewer, extending away from the viewer toward the horizon, to determine the level of detail and scale of map metadata text labels.
Returning to FIG. 2, the tool determines (230) whether it has reached the destination viewer position for map view rendering. The destination viewer position may be the initial (and only) viewer position, or it may be a different viewer position. If the tool has not reached the destination viewer position, the tool repeats the determining (210) and rendering (220) acts for another viewer position. The other viewer position may differ from the previous viewer position in geographic position over the 2D surface and/or in view altitude in the 3D space. In this way, the tool can repeat the determining (210) and rendering (220) acts for first, second, third, and subsequent viewer positions, or the tool can maintain the current map view.
The tool may react to user input that indicates a change in viewer position. For example, the user input is gesture input from a touchscreen or keystroke input from a keyboard. In that case, the tool determines (210) a new viewer position indicated by the input (or a new viewer position in transition toward the destination indicated by the input) and renders (220) a view for the new viewer position. In particular, to provide smooth motion effects when zooming between altitudes and/or scrolling across geographic positions from an initial viewer position to a destination viewer position, the tool may repeat the determining (210) and rendering (220) acts for viewer positions between the initial and destination viewer positions, as shown in the sketch below. The number of map views rendered for the sake of smooth motion effects depends on the implementation; for example, four map views are rendered per second. If the transition toward a given destination viewer position is interrupted (e.g., by user input indicating a new destination viewer position), the tool can assign the new destination viewer position and transition toward it.
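As a sketch, the transition loop might linearly interpolate the viewer position between the initial and destination positions, reusing the hypothetical ViewerPosition structure from the earlier sketch; the rate of four views per second comes from the example above, and the absence of easing is a simplifying assumption:

```python
def transition(initial, destination, seconds, render, views_per_second=4):
    """Render intermediate map views between two viewer positions."""
    steps = max(1, int(seconds * views_per_second))
    for i in range(1, steps + 1):
        t = i / steps  # interpolation parameter, 0..1
        viewer = ViewerPosition(
            x=initial.x + t * (destination.x - initial.x),
            y=initial.y + t * (destination.y - initial.y),
            altitude=initial.altitude
                     + t * (destination.altitude - initial.altitude))
        render(viewer)  # re-render the map view at this position
```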
Example software architecture for rendering map metadata
FIG. 3 shows an example software architecture (300) for a map navigation tool that renders map views depending on viewer position and map metadata at different altitudes in 3D space. A client computing device (e.g., a smartphone or other mobile computing device) can execute software organized according to the architecture (300) to render views according to the technique (200) of FIG. 2 or another technique.
The architecture (300) includes a device operating system (350) and a map navigation framework (310). The device OS (350) manages user input functions, output functions, storage access functions, network communication functions, and other functions of the device. The device OS (350) provides access to these functions of the map navigation framework (310).
A user can generate user input that affects map navigation. The device OS (350) includes functionality for recognizing user gestures and other user input and creating messages that can be used by the map navigation framework (310). The interpretation engine (314) of the map navigation framework (310) listens for user input event messages from the device OS (350). The UI event messages may indicate a pan gesture, a flick gesture, a drag gesture, or another gesture on a touchscreen of the device, a tap on the touchscreen, keystroke input, or other user input (e.g., a voice command, directional button, or trackball input). The interpretation engine (314) translates the UI event messages into map navigation messages that are sent to the positioning engine (316) of the map navigation framework (310).
The positioning engine (316) considers the current viewer position (which may be provided from the map settings store (311) as a saved or last viewer position), any messages from the interpretation engine (314) that indicate a desired change in viewer position, and map metadata having different metadata altitudes in the 3D space. From this information, the positioning engine (316) determines a viewer position in the 3D space. The positioning engine (316) provides the viewer position, along with map metadata in the vicinity of the viewer position, to the rendering engine (318).
The positioning engine (316) obtains map metadata for the map from a map metadata store (312). The map metadata store (312) caches recently used map metadata and, as needed, obtains additional or updated map metadata from local file storage or from network resources. The device OS (350) arbitrates access to storage and network resources. The map metadata store (312) requests map metadata from storage or a network resource through the device OS (350), which processes the request, receives a reply, and provides the requested map metadata to the map metadata store (312). In some scenarios, the request for map metadata takes the form of a search query, and map metadata responsive to the search is returned.
A rendering engine (318) processes the viewer position and map metadata in the 3D space and renders a map view. Depending on the usage scenario, the rendering engine (318) may render map metadata from a local store, map metadata from a web server, or a combination of map metadata from a local store and map metadata from a web server. The rendering engine (318) provides display commands for the rendered map view to the device OS (350) for output on the display.
Example zoom-in and zoom-out navigation
FIGS. 4a-4d illustrate example map views (410, 420, 430, 440) during navigation through map metadata text labels at different metadata altitudes in 3D space as view altitude decreases. FIG. 4a shows a high-level map view (410) of the western part of Washington state. The metadata text label for Washington is closer to the current viewer position in the 3D space than the labels for the cities Seattle, Bremerton, and Issaquah. The label for Washington is therefore larger, indicating its proximity to the viewer (and the larger scale of the geographic feature it annotates).
In FIG. 4b, the view altitude has decreased as the viewer zooms in toward Seattle. In the map view (420) of FIG. 4b, the text label for Washington is larger but fainter as the altitude of the viewer position approaches the altitude associated with the Washington label. The label for Seattle is larger and darker in the map view (420) of FIG. 4b than in FIG. 4a.
As the viewer continues to zoom in toward Seattle, the size of the Seattle label increases in the map views (430, 440) of FIGS. 4c and 4d. Below a target altitude associated with the label (which here is not the altitude at which the label is placed in the 3D space) or a target distance from the label, the label for Seattle becomes progressively fainter. Eventually, the label for Seattle drifts out of the map view, or disappears if the viewer position passes directly through it. FIG. 7a shows a top-down view (710) even closer to the surface layer of the map as the view altitude continues to decrease. In FIG. 7a, the labels for Washington and Seattle have passed out of view, and metadata details within the city have come into view.
In summary, for the zoom-in operation shown in FIGS. 4a-4d, a metadata text label starts small, then gradually becomes larger and darker/sharper between views. After peak darkness/sharpness is reached, the text label continues to grow between views but also fades, until it disappears or drifts out of the field of view.
For zoom-out operations, the rendering of map metadata text labels generally mirrors the rendering for zoom-in operations. FIGS. 5a-5d illustrate example map views (510, 520, 530, 540) during navigation through map metadata text labels at different altitudes in 3D space as view altitude increases. In FIG. 5a, the map view (510) shows metadata text labels, such as the label for Eastlake, that shrink in size between views as the viewer zooms out away from the surface layer. As the viewer continues to zoom out through the map views (520, 530, 540) of FIGS. 5b-5d, labels at higher altitudes come into view and are rendered at large sizes between different views, and labels at lower altitudes across a wider geographic area come into view and are rendered at small sizes. When the viewer position passes directly by a label (e.g., the label for Washington), the label is faint at first but becomes darker between different views as the view altitude increases past the label.
FIGS. 4a-4d and 5a-5d each illustrate concurrent display of map metadata text labels at different metadata altitudes in 3D space, where the size of a label and its darkness/faintness depend on altitude. These visual cues help orient the viewer during zoom-in and zoom-out operations.
Example scroll navigation with parallax effect
FIGS. 6a-6d show example map views (610, 620, 630, 640) during navigation through map metadata text labels at different metadata altitudes in 3D space as geographic position changes. Specifically, for the navigation shown in FIGS. 6a-6d, the altitude of the viewer position does not substantially change, but the geographic position moves to the north/northeast. The metadata text labels stay in place in the 3D space above the features they label, but appear to move between the map views (610, 620, 630, 640) of FIGS. 6a-6d, exhibiting a parallax effect.
The map view (610) of FIG. 6a shows the metadata text label for Montlake, which appears over a highway. As the viewer moves north/northeast, the position of the Montlake label appears to move south/southwest between the map views (620, 630, 640) of FIGS. 6b-6d. Some other metadata text labels exhibit a parallax effect to a greater or lesser degree. For example, the metadata text label for Capitol Hill has moved completely out of view by the rendered map view (630) of FIG. 6c. Text labels associated with low altitudes, being farther from the viewer, shift a smaller distance between rendered map views.
In general, as the geographic position of the viewer changes, the distance that a given metadata text label shifts between rendered top-down views depends on the metadata altitude associated with the label relative to the view altitude. Labels closer to the viewer shift more between map views, and labels farther from the viewer (closer to the ground) shift less.
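A small perspective model shows why. With the viewer looking straight down from view altitude h, a label at metadata altitude m appears, relative to the ground features beneath it, to shift by dx * m / (h - m) when the viewer moves dx. This formula is an illustrative simplification, not taken from the patent:

```python
def apparent_shift(viewer_dx: float, view_altitude: float,
                   metadata_altitude: float) -> float:
    """Apparent shift of a label relative to ground features when the
    viewer moves viewer_dx, under a simple straight-down perspective model.

    Labels near the viewer's altitude shift a lot; labels near the
    ground barely move.
    """
    depth = view_altitude - metadata_altitude  # label's distance below viewer
    return viewer_dx * metadata_altitude / depth

# Viewer at 40000 ft moves 1000 ft north:
print(apparent_shift(1000.0, 40000.0, 18000.0))  # high label: ~818 ft
print(apparent_shift(1000.0, 40000.0, 2000.0))   # low label:  ~53 ft
```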
In addition, parallax effects may appear between views as the view altitude changes. In FIGS. 5a-5c, for example, the label for Queen Anne appears to drift because the angle to the viewer position changes as the view altitude increases. As the view altitude changes, labels near the edges of the view shift more between map views, and labels in the middle (closer to directly below the viewer) shift less.
Example top-down to bird's-eye view transitions and bird's-eye views
FIGS. 7a-7g show example map views (710, 720, 730, 740, 750, 760, 770) during navigation in 3D space as the view altitude and geographic position change. The transition from a top-down perspective to a bird's-eye perspective in the map views is illustrated across the overall progression of FIGS. 7a-7g.
FIG. 7a shows a top-down map view (710) at a low view altitude. The metadata text labels for small streets are visible, albeit at a small size. When the view altitude decreases past a trigger altitude, transitional map views (720, 730, 740, 750) are rendered until a final bird's-eye perspective is reached. The trigger altitude depends on the implementation; for example, the trigger altitude for automatic transition to the bird's-eye perspective may be 1000 feet. Several changes occur across the transitional map views (720, 730, 740, 750) of FIGS. 7b-7e.
As the view altitude decreases, metadata text labels near the viewer (such as the text label for Pioneer Square) rotate away from the viewer between different views, as shown in the transitional views (720, 730) of FIGS. 7b and 7c. When the view altitude passes below the altitude of such a label in the 3D space (see the view (740) of FIG. 7d), the text direction of the label flips between views, so that the top of the text label is the edge closer to the viewer rather than the edge farther away. From then on, the text label is consistently rendered in an orientation parallel to the surface layer of the map in the 3D space, with the effect that the text label floats in the atmospheric space above the map surface in the rendered map views.
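A sketch of this tilt-and-flip behavior as a function of view altitude; the tilt band and the linear tilt ramp are illustrative assumptions, not the patent's method:

```python
def label_tilt(view_altitude: float, label_altitude: float,
               tilt_band: float = 3000.0):
    """Tilt angle and flip state of a floating label during the
    top-down to bird's-eye transition.

    Approaching the label's altitude from above, the label rotates away
    from the viewer (tilt grows toward edge-on); once the view altitude
    passes below the label's altitude, the text direction is flipped so
    the top of the label is the edge nearer the viewer.
    """
    delta = view_altitude - label_altitude
    flipped = delta < 0  # viewer has passed below the label
    # 0 degrees = flat (parallel to the surface layer); 90 = edge-on.
    tilt = max(0.0, 90.0 * (1.0 - abs(delta) / tilt_band))
    return tilt, flipped
```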
The text labels for streets (which, for top-down views, are rendered parallel to the surface layer in the 3D space) disappear between the views (720, 730) of FIGS. 7b and 7c, then rematerialize in the views (740, 750) of FIGS. 7d and 7e after being placed perpendicular to the surface layer in the 3D space. Placing street labels perpendicular to the map surface, and just above the map surface, makes street names easy to read when the labels are rendered in the bird's-eye view. Generally, each street label is also parallel to the street it labels. As shown in FIGS. 7e-7g, other labels (such as the label for Pioneer Square) may remain parallel to the surface layer in the 3D space, even for bird's-eye rendering.
In the bird's-eye views (750, 760, 770) of FIGS. 7e-7g, text labels are rotated and scaled along with map features as the viewer position changes in view altitude and/or geographic position. In addition, text labels shrink with distance from the viewer toward the perspective vanishing point. The rotation and scaling as the viewer position changes produce a parallax effect: even though the text labels do not move in the 3D space, they appear to move between rendered views. In general, in FIGS. 7e-7g, the text labels that are perpendicular to the map surface are parallel to the features they label in the 3D space, and this parallel orientation in the 3D space is maintained, through rotation between different views, as the viewer position changes.
For rendering in a bird's eye view, the size of the metadata text labels in the view and the level of detail of the metadata to be presented in the view depend on the distance from the viewer. The level of detail and size may also depend on other factors (e.g., secondary street versus primary street, metadata elements related or unrelated to the search results).
Example photo view
FIG. 8 shows an example photo view (810) of a feature in a map. The view (810) shows metadata details related to a particular location on the map. The metadata for the particular location floats in the 3D space adjacent to the location and perpendicular to the surface layer, and the metadata details are rendered accordingly. The portions of the photo view (810) outside the area showing the actual location have been grayed out or otherwise modified to highlight the actual location.
The viewer may transition to a photo view, for example, by selecting a single feature of the map. Alternatively, the viewer may transition to a photo view by navigating directly into a particular feature on the map.
Example client-server protocol for requesting and delivering map metadata
FIGS. 9a and 9b illustrate general techniques (900, 940) for, respectively, requesting and delivering map metadata having different metadata altitudes in 3D space. A client computing device such as a mobile computing device can perform the technique (900) for requesting map metadata, and a server computing device such as a network server can perform the technique (940) for delivering map metadata.
Initially, the client computing device determines (910) a request for map information. The request may include a viewer position in the 3D space, or it may include one or more search terms from which a search can produce a viewer position in the 3D space. The client computing device sends (920) the request to the server computing device.
The server computing device receives (950) the request for map information for a map (e.g., specifying a viewer position in the 3D space, or specifying search terms from which a viewer position in the 3D space can be identified). From the information in the request, the server computing device determines (960) map metadata elements having different metadata altitudes in the 3D space. For example, the server computing device finds map metadata elements that are visible from the viewer position specified in the request, using metadata altitude as a visibility control parameter for the elements. Alternatively, the server computing device finds a viewer position from results of a search on the search terms specified in the request, finds map metadata elements from the search results, and selects some or all of those map metadata elements that are visible from the viewer position, using metadata altitude as a level-of-detail control parameter for the search results. The map metadata elements are usable to render map views that depend, at least in part, on how the view altitude of the viewer position relates to the different altitudes of the map metadata elements as layered in the 3D space according to their respective metadata altitudes.
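A sketch of this server-side determination (960); the request fields and helper functions are hypothetical, intended only to show the two paths through the act:

```python
def determine_metadata(request, metadata_index):
    """Server-side sketch of act (960); all helpers are hypothetical."""
    if request.get("search_terms"):
        # Search path: derive a viewer position and candidate elements
        # from the search results.
        results = run_search(metadata_index, request["search_terms"])
        viewer = viewer_position_for(results)
        candidates = results.elements
    else:
        # Direct path: the request specifies the viewer position.
        viewer = request["viewer_position"]
        candidates = metadata_index.near(viewer)

    # Metadata altitude serves as a visibility / level-of-detail
    # control parameter for which elements to return.
    visible = [el for el in candidates if visible_from(el, viewer)]
    return viewer, visible
```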
The server computing device sends (970) the map metadata elements to the client computing device. The server computing device may subsequently receive a second request for map information from the same client computing device. The server computing device then determines additional map metadata elements and sends the additional elements to the client computing device. In some cases, the initial and additional map metadata elements are sufficient for the client computing device to render a new view of the map for each of multiple new viewer positions between a first viewer position and a second viewer position.
Returning to FIG. 9a, the client computing device receives (980) the map metadata elements from the server computing device and renders (990) one or more views of the map based on the view altitude of the viewer position and the metadata altitudes of the map metadata elements in the 3D space. For example, the client computing device renders views according to the technique (200) of FIG. 2 or another technique. When rendering map metadata elements received from the server computing device, the client computing device may combine them with map metadata elements from other server computing devices and/or from the client computing device itself.
The server computing device (when providing map metadata) or the client computing device (when placing map metadata elements) may rank the map metadata elements according to what the user is likely to want. For a search, the server computing device may give a higher ranking to metadata that is more likely to be useful to the viewer, so that such metadata is presented sooner and/or with more prominence. For example, if the search seeks information about a particular type of restaurant or store, the server computing device assigns a higher ranking to metadata for matching restaurants/stores than to metadata for other restaurants/stores, and also raises the ranking of the matching metadata relative to other types of map metadata (e.g., for streets). Alternatively, if the search seeks information about a particular street, the server computing device raises the ranking of street and direction metadata relative to other types of metadata. This allows the promoted metadata to appear sooner and/or with more prominence when the search results are rendered in combination with the 3D layering of map metadata.
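A minimal sketch of such ranking; the scoring weights and field names are illustrative assumptions:

```python
def rank_elements(elements, query_category=None):
    """Order map metadata elements so likely-useful ones render first.

    Elements in the category the query asks about (e.g., "restaurant")
    are boosted above other elements of that category, and above other
    kinds of map metadata such as street labels.
    """
    def score(el):
        s = el.get("base_relevance", 0.0)
        if query_category and el.get("category") == query_category:
            s += 10.0 if el.get("matches_query") else 5.0
        return s

    return sorted(elements, key=score, reverse=True)
```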
Although FIGS. 9a and 9b show the acts of a single client computing device and a single server computing device, one server computing device may service requests from multiple client computing devices. Moreover, a given client computing device may select among multiple server computing devices. The server processing may be stateless, or the server computing device may remember state information for a particular client computing device between requests.
Example mobile computing device
FIG. 10 depicts a detailed example of a mobile computing device (1000) capable of implementing the techniques and solutions described herein. The mobile device (1000) includes various optional hardware and software components shown generally at (1002). Any component (1002) in the mobile device may communicate with any other component, although not all connections are shown for ease of illustration. The mobile device may be any of a variety of computing devices (e.g., a cellular phone, a smart phone, a handheld computer, a laptop computer, a notebook computer, a tablet device, a netbook, a Personal Digital Assistant (PDA), a camera, a camcorder, etc.) and may allow wireless two-way communication with one or more mobile communication networks (1004), such as a wireless fidelity (Wi-Fi), cellular, or satellite network.
The illustrated mobile device (1000) may include a controller or processor (1010) (e.g., a signal processor, microprocessor, Application Specific Integrated Circuit (ASIC), or other control and processing logic circuitry) for performing tasks such as signal coding, data processing, input/output processing, power control, and/or other functions. An operating system (1012) may control the allocation and use of components (1002) and support for one or more application programs (1014). In addition to map navigation software, the application programs may include general mobile computing applications (e.g., email applications, calendars, contact managers, web browsers, messaging applications), or any other computing application.
The illustrated mobile device (1000) may include memory (1020). Memory (1020) may include non-removable memory (1022) and/or removable memory (1024). The non-removable memory (1022) may include RAM, ROM, flash memory, a hard disk, or other well-known memory storage technologies. The removable memory (1024) may include flash memory or a Subscriber Identity Module (SIM) card as is known in GSM communication systems, or other known memory storage technologies such as "smart cards". The memory (1020) may be used to store data and/or code for running an operating system (1012) and application programs (1014). Example data may include web pages, text, images, sound files, video data, or other data sets sent to and/or received from one or more network servers or other devices via one or more wired or wireless networks. The memory (1020) may be used to store a subscriber identifier, such as an International Mobile Subscriber Identity (IMSI), and a device identifier, such as an International Mobile Equipment Identifier (IMEI). These identifiers may be transmitted to a network server to identify the user and the device.
The mobile device (1000) can support one or more input devices (1030) such as a touch screen (1032), a microphone (1034), a camera (1036) (e.g., capable of capturing still and/or video images), a physical keyboard (1038) and/or a track ball (1040), and one or more output devices (1050) such as a speaker (1052) and a display (1054). Other possible output devices (not shown) may include piezoelectric or other haptic output devices. Some devices may provide more than one input/output function. For example, the touch screen (1032) and display (1054) may be combined in a single input/output device.
The wireless modem (1060) may be coupled to an antenna (not shown) and may support bi-directional communication between the processor (1010) and external devices, as is well understood in the art. The modem (1060) is shown generically and may include a cellular modem for communicating with the mobile communication network (1004) and/or other radio-based modems, such as Bluetooth (1064) or Wi-Fi (1062). The wireless modem (1060) is typically configured for communication with one or more cellular networks, such as a GSM network for data and voice communication within a single cellular network, between cellular networks, or between the mobile device and a Public Switched Telephone Network (PSTN).
The mobile device may further include at least one input/output port (1080), a power supply (1082), a satellite navigation system receiver (1084) such as a Global Positioning System (GPS) receiver, an accelerometer (1086), a transceiver (1088) (for wirelessly transmitting analog or digital signals), and/or a physical connector (1076), which may be a USB port, an IEEE 1394 (firewire) port, and/or an RS-232 port. The illustrated components (1002) are not required or all-inclusive, as any components can be deleted and other components can be added.
The mobile device (1000) may implement the techniques described herein. For example, the processor (1010) may determine a viewer position and render a map view during map navigation in the 3D space. The processor (1010) may also process user input to determine a change in viewer position. As a client computing device, the mobile computing device may send a request to the server computing device and receive map metadata back from the server computing device.
Example network environment
FIG. 11 illustrates a generalized example of a suitable implementation environment (1100) in which the described embodiments, techniques, and technologies may be implemented. In the example environment (1100), various types of services (e.g., computing services) are provided by a computing "cloud" (1110). For example, the cloud (1110) may include a collection of computing devices, which may be centrally located or distributed, that provide cloud-based services to various types of users and devices via a network such as the Internet. The implementation environment (1100) may be used in different ways to accomplish computing tasks. For example, some tasks (e.g., processing user input and presenting a user interface) may be performed on local computing devices (e.g., connected devices (1130)-(1132)), while other tasks (e.g., storing data to be used in subsequent processing) may be performed in the cloud (1110).
In an example environment (1100), a cloud (1110) provides services for connected devices (1130) - (1132) having various screen capabilities. Connected device (1130) represents a device having a computer screen (e.g., a mid-sized screen). For example, the connected device (1130) may be a personal computer, such as a desktop computer, a laptop computer, a notebook, a netbook, and so forth. Connected device (1131) represents a device having a mobile device screen (e.g., a mini-screen). For example, the connected device (1131) may be a mobile phone, a smart phone, a personal digital assistant, a tablet computer, and so on. The connected device (1132) represents a device having a large screen. For example, connected device (1132) may be a television screen (e.g., a smart television) or another device connected to a television (e.g., a set-top box or game console), and so on. One or more of the connected devices (1130) - (1132) may include touch screen capabilities. Devices without screen capabilities may also be used in the example environment (1100). For example, the cloud (1110) may provide services for one or more computers (e.g., server computers) that do not have displays.
The services may be provided by the cloud (1110) through the service provider (1120), or through other providers of online services (not depicted). For example, the cloud service may be customized to the screen size, display capabilities, and/or touch screen capabilities of a particular connected device (e.g., connected devices (1130) - (1132)). Service providers (1120) may provide a centralized solution for various cloud-based services. Service provider (1120) may manage service subscriptions for users and/or devices (e.g., connected devices (1130) - (1132) and/or their respective users).
In the example environment (1100), the 3D map navigation techniques and solutions described herein may be implemented using any of the connected devices (1130) - (1132) as client computing devices. Similarly, any of the various computing devices in the cloud (1110) or service provider (1120) may perform the role of a server computing device and deliver map metadata to connected devices (1130) - (1132).
Alternatives and variations
Although the operations of some of the disclosed methods are described in a particular, sequential order for convenient presentation, it should be understood that this manner of description encompasses rearrangement, unless a particular ordering is required by specific language set forth below. For example, in some cases, operations described sequentially may be rearranged or performed concurrently. Moreover, for the sake of brevity, the attached figures may not show the various ways in which the disclosed methods can be used in conjunction with other methods.
Any of the disclosed methods can be implemented as computer-executable instructions stored on one or more computer-readable media (e.g., non-transitory computer-readable media, such as one or more optical media discs (such as DVDs or CDs), volatile memory components (such as DRAMs or SRAMs), or non-volatile memory components (such as hard drives)) and executed on a computer (e.g., any commercially available computer, including smart phones or other mobile devices that include computing hardware). Any of the computer-executable instructions for implementing the disclosed techniques, as well as any data created and used during implementation of the disclosed embodiments, can be stored on one or more computer-readable media (e.g., non-transitory computer-readable media). The computer-executable instructions may be, for example, a dedicated software application or a portion of a software application that is accessed or downloaded via a web browser or other software application, such as a remote computing application. Such software can be executed, for example, on a single local computer (e.g., any suitable commercially available computer) or in a network environment (e.g., via the internet, a wide area network, a local area network, a client-server network (such as a cloud computing network), or other such network) using one or more network computers.
For clarity, only certain selected aspects of the software-based implementations are described; other details that are well known in the art are omitted. For example, it should be understood that the disclosed technology is not limited to any particular computer language or program. For instance, the disclosed technology may be implemented by software written in C++, Java, Perl, JavaScript, Adobe Flash, or any other suitable programming language. Also, the disclosed techniques are not limited to any particular computer or type of hardware. Certain details of suitable computers and hardware are well known and need not be set forth in detail in this disclosure.
Further, any of the software-based embodiments (including, for example, computer-executable instructions for causing a computing device to perform any of the disclosed methods) may be uploaded, downloaded, or remotely accessed through suitable communication means. Such suitable communication means include, for example, the internet, the world wide web, an intranet, software applications, electrical cables (including fiber optic cables), magnetic communications, electromagnetic communications (including RF, microwave, and infrared communications), electronic communications, or other such communication means.
The disclosed methods, apparatus, and systems should not be considered limiting in any way. Rather, the present disclosure is directed to all novel and non-obvious features and aspects of the various disclosed embodiments, alone and in various combinations and sub-combinations with each other. The disclosed methods, apparatus, and systems are not limited to any specific aspect or feature or combination thereof, nor do the disclosed embodiments require that any one or more specific advantages be present or problems be solved. In view of the many possible embodiments to which the principles of the disclosed invention may be applied, it should be recognized that the illustrated embodiments are only preferred examples of the invention and should not be taken as limiting the scope of the invention. Rather, the scope of the invention is defined by the appended claims. Accordingly, all that comes within the spirit and scope of these claims is claimed as the invention.
Claims (18)
1. In a computing device comprising a memory and a processing unit, a method of rendering a map view for map navigation, the method comprising, by the computing device:
determining a first viewer position, the first viewer position associated with a view altitude in a 3D space;
determining a first map metadata element having a first metadata altitude in the 3D space, wherein the first map metadata element is associated with a first feature of a map;
determining a second map metadata element having a second metadata altitude in the 3D space, wherein the second map metadata element is associated with a second feature of the map, and wherein the second metadata altitude is different from the first metadata altitude; and
rendering a display of a first view of the map based at least in part on the first viewer position and a layering of the first and second map metadata elements at the different metadata altitudes in the 3D space over a surface layer of the map, wherein the surface layer includes the first and second features of the map, wherein the first and second map metadata elements mark the first and second features of the map in the first view of the map, and wherein rendering the first view depends at least in part on a relationship of the view altitude to the different metadata altitudes of the first and second map metadata elements in the 3D space;
determining a second viewer position based at least in part on an input indicative of a change in viewer position; and
rendering a display of a second view of the map based at least in part on the second viewer position and the layering of the first and second map metadata elements at the different metadata altitudes in the 3D space over the surface layer of the map, wherein the first and second map metadata elements mark the first and second features of the map in the second view of the map, and wherein rendering the second view depends at least in part on a relationship of the view altitude of the second viewer position to the different metadata altitudes of the first and second map metadata elements in the 3D space;
wherein the first view and the second view are top-down views of the map; wherein, in the first and second views, the first and second map metadata elements are rendered in the 3D space in directions parallel to the surface layer of the map; and wherein, compared to the first view, the first map metadata element is shifted by a different distance than the second map metadata element in the second view due to a parallax effect that accounts for the change in geographic location between the first viewer position and the second viewer position in the 3D space, such that the first and second map metadata elements remain located in the 3D space above the first and second features they mark, respectively, but appear to move between the first and second views;
determining a third viewer position; and
rendering a display of a third view of the map based at least in part on the third viewer position, wherein rendering the third view depends at least in part on a relationship of a view altitude of the third viewer position to different metadata altitudes of the first and second map metadata elements in the 3D space, wherein the third view is a bird's eye view of the map, and wherein in the bird's eye view the first map metadata element is rendered in the 3D space in a direction perpendicular to a surface layer of the map.
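The parallax behavior recited in claim 1 can be pictured with a small model: under a pinhole-style top-down projection, a metadata element layered at a higher metadata altitude lies closer to the viewer, so the same change in viewer position shifts it farther on screen. The following sketch illustrates that geometry only; the names and unit focal length are assumptions, not the claimed implementation:

```python
from dataclasses import dataclass

@dataclass
class MapMetadataElement:
    text: str
    x: float          # geographic position of the marked feature
    y: float
    altitude: float   # metadata altitude in 3D space

def projected_position(element, viewer_x, viewer_y, view_altitude,
                       focal_length=1.0):
    """Project an element onto the view plane of a top-down camera.

    `depth` is how far the element lies below the viewer. Elements at
    higher metadata altitudes have smaller depth, so the same change
    in (viewer_x, viewer_y) shifts them farther on screen -- the
    parallax effect between the first and second views.
    """
    depth = view_altitude - element.altitude
    if depth <= 0:
        return None   # element at or above the viewer is not rendered
    scale = focal_length / depth
    return ((element.x - viewer_x) * scale,
            (element.y - viewer_y) * scale)
```

For a given scroll, a label at altitude 400 thus moves farther on screen than one at altitude 50, producing the differential shift the claim attributes to parallax.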
2. The method of claim 1, wherein the first and second map metadata elements are map metadata text labels indicating one or more of a title, a distance, and other details about the first and second features on the map, respectively.
3. The method of claim 1, wherein rendering the first view of the map comprises:
placing the first and second map metadata elements over the first and second features, respectively, in the 3D space, the first and second map metadata elements being placed at their respective metadata altitudes; and
creating a first view of the map from points of the surface layer and points of the first and second map metadata elements visible from the first viewer position.
4. The method of claim 3, wherein rendering the first view of the map further comprises:
prior to the placing, determining which map metadata elements to render based at least in part on metadata altitude as a level of detail control parameter; and
as part of the placing, assigning resolutions to the first and second map metadata elements in the 3D space, respectively, depending in part on the relationship of the view altitude to the respective metadata altitudes of the first and second map metadata elements;
wherein the determination of which points of the surface layer are visible and which points of the first and second map metadata elements are visible varies with the angle of the first viewer position.
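Claims 3 and 4 together describe a render pass that first filters map metadata elements by metadata altitude (the level of detail control) and then assigns resolutions during placement. One assumed way such a pass could be organized is sketched below; the visibility band and resolution formula are illustrative, and elements are objects with an `altitude` attribute as in the earlier sketch:

```python
def select_and_place(elements, view_altitude, window=2.0):
    """Level-of-detail pass before placement: keep only elements whose
    metadata altitude falls within an assumed visibility band below
    the view altitude, then assign each a rendering resolution that
    grows as the view altitude nears the element's metadata altitude.
    """
    placed = []
    for e in elements:
        if e.altitude < view_altitude / window:
            continue  # too fine-grained for this view altitude
        # Nearer layers (altitude close to view altitude) get finer detail.
        resolution = min(1.0, e.altitude / view_altitude)
        placed.append((e, resolution))
    return placed
```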
5. The method of claim 1, wherein the first viewer position is associated with a first geographic position having a view altitude above the first geographic position in the 3D space, wherein the second viewer position is associated with a second geographic position having a view altitude above the second geographic position in the 3D space, and wherein the first geographic position is different from the second geographic position.
6. The method of claim 1, further comprising, by the computing device:
for each of one or more new viewer positions between the first viewer position and the second viewer position:
determining the new viewer position; and
rendering a display of a new view of the map based at least in part on the new viewer position and the layering of the first and second map metadata elements at the different metadata altitudes in the 3D space, wherein rendering the new view depends at least in part on a relationship of the view altitude of the new viewer position to the different metadata altitudes of the first and second map metadata elements in the 3D space.
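Claim 6 recites rendering for intermediate viewer positions between the first and second viewer positions. Linearly interpolating the viewer position, as sketched below, is one assumed way to generate such positions so the transition renders as a continuous flight rather than an abrupt cut:

```python
def intermediate_positions(start, end, steps):
    """Yield viewer positions between `start` and `end`, each an
    (x, y, view_altitude) triple, for rendering intermediate views.
    Linear interpolation is an illustrative choice; an easing curve
    could be substituted for smoother acceleration.
    """
    for i in range(1, steps + 1):
        t = i / steps
        yield tuple(s + t * (e - s) for s, e in zip(start, end))
```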
7. The method of claim 1, wherein, in the bird's eye view, the second map metadata element is rendered in the 3D space in a direction parallel to the surface layer and over the third viewer position of the bird's eye view.
8. The method of claim 1, wherein the first map metadata element is a map metadata text label, and wherein, in the bird's eye view, the map metadata text label is further rotated and scaled along with the features of the map.
9. The method of claim 1, wherein the method further comprises, by the computing device:
rendering a photo view of a first feature of the map along with a display of textual detail about the first feature, wherein sample values of portions of the photo view are modified to highlight the first feature within the photo view.
10. The method of claim 6, wherein the determining and the rendering are repeated for the one or more new viewer positions, updating the display at least four times per second to provide an effect of smooth scrolling and/or smooth zooming among the first and second map metadata elements.
11. The method of claim 6, wherein the new view is a top-down view of the map, wherein the first map metadata element is a map metadata text label, and wherein:
the label becomes larger and darker from view to view as the view altitude of the viewer position decreases and approaches a target altitude or target distance in the 3D space; and
the label becomes larger but fainter from view to view as the view altitude of the viewer position decreases further below the target altitude or target distance in the 3D space.
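The behavior recited in claim 11 amounts to a size curve that keeps growing as the viewer descends and an opacity curve that peaks at the target altitude or distance and fades below it. One assumed parameterization (the constants and curve shapes are illustrative, not claimed):

```python
def label_size_and_opacity(view_altitude, target_altitude,
                           base_size=12.0, max_size=48.0):
    """Label appearance as a function of view altitude. Size keeps
    growing as the viewer descends; opacity peaks when the view
    altitude reaches the target altitude and fades below it.
    """
    ratio = target_altitude / max(view_altitude, 1e-6)
    size = min(base_size * ratio, max_size)
    if view_altitude >= target_altitude:
        opacity = min(1.0, ratio)        # approaching: larger and darker
    else:
        opacity = max(0.0, view_altitude / target_altitude)  # below: fainter
    return size, opacity
```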
12. The method of claim 1, wherein the first map metadata element is a map metadata text label, and wherein:
the label tilts from view to view as the view altitude of the viewer position decreases and approaches the metadata altitude associated with the label in the 3D space; and
the label flips from view to view at the metadata altitude associated with the label in the 3D space.
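The tilt-and-flip of claim 12 can be modeled as an angle that eases from flat toward upright as the viewer approaches the label's metadata altitude, then flips once the viewer passes below it. A sketch under an assumed onset distance:

```python
def label_tilt_degrees(view_altitude, metadata_altitude, onset=500.0):
    """Tilt of a text label as the viewer descends: 0 degrees means
    the label lies flat and reads normally from above; the tilt grows
    as the viewer nears the label's metadata altitude, and the label
    flips (180 degrees) once the viewer passes below it. The `onset`
    distance at which tilting begins is an assumption.
    """
    gap = view_altitude - metadata_altitude
    if gap >= onset:
        return 0.0                          # far above: label lies flat
    if gap > 0:
        return 90.0 * (1.0 - gap / onset)   # tilts as the viewer approaches
    return 180.0                            # viewer passed the label: flipped
```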
13. A method that facilitates map navigation performed by a server computing device, the method comprising, by the server computing device:
receiving a request for map information from a client computing device, wherein the request indicates one or more search terms;
determining first and second map metadata elements having different metadata altitudes in 3D space based at least in part on search results for the one or more search terms and based at least in part on metadata altitude as a level of detail control parameter for the search results, wherein the first map metadata element is associated with a first feature of a map, wherein the second map metadata element is associated with a second feature of the map, and wherein the first and second map metadata elements are usable to render a view of the map depending at least in part on a relationship of a view altitude of a viewer position to the different metadata altitudes of the first and second map metadata elements, the first and second map metadata elements being layered in the 3D space according to their respective metadata altitudes; and
sending the first and second map metadata elements to the client computing device.
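The server-side method of claim 13 pairs a search lookup with metadata-altitude-based filtering before returning elements to the client. A minimal sketch follows, in which `metadata_store`, its `search()` interface, and the use of half the view altitude as a level-of-detail cutoff are all hypothetical stand-ins:

```python
def handle_map_request(search_terms, metadata_store, view_altitude=None):
    """Serve a client request: look up map metadata elements matching
    the search terms and return them with their metadata altitudes so
    the client can layer them in 3D space. Metadata altitude doubles
    as a level-of-detail control: if a view altitude is supplied,
    overly fine-grained layers are omitted from the response.
    """
    elements = metadata_store.search(search_terms)
    if view_altitude is not None:
        elements = [e for e in elements
                    if e.altitude >= view_altitude / 2.0]
    return [{"text": e.text,
             "feature_id": e.feature_id,
             "metadata_altitude": e.altitude}
            for e in elements]
```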
14. The method of claim 13, wherein the request is a first request, and wherein the viewer position is a first viewer position, the method further comprising, by the server computing device:
receiving a second request for map information for the map from the client computing device;
determining one or more additional map metadata elements; and
sending the one or more additional map metadata elements to the client computing device.
15. The method of claim 13, wherein the first and second map metadata elements are map metadata text labels indicating one or more of a title, a distance, and other details about a feature on the map, and wherein a surface layer comprises the first and second features of the map.
16. A method of rendering a map view using map metadata elements layered at different metadata altitudes in a 3D space, wherein respective ones of the map metadata elements are associated with respective features of the map, wherein the method comprises:
determining a first map metadata element having a first metadata altitude in the 3D space, wherein the first map metadata element is associated with a first feature of a map;
determining a second map metadata element having a second metadata altitude in the 3D space, wherein the second map metadata element is associated with a second feature of the map, and wherein the second metadata altitude is different from the first metadata altitude; and
for each of one or more viewer positions between the initial viewer position and the destination viewer position:
determining the viewer position, the viewer position associated with a view altitude in the 3D space; and
rendering a display of the map view based at least in part on the viewer position and a layering of the first and second map metadata elements at the different metadata altitudes in the 3D space, wherein the first and second map metadata elements are placed over the first and second features in the 3D space at the metadata altitudes indicated for the first and second map metadata elements, respectively, wherein rendering the view comprises creating the map view from points of a surface layer and points of the first and second map metadata elements that are visible from the viewer position, and wherein the determination of which points of the surface layer are visible and which points of the first and second map metadata elements are visible varies with the angle of the viewer position;
wherein the first map metadata element is a map metadata text label, and wherein:
the label becomes larger and darker from view to view as the view altitude of the viewer position decreases and approaches a target altitude or target distance in the 3D space; and
the label becomes larger but fainter from view to view as the view altitude of the viewer position decreases further below the target altitude or target distance in the 3D space.
17. A system that facilitates map navigation at a server computing device, the system comprising:
means for receiving a request for map information from a client computing device, wherein the request indicates one or more search terms;
means for determining first and second map metadata elements having different metadata altitudes in 3D space based at least in part on search results for the one or more search terms and based at least in part on metadata altitude as a level of detail control parameter for the search results, wherein the first map metadata element is associated with a first feature of a map, wherein the second map metadata element is associated with a second feature of the map, and wherein the first and second map metadata elements are usable to render a view of the map depending at least in part on a relationship of a view altitude of a viewer position to the different metadata altitudes of the first and second map metadata elements, the first and second map metadata elements being layered in the 3D space according to their respective metadata altitudes; and
means for sending the first and second map metadata elements to the client computing device.
18. A system for rendering a map view using map metadata elements layered at different metadata altitudes in a 3D space, wherein respective ones of the map metadata elements are associated with respective features of the map, wherein the system comprises:
means for determining a first map metadata element having a first metadata altitude in the 3D space, wherein the first map metadata element is associated with a first feature of a map;
means for determining a second map metadata element having a second metadata altitude in the 3D space, wherein the second map metadata element is associated with a second feature of the map, and wherein the second metadata altitude is different from the first metadata altitude; and
for each of one or more viewer positions between the initial viewer position and the destination viewer position:
means for determining the viewer position, the viewer position associated with a view altitude in the 3D space; and
means for rendering a display of the map view based at least in part on the viewer position and a layering of the first and second map metadata elements at the different metadata altitudes in the 3D space, wherein the first and second map metadata elements are placed over the first and second features in the 3D space at the metadata altitudes indicated for the first and second map metadata elements, respectively, wherein rendering the view comprises creating the map view from points of a surface layer and points of the first and second map metadata elements that are visible from the viewer position, and wherein the determination of which points of the surface layer are visible and which points of the first and second map metadata elements are visible varies with the angle of the viewer position;
wherein the first map metadata element is a map metadata text label, and wherein:
the label becomes larger and darker from view to view as the view altitude of the viewer position decreases and approaches a target altitude or target distance in the 3D space; and
the label becomes larger but fainter from view to view as the view altitude of the viewer position decreases further below the target altitude or target distance in the 3D space.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US12/842,880 | 2010-07-23 | | |
| US12/842,880 US8319772B2 (en) | 2010-07-23 | 2010-07-23 | 3D layering of map metadata |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| HK1165544A1 (en) | 2012-10-05 |
| HK1165544B (en) | 2015-06-19 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CA2804634C (en) | 3d layering of map metadata | |
| CN104040546B (en) | Method and system for displaying panoramic images | |
| US9273979B2 (en) | Adjustable destination icon in a map navigation tool | |
| US8868338B1 (en) | System and method for displaying transitions between map views | |
| KR101298422B1 (en) | Techniques for manipulating panoramas | |
| EP2589024B1 (en) | Methods, apparatuses and computer program products for providing a constant level of information in augmented reality | |
| KR101233534B1 (en) | Graphical user interface for presenting location information | |
| US20120303263A1 (en) | Optimization of navigation tools using spatial sorting | |
| JP2017536527A (en) | Providing in-navigation search results that reduce route disruption | |
| KR20090047487A (en) | Graphical user interface, computer readable media, and visual presentation method | |
| KR20140024005A (en) | Navigation system with assistance for making multiple turns in a short distance | |
| JP6038099B2 (en) | SEARCH SERVICE PROVIDING DEVICE AND METHOD, AND COMPUTER PROGRAM | |
| KR20190089689A (en) | Method and apparatus for providing street view, and computer program for executing the method | |
| HK1165544B (en) | 3d layering of map metadata | |
| KR20150088537A (en) | Method and programmed recording medium for improving spatial of digital maps | |
| Baldauf et al. | A device-aware spatial 3D visualization platform for mobile urban exploration | |
| HK1193153A (en) | Navigation system with assistance for making multiple turns in a short distance | |
| HK1219548B (en) | Techniques for manipulating panoramas |