HK40016105A - Mixed reality viewer system and method - Google Patents
- Publication number
- HK40016105A (application HK62020006263.1A)
- Authority
- HK
- Hong Kong
- Prior art keywords
- viewer
- real
- user
- generation system
- computer graphics
- Prior art date
Description
Cross Reference to Related Applications
This application claims priority to and the benefit of U.S. provisional patent application No. 62/467,817, entitled "SYSTEMS AND METHODS FOR DIGITAL OVERLAY IN AN AMUSEMENT PARK ENVIRONMENT", filed March 6, 2017, which is incorporated herein by reference in its entirety.
Background
The subject matter disclosed herein relates to amusement park attractions, and more particularly, to providing an augmented and virtual reality experience in amusement park attractions.
An amusement park or theme park may include various entertainment attractions that provide enjoyment to guests of the amusement park (e.g., families and/or people of all ages). Generally, an attraction may include a themed environment that may be established using equipment, furniture, building layouts, props, decorations, and so forth. Depending on the complexity of the themed environment, it may prove very difficult and time consuming to set up and replace the themed environment. It may also be very difficult to set up a themed environment that is enjoyable for all guests. Indeed, the same themed environment may appeal to some guests but not others. Thus, it is now recognized that it is desirable to include attractions in which it is possible to change attraction themes, or to include or remove certain themed features in such attractions, in a flexible and efficient manner relative to traditional techniques. It is also now recognized that it may be desirable to enhance the immersive experience of guests at such attractions and to provide guests with a more personalized or customized experience.
Disclosure of Invention
The following summarizes certain embodiments commensurate in scope with the present disclosure. These embodiments are not intended to limit the scope of the present disclosure, but rather are intended only to provide a brief summary of possible forms of the embodiments. Indeed, the embodiments may encompass a variety of forms that may be similar to or different from the embodiments set forth below.
In one embodiment, a mixed reality viewing system includes a viewer configured to be secured to a stable platform and operable by a user to view a themed attraction through the viewer. The viewer includes a display device, a user interface including a zoom control, and at least one sensor including at least one camera. The mixed reality viewing system further includes a computer graphics generation system communicatively coupled to the viewer. The computer graphics generation system is configured to generate streaming media of the real-world environment based on image data captured via the at least one camera of the viewer, to generate augmented reality graphics, virtual reality graphics, or both, superimposed on the streaming media of the real-world environment, and to transmit the streaming media of the real-world environment, together with the superimposed augmented reality graphics, virtual reality graphics, or both, for display on the display device of the viewer.
In another embodiment, a fixed-position viewer includes a display configured to display streaming media to a user, wherein the displayed streaming media includes a mixed reality (MR) environment containing augmented reality (AR) graphics, virtual reality (VR) graphics, or both. The fixed-position viewer includes a camera configured to capture image data of a real-world environment surrounding the fixed-position viewer. The fixed-position viewer further includes at least one sensor configured to collect information relating to the generation of the streaming media.
In another embodiment, a method includes receiving and analyzing real-time data via a computer graphics generation system, wherein receiving the real-time data includes receiving, via the computer graphics generation system, data from a user interface and from at least one sensor of a fixed-position viewer. The method includes generating, via the computer graphics generation system, game effects based on the received real-time data, wherein the game effects include augmented reality (AR) graphics, virtual reality (VR) graphics, or both. The method includes superimposing, via the computer graphics generation system, the generated game effects onto a visualization of the real-world environment to produce a mixed reality visualization. The method further includes transmitting the mixed reality visualization to the fixed-position viewer via the computer graphics generation system, and displaying the mixed reality visualization on a display of the fixed-position viewer via the computer graphics generation system.
Drawings
These and other features, aspects, and advantages of the present disclosure will become better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein:
FIG. 1 illustrates an embodiment of an amusement park having a themed attraction augmented by an augmented reality and/or virtual reality (AR/VR) system that includes one or more fixed-position AR/VR viewers, in accordance with the present embodiments;
FIG. 2 is a perspective view of an embodiment of the fixed-position AR/VR viewer of FIG. 1, in accordance with the present embodiments;
FIG. 3 is a block diagram of an embodiment of the AR/VR system of FIG. 1 in accordance with the present embodiments;
FIG. 4 is a schematic diagram illustrating an example transition between an AR and VR environment rendered by the AR/VR system of FIG. 1 in accordance with the present embodiments;
FIG. 5 is a schematic diagram illustrating an example AR/VR environment rendered by the AR/VR system of FIG. 1 for a plurality of fixed position AR/VR viewers, in accordance with the present embodiments; and
FIG. 6 is a flow chart illustrating a process of creating an AR/VR experience using the AR/VR system of FIG. 1 in accordance with the present embodiments.
Detailed Description
One or more specific embodiments of the present disclosure will be described below. In an effort to provide a concise description of these embodiments, all features of an actual implementation may not be described in the specification. It should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another. Moreover, it should be appreciated that such a development effort might be complex and time consuming, but would nevertheless be a routine undertaking of design, fabrication, and manufacture for those of ordinary skill having the benefit of this disclosure.
The present embodiments relate to systems and methods that provide an augmented reality (AR) experience, a virtual reality (VR) experience, a mixed reality (e.g., combined AR and VR) experience, or a combination thereof, as part of an attraction within an amusement park or theme park. In particular, an AR and/or VR (AR/VR) system may include one or more viewers to provide an AR/VR experience to guests of the amusement park. For example, a guest may look at an attraction scene through a viewer, and the viewer may facilitate an AR experience, a VR experience, or a combination of both. In one embodiment, the viewer may be a fixed-position viewer (e.g., a viewer similar to binoculars, a viewfinder, or a telescope, fixed to a stable platform or to the ground). As such, the viewer may be referred to as a fixed-position AR/VR viewer. The fixed-position AR/VR viewer may include at least one camera that may be used to capture real-time image data (e.g., pictures and/or video captured during live use and transmitted in substantially real-time) of the real-world environment (e.g., aspects of the physical amusement park). The fixed-position AR/VR viewer may include a display. For example, the fixed-position AR/VR viewer may include at least two displays, one for each eye of a user using the fixed-position AR/VR viewer. The fixed-position AR/VR viewer may be designed to rotate and tilt so that the user can change the viewing angle, look around a scene, and so forth.
The AR/VR system may include a computer graphics generation system that receives the real-time image data (e.g., pictures and/or video captured during live use and transmitted in substantially real-time) from the fixed-position AR/VR viewer, and may render a video stream of the real-world environment, along with various AR, VR, or combined AR and VR (AR/VR) graphical images, to the display of the fixed-position AR/VR viewer. In one embodiment, the fixed-position AR/VR viewer may be operable to zoom in or out on certain areas of the AR/VR environment, as well as to transition between AR and VR environments. In particular, the user may zoom in on an area (e.g., a feature, an object) in the AR environment, and as the user continues to zoom in, the video stream transitions to a VR environment. In one embodiment, the fixed-position AR/VR viewer may be operated by the user via a user interface (e.g., one or more buttons, a joystick) of the fixed-position AR/VR viewer to engage or interact with (e.g., grab, select, target, and/or move) features or objects in the AR/VR environment. In addition, certain embodiments of the AR/VR system may provide a similar experience to multiple users, for example using a series of networked fixed-position AR/VR viewers.
Although the present embodiments may be implemented in a wide variety of settings, an example amusement park 10 having features of the present disclosure is depicted in FIG. 1. As illustrated, the amusement park 10 includes a themed attraction 12. The themed attraction 12 may include physical structures 14 corresponding to the theme, including fixed equipment, building layouts, props, decorations, and so forth. In the illustrated example, the themed attraction 12 is decorated as a farm/barn house. The themed attraction 12 may include an AR/VR system 15 having one or more fixed-position AR/VR viewers 16 to create a more immersive, personalized, and/or interactive experience for guests of the amusement park 10. In particular, a guest or user may look around the themed attraction 12 through the fixed-position AR/VR viewer 16 for an enhanced viewing experience. The fixed-position AR/VR viewer 16 may be secured to a stable platform or to the ground 18, and a user 20 may approach the fixed-position AR/VR viewer 16 and use it to look around the themed attraction 12.
The fixed-position AR/VR viewer 16 may have the functionality of typical binoculars or a typical viewfinder. For example, the fixed-position AR/VR viewer 16 may be rotated or tilted by the user 20 to view different areas of the themed attraction 12. As another example, the fixed-position AR/VR viewer 16 may provide a zoom effect such that the user 20 may zoom in or out on areas of the themed attraction 12. Further, the fixed-position AR/VR viewer 16 may facilitate an AR experience, a VR experience, or a combination of both. In particular, the fixed-position AR/VR viewer 16 may render an AR/VR environment 22 on a display 24, and the AR/VR environment 22 may include AR/VR graphics 26. In the illustrated example, a guest looking at the themed attraction 12 without using the fixed-position AR/VR viewer 16 may see only a barn house 28. However, the user 20 using the fixed-position AR/VR viewer 16 may see both the barn house 28 and AR/VR graphics 26, such as two horses 30 in front of the barn house 28.
A perspective view of an embodiment of the fixed-position AR/VR viewer 16 is shown in FIG. 2. As shown, the fixed-position AR/VR viewer 16 may be fixed to a stable platform or to the ground 18. The fixed-position AR/VR viewer 16 may include a viewer portion 40, which includes the display 24, and a fixture portion 42, which secures the fixed-position AR/VR viewer 16 to the stable platform or the ground 18. The user 20 may stand in a viewing area 44 to view the display 24 of the fixed-position AR/VR viewer 16. The display 24 may include one or more displays (e.g., two displays 46, one for each eye of the user 20). The displays 46 may have any suitable shape, such as circular, square, rectangular, oval, and so forth. The displays 46 may have a characteristic dimension 48. In some embodiments, the display 24 may be configured such that guests near the user 20 may also see the content being presented to the user 20. For example, the characteristic dimension 48 of the displays 46 may be large enough that guests behind and/or adjacent to the user 20 can see what is being shown on the displays 46.
The fixed-position AR/VR viewer 16 has a viewing angle 49. In one embodiment, the user 20 may change the viewing angle 49 by rotating or tilting the viewer portion 40 in a rotational direction 50 (e.g., panning substantially parallel to the stable platform or the ground 18), in a rotational direction 52 (e.g., tilting substantially perpendicular to the rotational direction 50), or a combination thereof. In one embodiment, the user 20 may also change the viewing angle 49 by raising or lowering the viewer portion 40 in a direction 54 (e.g., a direction perpendicular to the stable platform or the ground 18). As may be appreciated, the fixed-position AR/VR viewer 16 may include other hardware and/or software components, as discussed with respect to FIG. 3.
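The pan/tilt geometry described above can be illustrated with a short sketch (illustrative only, not part of the disclosed embodiments): converting a pan angle (rotational direction 50) and a tilt angle (rotational direction 52), as might be reported by the viewer's orientation sensors, into a unit view-direction vector. The function name and angle conventions here are assumptions for illustration.

```python
import math

def view_direction(pan_deg: float, tilt_deg: float) -> tuple:
    """Convert pan and tilt angles into a unit view-direction vector.

    Pan is measured in the plane parallel to the platform (rotation 50);
    tilt is measured up/down from the horizon (rotation 52). These
    conventions are illustrative assumptions.
    """
    pan = math.radians(pan_deg)
    tilt = math.radians(tilt_deg)
    x = math.cos(tilt) * math.cos(pan)   # forward component
    y = math.cos(tilt) * math.sin(pan)   # sideways component
    z = math.sin(tilt)                   # vertical component
    return (x, y, z)
```

A graphics system could use such a vector to decide which part of the stored attraction model lies along the current viewing angle.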
FIG. 3 is a block diagram of various components of the AR/VR system 15. In the illustrated embodiment, the AR/VR system 15 includes one or more fixed-position AR/VR viewers 16 communicatively and operatively coupled to a computer graphics generation system 60 (e.g., within the amusement park 10) via a communication network 62. The communication network 62 may include wireless local area networks, wireless wide area networks, near field communication, and/or wired networks via Ethernet cables, fiber optics, and so forth. The one or more fixed-position AR/VR viewers 16 may transmit signals or data to, and receive signals or data from, the computer graphics generation system 60 to create the AR/VR environment 22 (e.g., the AR/VR graphics 26 and/or sound effects presented via the one or more fixed-position AR/VR viewers 16). The computer graphics generation system 60 may be communicatively coupled to a data server 64 via the communication network 62. The data server 64 may be a remote or on-site data server that may store and/or process user information for users 20. The user information may include any suitable information provided or authorized by the user 20, such as payment information, membership information, personal information (e.g., age, height, special needs, etc.), and gaming information (e.g., information about a video game associated with the themed attraction 12, information about a particular character the user 20 is associated with in the video game, information about the game history of the user 20), and so forth.
As illustrated in fig. 3, the fixed position AR/VR viewer 16 may include a sensor 66, a user interface 68, a presentation device 70, and a data encoder 72 communicatively coupled to the sensor 66 and the user interface 68. The data encoder 72 may receive and/or process (e.g., encode) data or signals provided by the sensors 66 and the user interface 68. For example, the encoder 72 may be implemented as one or more processors that may follow a particular algorithm to collect streamable data provided by the sensors 66 and the user interface 68 and generate encoded data. The encoder 72 may be communicatively coupled to the computer graphics generation system 60, such as via the communication network 62, to stream encoded data (corresponding to data and signals from the sensors 66 and the user interface 68) to the computer graphics generation system 60. Encoder 72 may stream the encoded data to computer graphics generating system 60 in substantially real time and/or upon receiving instructions from computer graphics generating system 60.
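As a rough sketch of the data encoder 72 described above (illustrative only; the packet format and field names are assumptions, not part of the disclosed embodiments), one tick of sensor readings and user-interface events could be bundled into a timestamped packet for streaming to the graphics generation system:

```python
import json
import time

def encode_frame(sensor_readings: dict, ui_events: list) -> bytes:
    """Bundle one tick of sensor data and user-interface events into a
    timestamped packet suitable for streaming over a network.

    The JSON packet layout is an illustrative assumption; a production
    encoder would likely use a compact binary format.
    """
    packet = {
        "timestamp": time.time(),        # when this frame was captured
        "sensors": sensor_readings,      # e.g. {"pan": 12.0, "tilt": -3.5, "lux": 820}
        "ui": ui_events,                 # e.g. [{"control": "zoom", "value": 2.5}]
    }
    return json.dumps(packet).encode("utf-8")
```

The receiving side would decode each packet and feed the pan/tilt and zoom values into graphics generation in substantially real-time.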
The sensors 66 may include one or more cameras 74, one or more orientation and positioning sensors 76 (e.g., accelerometers, magnetometers, gyroscopes, Global Positioning System (GPS) receivers, one or more multiple degree of freedom (MDOF) Inertial Measurement Units (IMU), etc.), one or more light sensors 78, one or more presence sensors 80 (e.g., motion sensors, ultrasonic sensors, reflection sensors, broken-beam (break-beam) sensors, etc.), and one or more antennas 82.
The one or more cameras 74 may capture real-world images (e.g., still images and/or real-time video data of the real-world environment, such as the themed attraction 12) during live use by the user 20. The one or more cameras 74 may transmit the captured real-world images in substantially real-time. In one embodiment, the fixed-position AR/VR viewer 16 may include at least two cameras 74, which may respectively correspond to the respective viewpoints (e.g., right-eye and left-eye views) of the user 20. In one embodiment, the one or more cameras 74 may be high-resolution and/or high-speed cameras. For example, the one or more cameras 74 may be 4K-resolution digital high-speed cameras (e.g., a frame rate exceeding about 60 frames per second and a horizontal resolution of about 4,000 pixels). Because the one or more cameras 74 (e.g., the one or more cameras 74 disposed on the fixed-position AR/VR viewer 16) have high-speed and high-resolution capabilities, the captured real-world images may have high resolution and high three-dimensional (3D) depth, which may help generate AR/VR graphics with a high level of realism. The one or more orientation and position sensors 76 may capture data indicative of the viewing angle 49 of the fixed-position AR/VR viewer 16. The one or more light sensors 78 may be any suitable light sensors for detecting the ambient light level (e.g., how bright or dark it is).
The one or more presence sensors 80 may capture data indicative of the presence of objects (e.g., real-world objects, people) that may block or enter the viewing angle 49 of the fixed-position AR/VR viewer 16. The one or more presence sensors 80 may also capture data indicative of the presence of the user 20. In one embodiment, the fixed-position AR/VR viewer 16 may be activated and deactivated (e.g., via a processor 90) based on data captured by the one or more presence sensors 80. For example, if the fixed-position AR/VR viewer 16 is not in use (e.g., the presence of a user 20 is not detected), the fixed-position AR/VR viewer 16 may be deactivated into a sleep or standby mode, and in response to detecting the presence of a user 20, the fixed-position AR/VR viewer 16 may be activated from the sleep or standby mode. In one embodiment, one or more presence sensors 80 may be disposed on the guest side of the fixed-position AR/VR viewer 16 (e.g., adjacent to the user 20) and communicatively coupled (e.g., via a wired or wireless connection) to the fixed-position AR/VR viewer 16. The one or more antennas 82 may be radio-frequency identification (RFID) antennas 82 used to identify the user 20.
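The presence-based activation described above can be sketched as a small state machine (illustrative only, not part of the disclosed embodiments). The idle hold-off, which keeps the viewer active through brief sensor dropouts, is an added assumption rather than something the description specifies.

```python
class ViewerPowerState:
    """Toggle a viewer between ACTIVE and STANDBY based on presence
    sensor readings, with a hold-off so brief dropouts in the presence
    signal do not immediately put the viewer to sleep."""

    def __init__(self, idle_ticks_before_sleep: int = 5):
        self.idle_ticks_before_sleep = idle_ticks_before_sleep
        self._idle_ticks = 0
        self.state = "STANDBY"

    def update(self, user_present: bool) -> str:
        """Feed one presence-sensor reading; return the resulting state."""
        if user_present:
            self._idle_ticks = 0
            self.state = "ACTIVE"
        else:
            self._idle_ticks += 1
            if self._idle_ticks >= self.idle_ticks_before_sleep:
                self.state = "STANDBY"
        return self.state
```

Called once per sensor poll, this wakes the viewer as soon as a guest is detected and returns it to standby only after a sustained absence.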
The user interface 68 (e.g., game controls) may include any suitable input devices (e.g., buttons, joysticks, rotators, knobs) to enable the user 20 to provide instructions relating to the operation of the fixed-position AR/VR viewer 16. For example, the user interface 68 may include a zoom control 84 (e.g., a rotator or knob) configured to enable the user 20 to zoom in and out on features shown on the display 24 (e.g., real-world features, AR/VR graphics 26). The user interface 68 may also include buttons 86, which may be configured to enable different actions and/or effects to be applied in the AR/VR environment 22. For example, the buttons 86 may enable the user 20 to control a character or object of the AR/VR graphics 26 to move in different directions (e.g., up, down, left, right) in the AR/VR environment 22. As another example, the buttons 86 may enable the user 20 to make selections of the AR/VR graphics 26, or to grab/release objects of the AR/VR graphics 26, in the AR/VR environment 22. In some embodiments, the data captured by the one or more orientation and position sensors 76 and/or the use of the user interface 68 may be used to analyze which attraction features (e.g., real-world objects, AR/VR graphics 26) the user 20 spends the most time looking at and/or interacting with.
The presentation device 70 may be communicatively and operatively coupled to the computer graphics generation system 60 via the communication network 62 to receive signals or data corresponding to presentation content, and to display the presentation content (e.g., AR/VR graphical images or video streams, AR/VR graphics 26) to create the AR/VR environment 22. The presentation device 70 may include the display 24 and an audio transducer 88 (e.g., a speaker). As set forth above, the display 24 may include one or more displays 46, such as two displays 46, one for each eye of the user 20 using the fixed-position AR/VR viewer 16. The display 24 may also be configured such that guests next to the user 20 may also see what is shown to the user 20 on the display 24. In one embodiment, the display 24 may be a 4K-resolution display (e.g., approximately 4,000 pixels of horizontal resolution). The audio transducer 88 may include any suitable devices, such as one or more speakers, to present sound effects.
To support the creation of the AR/VR environment 22, the computer graphics generation system 60 may include processing circuitry, such as a processor 90 and memory 92. The processor 90 may be operatively coupled to the memory 92 to execute instructions for implementing the presently disclosed techniques for generating captured real-world images that are merged with the AR/VR graphics 26 to enhance the AR/VR experience of the user 20. The instructions may be encoded in a program or code that is stored in a tangible, non-transitory computer-readable medium, such as the memory 92 and/or other storage devices. Processor 90 may be a general purpose processor, a system on a chip (SoC) device, an Application Specific Integrated Circuit (ASIC), or some other similar processor configuration.
The computer graphics generation system 60 may include any suitable hardware, software (e.g., a game engine), and algorithms to enable a suitable AR/VR platform. For example, the computer graphics generation system 60 may store in the memory 92, or access on the data server 64, a model (e.g., a three-dimensional model with spatial information, a computer-aided design (CAD) file) of the themed attraction 12 and of the location of the fixed-position AR/VR viewer 16. In particular, the model may include position information for the fixed-position AR/VR viewer 16 relative to the real-world surroundings (e.g., the themed attraction 12). The model, along with other inputs from the data encoder 72 (e.g., encoded data from the user interface 68 and the sensors 66 of the fixed-position AR/VR viewer 16), is used to provide signals to the presentation device 70. In particular, as the user 20 operates the fixed-position AR/VR viewer 16 (e.g., changes the viewing angle 49, zooms in and out, presses the buttons 86), the computer graphics generation system 60 dynamically updates, generating and rendering AR/VR graphics 26 that are superimposed on the captured real-world images to create the AR/VR environment 22.
As may be appreciated, because the themed attraction 12 and the location of the fixed-position AR/VR viewer 16 are modeled, and the model (e.g., a three-dimensional model with spatial information, a computer-aided design (CAD) file) is stored in, or accessible by, the computer graphics generation system 60, the computer graphics generation system 60 may only need to determine the viewing angle 49 of the fixed-position AR/VR viewer 16 to determine where the user 20 is looking and to determine the proper overlay of the AR/VR graphics onto the captured real-world images. Thus, the computer graphics generation system 60 may combine the AR/VR graphics 26 and the captured real-world images more efficiently (e.g., using less computing power) to generate the AR/VR environment 22. In particular, the computer graphics generation system 60 may efficiently generate and superimpose the AR/VR graphics 26 onto the captured real-world images such that the AR/VR graphics 26 and the real-world images are aligned with a high level of realism, allowing the AR/VR graphics 26 to appear as they would under natural conditions. For example, if the AR/VR graphics 26 should be completely or partially blocked by any real-world objects (e.g., the physical structure 14, a guest, a building, an object in the real world) from the viewing angle 49 of the fixed-position AR/VR viewer 16, the computer graphics generation system 60 may generate the AR/VR graphics 26 as completely or partially transparent. As another example, the computer graphics generation system 60 may generate AR/VR graphics 26 to overlay a real-world object such that the real-world object appears to no longer exist or to have been deleted (e.g., the real-world object is completely or partially occluded by the AR/VR graphics 26).
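The occlusion behavior described above can be reduced to a per-pixel depth comparison (an illustrative sketch, not the disclosed implementation): if the stored 3D model says a real-world surface lies closer to the viewer than the graphic along the current viewing angle, the graphic is drawn transparent at that pixel.

```python
def occluded_alpha(graphic_depth: float,
                   real_depth_at_pixel: float,
                   full_alpha: float = 1.0) -> float:
    """Return the alpha with which to draw an AR/VR graphic at one pixel.

    Depths are distances from the viewer along the current viewing
    angle; real_depth_at_pixel would come from the stored 3D model of
    the attraction. The graphic is hidden where a real-world surface
    lies in front of it, and drawn at full alpha otherwise.
    """
    return 0.0 if real_depth_at_pixel < graphic_depth else full_alpha
```

This is the standard depth-test idea from real-time rendering; because the attraction geometry is pre-modeled, no live depth sensing is needed, which is the efficiency advantage the passage describes.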
Furthermore, because the AR/VR graphic 26 is generated and superimposed on the captured real-world image in substantially real-time as the real-world image is captured, the realism of the AR/VR graphic 26 may be enhanced. For example, as the user 20 zooms in via the zoom control 84 in the AR/VR environment 22, the computer graphics generation system 60 generates or updates the AR/VR graphics 26 based on optically magnified images captured by the one or more cameras 74 in substantially real-time, such that the AR/VR graphics 26 appear more realistic (as compared to AR/VR graphics 26 generated based on digitally magnified real-world images). In some embodiments, the AR/VR graphic 26 may also be generated to prompt the user 20 to select certain game options (e.g., select a character, select a team member, select a tool/utility corresponding to a game in the AR/VR environment 22), or to provide game cues to the user 20 (e.g., cues of where to explore, which elements to collect, etc.). The computer graphics generation system 60 may also generate and render sound effects via the audio transducer 88 to enhance the user's experience in the AR/VR environment 22.
In one embodiment, the computer graphics generation system 60 may generate the AR/VR environment 22 based on information relating to the user 20 (e.g., transmitted via the one or more antennas 82 and/or stored on the data server 64). For example, the computer graphics generation system 60 may display certain characters, tools/utilities, and/or game scenarios in the AR/VR environment 22 based on the user's game history, game status, membership status, and so forth. In one embodiment, the computer graphics generation system 60 may generate and render the AR/VR graphics 26 based on user input (e.g., based on signals from the zoom control 84 and/or the buttons 86). For example, the computer graphics generation system 60 may display magnified or de-magnified images in the AR/VR environment 22 depending on the degree of zoom applied by the user 20 via the zoom control 84. As another example, the computer graphics generation system 60 may display AR/VR graphics 26 that reflect game operations applied by the user 20 via the buttons 86. For instance, the AR/VR graphics 26 may show an object being moved or grabbed, corresponding to a move or grab function, in response to the user 20 pressing a button 86. As may be appreciated, the zoom control 84 and the buttons 86 may function as game controls or joysticks.
To enhance the realism of the generated environment, in certain embodiments the computer graphics generation system 60 may generate the AR/VR graphics 26 based on the real-world physical environment (e.g., lighting information detected via the one or more light sensors 78, information detected via the one or more presence sensors 80). For example, based on data collected by the one or more light sensors 78, the computer graphics generation system 60 may determine that the real-world physical environment is dark (e.g., at night). In response to this determination, the computer graphics generation system 60 may reduce the brightness of the generated AR/VR graphics 26 so that the AR/VR environment 22 is presented to the user 20 at an appropriate brightness. As another example, based on data collected by the one or more light sensors 78, the computer graphics generation system 60 may determine that the real-world physical environment is too dark. In response to this determination, the computer graphics generation system 60 may increase the brightness of the captured real-world images, prior to combining them with the AR/VR graphics 26, so that the AR/VR environment 22 is presented to the user 20 at an appropriate brightness. As a further example, the computer graphics generation system 60 may process the encoded data from the one or more presence sensors 80 and determine that the viewing angle 49 of the fixed-position AR/VR viewer 16 may be blocked or obstructed (e.g., by a real-world object or a person). In response to determining that the viewing angle 49 may be blocked or obstructed, the computer graphics generation system 60 may temporarily stop using the captured real-world images from the one or more cameras 74 for generating the AR/VR graphics 26, and may instead use previously acquired real-world images from the one or more cameras 74.
In some embodiments, to enhance the realism of the generated environment, the computer graphics generation system 60 may generate AR graphics (e.g., AR/VR graphics 26) that include real-time digital shadows. The computer graphics generation system 60 may generate real-time digital shadows for digital objects and AR objects based on the viewing angle 49 and on real-world lighting information relative to the real-world objects (e.g., lighting information detected via the one or more light sensors 78, information detected via the one or more presence sensors 80, the time of day and/or day of the year indicated by an internal clock or calendar of the computer graphics generation system 60). For example, digital shadows for the barn house 28 and the two horses 30 may be generated with a suitable shape and brightness determined based on the angle of incidence of real-world light sources (such as the sun) and real-world lighting elements, and based on whether light is blocked by a real-world object or person, relative to the viewing angle 49.
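The sun-driven part of the shadow computation above can be illustrated with a small sketch (not part of the disclosed embodiments): given the sun's elevation and azimuth, which could be derived from the time of day and day of year, compute the ground-plane shadow cast by an object of known height. Flat terrain and a single light source are simplifying assumptions.

```python
import math

def shadow_vector(object_height: float,
                  sun_elevation_deg: float,
                  sun_azimuth_deg: float) -> tuple:
    """Return (dx, dy) of a shadow tip relative to the object's base on
    flat ground, with +y pointing toward azimuth 0 (north) and +x east.

    Shadow length follows height / tan(elevation); the shadow points
    directly away from the sun. Terrain and occluders are ignored.
    """
    if sun_elevation_deg <= 0:
        return (0.0, 0.0)  # sun at or below the horizon: no cast shadow
    length = object_height / math.tan(math.radians(sun_elevation_deg))
    away = math.radians(sun_azimuth_deg + 180.0)  # opposite the sun
    return (length * math.sin(away), length * math.cos(away))
```

A renderer would then darken the composited image along that vector, scaled by the measured ambient brightness.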
Further, the computer graphics generation system 60 may be communicatively and operatively coupled to one or more remote viewing devices 65 via the communication network 62. The one or more remote viewing devices 65 may include any suitable displays (e.g., computers, video and/or audio displays) disposed within, or remote from, the amusement park 10. The one or more remote viewing devices 65 may also be mobile devices (e.g., mobile phones, smartphones, and tablets) having suitable applications (e.g., apps). The computer graphics generation system 60 may stream the generated AR/VR graphics 26 and/or sound effects to the one or more remote viewing devices 65, so that users viewing the one or more remote viewing devices 65 may see the same AR/VR graphics 26 and/or hear the same sound effects as the user 20 of the fixed-position AR/VR viewer 16.
Further, the computer graphics generation system 60 may switch between the AR environment and the VR environment in the AR/VR environment 22 based on user operation of the zoom control device 84. Fig. 4 is a schematic diagram illustrating an example transition between AR and VR environments in accordance with aspects of the present disclosure. In the illustrated embodiment, a user 20 viewing the subject attraction 12 through the display 24 may see an AR/VR graphic 26 that includes only the AR graphic 23. The user 20 may operate the fixed-position AR/VR viewer 16 (e.g., via the zoom control 84) to zoom in on one of the AR features 25 (e.g., a horse) to see details of the AR feature 25, as indicated in step 100. The user 20 may zoom in further as indicated in steps 102 and 104 to see a more magnified view of the AR feature 25 with more detail. As may be appreciated, the user 20 may operate the zoom control 84 in the opposite direction to zoom out from the AR feature 25. In another embodiment, the user 20 may operate the fixed-position AR/VR viewer 16 (e.g., via the zoom control 84) to zoom in on another one of the AR features 27 (e.g., a barn door) to see the zoomed-in details of the AR feature 27, as indicated in step 106. As indicated in step 108, user 20 may zoom in further to see a further enlarged view of AR feature 27. As the user 20 continues to zoom in, as indicated in step 110 (e.g., zooming beyond a predetermined magnification threshold), the computer graphics generation system 60 may generate the VR graphics 29 such that the user's experience transitions from an AR experience to a VR experience. For example, in step 110, the user's experience transitions to a VR environment, and the user 20 may enjoy the VR graphics 29 as if the user 20 were inside a barn room surrounded by barn animals. 
As can be appreciated, the user 20 may operate the zoom control 84 in the opposite direction to zoom out from the AR feature 27, or from any VR feature, transitioning the experience back from the VR experience to the AR experience.
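The zoom-driven handoff between the AR and VR environments described in steps 100-110 reduces to a threshold check on the current magnification. The threshold value and the per-step zoom factor below are assumptions, since the disclosure only refers to "a predetermined magnification threshold".

```python
# Hypothetical handoff point; the patent only names "a predetermined
# magnification threshold" without giving a value.
VR_MAGNIFICATION_THRESHOLD = 8.0

def render_mode(magnification):
    """AR below the threshold, VR once the user zooms beyond it."""
    return "VR" if magnification > VR_MAGNIFICATION_THRESHOLD else "AR"

def apply_zoom(current_magnification, steps, step_factor=2.0):
    """Each zoom-control step multiplies (positive) or divides (negative)
    the magnification by the step factor."""
    return current_magnification * (step_factor ** steps)
```

Zooming in past the threshold flips the mode to VR, and reversing the zoom control brings the experience back to AR, matching the bidirectional transition described above.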
In one embodiment, the computer graphics generation system 60 may be communicatively and operatively coupled to a plurality of fixed-position AR/VR viewers 16 to enable multiple users 20 to participate in the same game and/or to see the actions applied by other users 20. Fig. 5 is a schematic diagram illustrating an example of such connectivity/engagement between a plurality of fixed-position AR/VR viewers 16 in the AR/VR environment 22, in accordance with aspects of the present disclosure. In the illustrated example, the computer graphics generation system 60 is communicatively and operatively coupled to a first fixed-position AR/VR viewer 120 operated by a first user 122 and a second fixed-position AR/VR viewer 124 operated by a second user 126. Both the first user 122 and the second user 126 may see the same AR/VR features 128 shown on the respective displays 24, but from different perspectives. Further, both the first user 122 and the second user 126 may see, on the respective displays 24, the actions 129 applied by either of the first user 122 and the second user 126 (e.g., actions in the AR/VR environment 22). In the illustrated example, the first user 122 operates the first fixed-position AR/VR viewer 120 to perform an action 129, such as filling a water tank 130 for feeding a barn animal 132, as shown in area 134. For example, the first user 122 may adjust the respective perspective 49 to aim at the water tank 130, zoom in or out on the water tank 130 using the zoom control 84, and press one of the buttons 86 to begin filling the water tank 130. Correspondingly, the second user 126 may see the action 129 applied by the first user 122 (e.g., filling the water tank 130) on the respective display 24.
In one embodiment, the computer graphics generation system 60 may determine that the second user 126 is also viewing the area 134 (e.g., that the respective perspective 49 of the second fixed-position AR/VR viewer 124 overlaps a portion of the area 134), and in response to this determination, the computer graphics generation system 60 may display the same AR/VR features 128, including the result of the action 129, on the respective displays 24 of the first and second fixed-position AR/VR viewers 120, 124. In another embodiment, the computer graphics generation system 60 may determine that the second user 126 is engaged in the same game as the first user 122 (e.g., the second user 126 may provide an indication, using the user interface 68, agreeing to join the game with the first user 122), and in response to this determination, the computer graphics generation system 60 may display the same AR/VR features 128, including the result of the action 129, on the respective displays 24 of the first fixed-position AR/VR viewer 120 and the second fixed-position AR/VR viewer 124.
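The overlap determination above (deciding which viewers' perspectives cover the area where an action occurred) can be sketched as a bearing and field-of-view test. The angular model, parameter names, and field-of-view handling are illustrative assumptions; the disclosure only says the perspective "overlaps a portion of the area".

```python
def sees_region(viewer_bearing_deg, fov_deg, region_bearing_deg):
    """True if the region's bearing falls inside the viewer's horizontal FOV."""
    # Signed angular difference wrapped into [-180, 180).
    diff = (region_bearing_deg - viewer_bearing_deg + 180.0) % 360.0 - 180.0
    return abs(diff) <= fov_deg / 2.0

def viewers_sharing_action(viewer_bearings, fov_deg, action_bearing_deg):
    """Indices of viewers whose perspective overlaps the region of an action,
    and who should therefore see the same AR/VR features and its result."""
    return [i for i, b in enumerate(viewer_bearings)
            if sees_region(b, fov_deg, action_bearing_deg)]
```

A full system would intersect 3D view frusta with the region's geometry; the one-dimensional bearing test captures the idea with minimal machinery.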
FIG. 6 is a process flow diagram illustrating an embodiment of a method 140 of using the AR/VR system 15 to create an AR experience, a VR experience, and/or another computer-mediated experience. The method 140 may be representative of code or instructions stored in a non-transitory computer-readable medium (e.g., the memory 92) and executed, for example, by the processor 90 of the computer graphics generation system 60. One or more users 20 and/or other guests may enjoy the generated AR experience, VR experience, and/or other computer-mediated experience using one or more fixed-position AR/VR viewers 16 and/or one or more remote viewing devices 65. The method 140 may begin with the processor 90 receiving and analyzing (block 142) real-time data from the sensors 66 and the user interface 68 of each of the one or more fixed-position AR/VR viewers 16. The real-time data may include pictures and/or video captured by the one or more cameras 74, orientation and/or position information and/or the viewing angle 49 detected by the one or more orientation and positioning sensors 76, lighting information detected by the one or more light sensors 78, information indicating the presence of an object or user in the vicinity of the fixed-position AR/VR viewer 16 detected by the one or more presence sensors 80, information indicating the identity of the user received by the one or more antennas 82, and so forth. The real-time data may also include input (e.g., command and/or operational input) provided by the user 20 via the zoom control 84 and/or the buttons 86 of the user interface 68.
The method 140 may then continue with the processor 90 generating (block 144) game effects. In one embodiment, the game effects may include AR/VR image data and sound data generated based on the received and analyzed real-time data. For example, the game effects may include specific AR/VR image data related to a game character associated with the user 20. As another example, the game effects may include certain AR/VR features selected depending on the viewing angle 49 of the fixed-position AR/VR viewer 16 as adjusted by the user 20 (e.g., depending on the attention/viewing interest of the user 20). As yet another example, the game effects may include a transition between the AR and VR environments, depending on the zoom applied by the user 20 (e.g., zooming in beyond a certain threshold to transition from the AR environment to the VR environment). As a further example, the game effects may include AR/VR image data coordinated across multiple fixed-position AR/VR viewers 16 such that multiple users 20 may share the same game experience. The game effects may also include sound data corresponding to the AR/VR image data.
The method 140 may then continue with the processor 90 overlaying (block 146) or superimposing the generated game effects onto a generated visualization of the real-world environment. The processor 90 may generate a video data stream of real-world images (e.g., of the structure 14 and the barn room 28 shown in FIG. 1) and overlay or superimpose the AR/VR graphics 26 (e.g., the two horses 30 shown in FIG. 1) onto the real-world images using one or more video merging and/or optical merging techniques. As an example, the processor 90 of the graphics generation system 60 may render the AR/VR graphics 26 consistent with the user's 20 operation of the fixed-position AR/VR viewer 16 to view certain features (e.g., based on the perspective 49), or after a predetermined lapse of time. The graphics generation system 60 may perform one or more geometric or photometric recognition algorithms on the video or image data captured via the one or more cameras 74 to determine the perspective 49 and the time at which to introduce the AR/VR graphics 26. The graphics generation system 60 may also determine when to introduce the AR/VR graphics 26 based on input provided by the user 20 via the user interface 68.
The method 140 may then conclude with the processor 90 transmitting (block 148) the game effects (e.g., the overlaid AR/VR graphics 26 along with the real-world environment data and/or the sound effects) and displaying (block 150) them on the display 24 of the respective one or more fixed-position AR/VR viewers 16 and/or on the one or more remote viewing devices 65.
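The four stages of method 140 (block 142: receive and analyze, block 144: generate game effects, block 146: overlay, blocks 148/150: transmit and display) can be sketched end-to-end as follows. The magnification threshold, function names, and string placeholders are illustrative assumptions, not elements of the claimed method.

```python
VR_THRESHOLD = 8.0  # assumed magnification for the AR-to-VR handoff

def generate_game_effects(user_input):                     # block 144
    """Pick AR or VR graphics from the analyzed real-time user input."""
    mode = "VR" if user_input.get("zoom", 1.0) > VR_THRESHOLD else "AR"
    return {"mode": mode, "graphics": f"{mode}-graphics"}

def overlay(camera_frame, effects):                        # block 146
    """Superimpose the effects; in VR mode the generated graphics
    replace the real-world frame entirely."""
    base = camera_frame if effects["mode"] == "AR" else "vr-scene"
    return (base, effects["graphics"])

def mixed_reality_frame(camera_frame, user_input):         # blocks 142, 148/150
    """One frame of method 140: analyze input, generate effects, composite;
    the returned composite is what would be transmitted and displayed."""
    effects = generate_game_effects(user_input)
    return overlay(camera_frame, effects)
```

Running the pipeline once per captured frame yields the streaming media that blocks 148 and 150 transmit to the viewers and remote devices.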
While only certain features of the present embodiments have been illustrated and described herein, many modifications and changes will occur to those skilled in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the disclosure. Further, it is to be understood that certain elements of the disclosed embodiments may be combined with or interchanged with one another.
Claims (21)
1. A mixed reality viewing system, comprising:
a viewer configured to be secured to a stable platform and operable by a user to view a subject through the viewer, wherein the viewer comprises:
a display device;
a user interface including a zoom control; and
at least one sensor comprising at least one camera;
and
a computer graphics generation system communicatively coupled to the viewer and configured to:
generate streaming media of a real-world environment based on image data captured via the at least one camera of the viewer;
generate augmented reality graphics, virtual reality graphics, or both, superimposed onto the streaming media of the real-world environment; and
transmit the streaming media of the real-world environment, with the superimposed augmented reality graphics, virtual reality graphics, or both, to be displayed on the display device of the viewer.
2. The mixed reality viewing system of claim 1, wherein the computer graphics generation system is configured to:
in response to determining that the zoom control is zoomed to a first magnification, transmit the streaming media of the real-world environment with the superimposed augmented reality graphics; and
in response to determining that the zoom control is zoomed to a second magnification that is higher than the first magnification, transmit a magnified view of the real-world environment with the superimposed virtual reality graphics.
3. The mixed reality viewing system of claim 1, wherein the viewer comprises:
a viewer portion including the display device and the user interface, wherein the viewer portion is rotatable by a user to adjust a viewing angle; and
a fixture portion rotatably coupled to the viewer portion and fixedly coupling the viewer to the stable platform.
4. The mixed reality viewing system of claim 1, wherein the display device of the viewer comprises a first display and a second display, and wherein the first display is configured to display the streaming media to a first eye of the user and the second display is configured to display the streaming media to a second eye of the user.
5. The mixed reality viewing system of claim 1, wherein the display device is configured such that guests other than the user may also see the streaming media.
6. The mixed reality viewing system of claim 1, wherein the viewer comprises a speaker.
7. The mixed reality viewing system of claim 1, wherein each of the at least one camera of the viewer comprises a digital high speed camera having a horizontal resolution of about 4000 pixels.
8. The mixed reality viewing system of claim 1, wherein the at least one sensor comprises one or more orientation and positioning sensors, one or more multiple degree of freedom inertial measurement units, one or more light sensors, one or more presence sensors, one or more antennas, or a combination thereof.
9. The mixed reality viewing system of claim 1, comprising a remote display device, wherein the computer graphics generation system is configured to transmit streaming media of the real-world environment together with the superimposed augmented reality graphics, virtual reality graphics, or both, to be displayed on the remote display device.
10. The mixed reality viewing system of claim 1, wherein the computer graphics generation system has a model of a real-world environment surrounding the viewer, wherein the model includes spatial information of the viewer relative to the real-world environment.
11. A fixed position viewer, comprising:
a display configured to display streaming media to a user, wherein the displayed streaming media comprises a Mixed Reality (MR) environment comprising Augmented Reality (AR) graphics, Virtual Reality (VR) graphics, or both;
a camera configured to capture image data of a real-world environment surrounding the fixed position viewer; and
at least one sensor configured to collect information related to the generation of the streaming media.
12. The fixed position viewer of claim 11, comprising:
a user interface comprising a zoom control configured to enable a user to zoom in and out on the displayed streaming media, wherein the fixed position viewer is communicatively coupled to a computer graphics generation system to:
transmit the captured image data and the information collected by the at least one sensor to the computer graphics generation system;
receive streaming media rendered by the computer graphics generation system, the rendering based at least on the captured image data of the real-world environment and the information collected by the at least one sensor; and
in response to determining that the zoom control is zoomed beyond a predetermined threshold magnification, transition the displayed streaming media from the AR environment to the VR environment.
13. The fixed position viewer of claim 11, wherein the camera comprises a digital high speed camera having a frame rate of about 60 frames per second and a horizontal resolution of about 4000 pixels.
14. The fixed position viewer of claim 11, comprising one or more speakers configured to provide audio effects corresponding to the displayed streaming media.
15. The fixed position viewer of claim 11, wherein the at least one sensor comprises one or more orientation and positioning sensors, one or more multiple degree of freedom inertial measurement units, one or more light sensors, one or more presence sensors, one or more antennas, or a combination thereof.
16. The fixed position viewer of claim 11, wherein the display is configured such that one or more guests other than a user of the fixed position viewer can view the displayed streaming media.
17. The fixed position viewer of claim 11, wherein the information collected by the at least one sensor comprises user information, and wherein at least some content of the displayed streaming media is customized for the user.
18. A method, comprising:
receiving and analyzing real-time data via a computer graphics generation system, wherein receiving the real-time data comprises receiving data from a user interface and at least one sensor of a fixed position viewer;
generating, via the computer graphics generation system, a game effect based on the received real-time data, wherein the game effect comprises Augmented Reality (AR) graphics, Virtual Reality (VR) graphics, or both;
superimposing, via the computer graphics generation system, the generated game effect onto a visualization of a real-world environment to produce a mixed reality visualization;
communicating, via the computer graphics generation system, the mixed reality visualization to the fixed position viewer; and
displaying, via the computer graphics generation system, the mixed reality visualization on a display of the fixed position viewer.
19. The method of claim 18, wherein generating game effects via the computer graphics generation system comprises:
in response to determining that a zoom control of the user interface is zoomed to a first magnification, generating, via the computer graphics generation system, an AR graphic, wherein the AR graphic is superimposed onto the visualization of the real-world environment; and
in response to determining that the zoom control of the user interface is zoomed to a second magnification, generating, via the computer graphics generation system, a VR graphic, wherein the second magnification is greater than the first magnification, and wherein the VR graphic replaces the visualization of the real-world environment.
20. The method of claim 18, wherein receiving the real-time data comprises: receiving, via the computer graphics generation system, data collected via at least one or more orientation and positioning sensors, one or more multiple degree of freedom inertial measurement units, one or more light sensors, one or more presence sensors, or one or more antennas of the fixed position viewer.
21. The method of claim 18, comprising:
transmitting, via the computer graphics generation system, another mixed reality visualization to an additional fixed position viewer, wherein the other mixed reality visualization comprises the same visual content as the mixed reality visualization displayed on the fixed position viewer, rendered from the perspective of the additional fixed position viewer; and
displaying, via the computer graphics generation system, the other mixed reality visualization on a respective display of the additional fixed position viewer.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US62/467817 | 2017-03-06 | ||
| US15/818463 | 2017-11-20 |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| HK40016105A true HK40016105A (en) | 2020-09-04 |
| HK40016105B HK40016105B (en) | 2024-06-14 |