CN116645493A - Method, device and medium for presenting augmented reality data - Google Patents
- Publication number
- CN116645493A (Application No. CN202310671831.0A)
- Authority
- CN
- China
- Prior art keywords
- scene
- data information
- target
- anchor point
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating three-dimensional [3D] models or images for computer graphics
- G06T19/006—Mixed reality
Landscapes
- Engineering & Computer Science (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Processing Or Creating Images (AREA)
Abstract
The present application aims to provide a method, device and medium for presenting augmented reality data, wherein the method comprises the following steps: obtaining augmented reality data, wherein the augmented reality data comprises anchor point data information corresponding to at least one anchor point, scene data information corresponding to at least one scene, and object data information corresponding to at least one piece of augmented reality presentation information, wherein an association relationship exists between the anchor point data information and the scene data information, and an inclusion relationship exists between the scene data information and the object data information; identifying the at least one anchor point in the real environment according to the anchor point data information; if a target anchor point is identified, determining a target scene associated with the target anchor point according to the scene data information; and presenting, according to the object data information, at least one piece of augmented reality presentation information contained in the target scene.
Description
Technical Field
The present application relates to the field of communications, and in particular to a technique for presenting augmented reality data.
Background
In recent years, science and technology have developed rapidly, and AR (Augmented Reality) technology has matured and gradually entered people's field of view. An AR application can be expressed by augmented reality data, and how to present that augmented reality data to meet users' AR requirements has become a topic of interest.
Disclosure of Invention
It is an object of the present application to provide a method, apparatus and medium for presenting augmented reality data.
According to one aspect of the present application, there is provided a method for presenting augmented reality data, the method comprising:
obtaining augmented reality data, wherein the augmented reality data comprises anchor point data information corresponding to at least one anchor point, scene data information corresponding to at least one scene and object data information corresponding to at least one augmented reality presentation information, wherein an association relationship exists between the anchor point data information and the scene data information, and an inclusion relationship exists between the scene data information and the object data information;
identifying the at least one anchor point in the real environment according to the anchor point data information;
if a target anchor point is identified, determining a target scene associated with the target anchor point according to the scene data information;
and presenting, according to the object data information, at least one piece of augmented reality presentation information contained in the target scene.
According to one aspect of the present application there is provided a computer device for presenting augmented reality data, the device comprising:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to:
obtaining augmented reality data, wherein the augmented reality data comprises anchor point data information corresponding to at least one anchor point, scene data information corresponding to at least one scene and object data information corresponding to at least one augmented reality presentation information, wherein an association relationship exists between the anchor point data information and the scene data information, and an inclusion relationship exists between the scene data information and the object data information;
identifying the at least one anchor point in the real environment according to the anchor point data information;
if a target anchor point is identified, determining a target scene associated with the target anchor point according to the scene data information;
and according to the object data information, presenting at least one piece of augmented reality presentation information contained in the target scene.
According to one aspect of the application, there is provided a computer readable medium storing instructions that, when executed, cause a system to:
obtaining augmented reality data, wherein the augmented reality data comprises anchor point data information corresponding to at least one anchor point, scene data information corresponding to at least one scene and object data information corresponding to at least one augmented reality presentation information, wherein an association relationship exists between the anchor point data information and the scene data information, and an inclusion relationship exists between the scene data information and the object data information;
identifying the at least one anchor point in the real environment according to the anchor point data information;
if a target anchor point is identified, determining a target scene associated with the target anchor point according to the scene data information;
and according to the object data information, presenting at least one piece of augmented reality presentation information contained in the target scene.
According to one aspect of the present application there is provided a computer device for presenting augmented reality data, the device comprising:
the system comprises a one-to-one module, a one-to-one module and a one-to-one module, wherein the augmented reality data comprises anchor point data information corresponding to at least one anchor point, scene data information corresponding to at least one scene and object data information corresponding to at least one augmented reality presentation information, an association relationship exists between the anchor point data information and the scene data information, and an inclusion relationship exists between the scene data information and the object data information;
The two-module is used for identifying the at least one anchor point in the real environment according to the anchor point data information;
the three modules are used for determining a target scene associated with the target anchor point according to the scene data information if the target anchor point is identified;
and the four modules are used for presenting at least one piece of augmented reality presentation information contained in the target scene according to the object data information.
Compared with the prior art, the present application obtains augmented reality data, wherein the augmented reality data comprises anchor point data information corresponding to at least one anchor point, scene data information corresponding to at least one scene, and object data information corresponding to at least one piece of augmented reality presentation information, an association relationship exists between the anchor point data information and the scene data information, and an inclusion relationship exists between the scene data information and the object data information; identifies the at least one anchor point in the real environment according to the anchor point data information; if a target anchor point is identified, determines a target scene associated with the target anchor point according to the scene data information; and presents, according to the object data information, at least one piece of augmented reality presentation information contained in the target scene. In this way, after augmented reality data expressing an AR application is generated, AR content can be presented using that augmented reality data, so that the AR requirements of users can be met conveniently and quickly; moreover, the differences between different rendering engines in different APPs at different ends can be bridged, so that the augmented reality data can be used in different APPs, which facilitates effective utilization of resources.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the detailed description of non-limiting embodiments, made with reference to the accompanying drawings in which:
FIG. 1 illustrates a flow chart of a method for presenting augmented reality data according to one embodiment of the application;
FIG. 2 illustrates a block diagram of a computer device for presenting augmented reality data according to one embodiment of the application;
FIG. 3 illustrates an exemplary system that may be used to implement various embodiments described in the present application.
The same or similar reference numbers in the drawings refer to the same or similar parts.
Detailed Description
The application is described in further detail below with reference to the accompanying drawings.
In one exemplary configuration of the application, the terminal, the device of the service network, and the trusted party each include one or more processors (e.g., central processing units (Central Processing Unit, CPU)), input/output interfaces, network interfaces, and memory.
The memory may include a non-persistent memory in a computer readable medium, a Random Access Memory (RAM) and/or a non-volatile memory, such as a Read-Only Memory (ROM) or a flash memory. Memory is an example of a computer-readable medium.
Computer readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, Phase-Change Memory (PCM), Programmable Random Access Memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Disc (DVD) or other optical storage, magnetic cassettes, magnetic tape storage or other magnetic storage devices, or any other non-transmission medium which can be used to store information that can be accessed by a computing device.
The device referred to in the present application includes, but is not limited to, a user device, a network device, or a device formed by integrating a user device and a network device through a network. The user device includes, but is not limited to, any mobile electronic product capable of man-machine interaction with a user (for example, man-machine interaction through a touch pad), such as a smart phone or a tablet computer, and the mobile electronic product may adopt any operating system, such as the Android operating system or the iOS operating system. The network device includes an electronic device capable of automatically performing numerical calculation and information processing according to preset or stored instructions, the hardware of which includes, but is not limited to, a microprocessor, an Application Specific Integrated Circuit (ASIC), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like. The network device includes, but is not limited to, a computer, a network host, a single network server, a set of multiple network servers, or a cloud of servers; here, the cloud is composed of a large number of computers or network servers based on Cloud Computing, a kind of distributed computing in which a virtual supercomputer is composed of a group of loosely coupled computers. The network includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a VPN network, a wireless ad hoc network (Ad Hoc network), and the like. Preferably, the device may also be a program running on the user device, the network device, or a device formed by integrating the user device with the network device, the touch terminal, or the network device with the touch terminal through a network.
Of course, those skilled in the art will appreciate that the above-described devices are merely examples, and other devices now known or hereafter developed, if applicable to the present application, are also intended to be within the scope of the present application and are incorporated herein by reference.
In the description of the present application, the meaning of "a plurality" is two or more unless explicitly defined otherwise.
Fig. 1 shows a flow chart of a method for presenting augmented reality data according to an embodiment of the application, the method comprising step S11, step S12, step S13 and step S14. In step S11, the computer device acquires augmented reality data, where the augmented reality data includes anchor point data information corresponding to at least one anchor point, scene data information corresponding to at least one scene, and object data information corresponding to at least one augmented reality presentation information, where an association exists between the anchor point data information and the scene data information, and an inclusion exists between the scene data information and the object data information; in step S12, the computer device identifies the at least one anchor point in the real environment according to the anchor point data information; in step S13, if the computer device identifies a target anchor point, determining a target scene associated with the target anchor point according to the scene data information; in step S14, at least one augmented reality presentation information contained in the target scene is presented according to the object data information.
In step S11, the computer device acquires augmented reality data, where the augmented reality data includes anchor point data information corresponding to at least one anchor point, scene data information corresponding to at least one scene, and object data information corresponding to at least one piece of augmented reality presentation information, where an association relationship exists between the anchor point data information and the scene data information, and an inclusion relationship exists between the scene data information and the object data information. In some embodiments, the augmented reality data is in a data format generated based on a predetermined AR description standard, which includes, but is not limited to, one or more of the following: (1) describing the manner in which various digital objects exist in space (including, but not limited to, one or more of position, angle, size, follow, orientation, follow strategy, whether hidden, etc.); (2) defining presentation properties, events and actions of the digital objects, such as whether a video is played in a loop, the refresh frequency of IoT data, and the rotation/disassembly/assembly of a 3D model; (3) defining relationships between digital objects, such as label following and event association; (4) defining interaction logic, including human interaction with the digital objects, attribute modification of the digital objects, event triggering, and the like; (5) describing the relationship between anchor points and virtual scenes. The standard is not only a file format but also a delivery format for data content in API calls, and can provide an efficient, extensible and interoperable format for the content transmission and loading required by AR, thereby bridging the differences between different rendering engines at different ends, facilitating effective utilization of resources, and avoiding repeated development.
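Such a description standard is typically serialized as JSON. The following purely illustrative sketch (the field names and values are hypothetical, chosen only to mirror the fields discussed in this description, and are not mandated by the application) shows how the cross-references between anchor points, scenes and objects can be validated:

```python
import json

# Hypothetical minimal instance of the AR description format:
# anchors trigger scenes, scenes contain objects (by uuid).
AR_DATA = json.loads("""
{
  "anchors": [{"uuid": "a1", "name": "poster", "type": "picture",
               "url": "https://example.com/poster.png",
               "width": 0.5, "height": 0.7, "scenes": ["s1"]}],
  "scenes":  [{"uuid": "s1", "name": "demo scene",
               "hrsObjectIds": ["o1", "o2"]}],
  "objects": [{"uuid": "o1", "type": "3dModel", "uri": "model.glb"},
              {"uuid": "o2", "type": "video",   "uri": "clip.mp4"}]
}
""")

def validate(data: dict) -> bool:
    """Check that every scene referenced by an anchor exists, and that
    every object referenced by a scene exists (the association and
    inclusion relationships described above)."""
    scene_ids = {s["uuid"] for s in data["scenes"]}
    object_ids = {o["uuid"] for o in data["objects"]}
    anchors_ok = all(set(a.get("scenes", [])) <= scene_ids
                     for a in data["anchors"])
    scenes_ok = all(set(s.get("hrsObjectIds", [])) <= object_ids
                    for s in data["scenes"])
    return anchors_ok and scenes_ok

print(validate(AR_DATA))  # True for this well-formed instance
```

Because the data is plain content in a fixed format, any application with any rendering engine can consume it after a check of this kind.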
In some embodiments, the augmented reality data is used to describe how to augment reality with rich, interactable AR material, which can be broken down into the following three points: how to add AR material by AR means; what the AR materials are and how they are presented; and how the AR materials relate to each other and interact with reality. In some embodiments, an AR application may be expressed by the augmented reality data. In some embodiments, the augmented reality data is purely data content in a predetermined format and is not tied to any particular running environment, which makes it available to any application, using any rendering technology.
In some embodiments, an anchor point (AR anchor) is used to express an association between physical space and digital space, and the anchor point data information includes, but is not limited to, one or more of uuid (anchor point identification information, a globally unique number), name, type (different anchor point types correspond to different recognition algorithms), url (the resource address of the anchor point), snapshotImage (the address of the anchor point's guide image), width (the width of the anchor point in physical space), height (the height of the anchor point in physical space), attributes (the attributes of the anchor point; different anchor point types may correspond to different attributes), etc. Those skilled in the art should understand that the anchor point data information may include only one of these items or a combination of several of them, and is not limited thereto.
In some embodiments, a scene (virtual scene) is an entity containing the AR material objects to be rendered, i.e., a set of AR material objects to be rendered, and the scene data information includes, but is not limited to, one or more of uuid (scene identification information, a globally unique number), name, hrsObjectIds (an array of object uuids), translation (three-dimensional position information), rotation (rotation angle), scale (scaling), followStrategy (follow strategy), faceCamera (whether facing the camera), hidden (whether hidden), visualRange (visible distance), subScenes (an array of sub-scenes), etc. The follow strategy determines how an entity (an anchor point, scene, object, action, event, etc. in the augmented reality data may all be referred to as entities) is displayed at a spatial position, and includes, but is not limited to, one or more of a follow type, a screen uv position, a screen position offset, a screen alignment, and the like, wherein the follow type includes at least any one of the following: follow space (position relative to the space coordinate system), follow screen (the entity is always displayed on the screen), follow camera (the position relative to the camera can be represented by a transformation), follow object (the position relative to a certain object can be represented by a transformation), and follow link/anchor point (the position relative to a certain anchor point can be represented by translation, rotation and scale). When faceCamera is true, the entity adjusts its angle along with the camera so that its front face always faces the camera; when hidden is true, the entity is not rendered; visualRange indicates the distance from which the entity can be seen, i.e., the entity is visible only while the distance between the camera and the entity is less than visualRange. The sub-scenes constitute a subset of the AR material objects to be rendered, and one sub-scene exists in only one scene.
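The hidden and visualRange semantics above amount to a simple visibility test. A minimal sketch, assuming positions are plain 3D tuples and the field names match those listed (this is an illustration, not the claimed implementation):

```python
import math

def is_rendered(entity: dict, camera_pos, entity_pos) -> bool:
    """An entity is skipped when hidden, and otherwise drawn only while
    the camera is within its visualRange (field names illustrative)."""
    if entity.get("hidden", False):
        return False
    dist = math.dist(camera_pos, entity_pos)  # Euclidean distance
    return dist < entity.get("visualRange", float("inf"))

print(is_rendered({"visualRange": 10.0}, (0, 0, 0), (0, 0, 5)))   # True
print(is_rendered({"visualRange": 10.0}, (0, 0, 0), (0, 0, 15)))  # False
print(is_rendered({"hidden": True}, (0, 0, 0), (0, 0, 1)))        # False
```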
In some embodiments, a scene is triggered by an AR anchor point, and an association relationship exists between the anchor point data information and the scene data information: one piece of anchor point data information may be associated with at least one piece of scene data information, and one piece of scene data information may likewise be associated with at least one piece of anchor point data information; that is, one anchor point may establish an association with at least one scene, and one scene may establish an association with at least one anchor point. The association relationship may be embodied in the augmented reality data by including, in the anchor point data information, the scene identification information (e.g., uuid) of the scene data information associated with it, or by including, in the scene data information, the anchor point identification information (e.g., uuid) of the anchor point data information associated with it.
In some embodiments, the types of augmented reality presentation information (i.e., AR material objects) include, but are not limited to, one or more of 3D models, text, pictures, audio, video, web pages, PDF documents, applications, points, lines, polygons, ellipses, free brushes, and the like. In some embodiments, an inclusion relationship exists between the scene data information and the object data information: one piece of scene data information may include at least one piece of object data information, that is, one scene may contain at least one piece of augmented reality presentation information, and the scene data information may include some or all of the object data information, such as the object identification information. In some embodiments, the augmented reality presentation information is the minimum unit to be rendered in the scene, and the object data information defines a single entity with a specific function together with its position, angle and scaling. The object data information includes, but is not limited to, one or more of uuid (object identification information, a globally unique number), name, translation (three-dimensional position information), rotation (rotation angle), scale (scaling), followStrategy (follow strategy), faceCamera (whether facing the camera), hidden (whether hidden), type (the type of the object), uri (the resource address of the object), visualRange (visible distance), attributes (the attribute set of the object), etc., wherein the type of the object determines its rendering effect, presentation properties and specific actions.
In step S12, the computer device identifies the at least one anchor point in the real environment according to the anchor point data information. In some embodiments, the anchor point data information includes the anchor point resource corresponding to at least one anchor point, or the address information of that anchor point resource, through which the corresponding anchor point resource (such as a two-dimensional code, a recognition image, feature points of the real environment, a point cloud, location information, or wireless signal information in physical space) can be obtained. In some embodiments, anchor point recognition is performed on the real environment (e.g., the camera picture) according to the anchor point resources corresponding to the at least one anchor point and/or other information in the anchor point data information, to identify whether the at least one anchor point exists in the real environment. In some embodiments, if the number of anchor points stored in the database is greater than or equal to a predetermined number threshold, the anchor points stored in the database may be divided into a plurality of groups, for example grouped by scene, project or position, each group including at least one anchor point; during recognition, the corresponding group may first be selected or identified, and anchor point recognition is then performed on the real environment within that group, thereby improving the efficiency of anchor point recognition.
For example, if the database stores anchor points created at two locations (such as factory A and factory B), the anchor points stored in the database can be divided into two groups according to the location information. When a user performs anchor point recognition at factory A, only the information scanned by the user in the real environment (for example, the camera picture) needs to be searched and matched against the anchor point set corresponding to factory A stored in the database, rather than against all the anchor points created at both factories, which improves recognition efficiency.
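The grouping described in this example can be sketched as a simple index keyed by location. The `location` field and the group keys below are hypothetical illustrations of the grouping attribute, not fields required by the application:

```python
from collections import defaultdict

def group_anchors(anchors):
    """Index anchor records by a grouping key (here: a hypothetical
    'location' attribute) so that recognition only has to search the
    relevant group rather than the whole database."""
    groups = defaultdict(list)
    for a in anchors:
        groups[a.get("location", "default")].append(a)
    return groups

anchors = [
    {"uuid": "a1", "location": "factory-A"},
    {"uuid": "a2", "location": "factory-A"},
    {"uuid": "a3", "location": "factory-B"},
]
groups = group_anchors(anchors)
# A user scanning at factory A only searches the factory-A group:
candidates = groups["factory-A"]
print([a["uuid"] for a in candidates])  # ['a1', 'a2']
```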
In step S13, if the computer device identifies a target anchor point, a target scene associated with the target anchor point is determined according to the scene data information. In some embodiments, if it is identified that a target anchor point among the at least one anchor point exists in the real environment, target scene data information having an association relationship with the anchor point data information corresponding to the target anchor point is determined according to the scene data information, and the scene corresponding to the target scene data information is taken as the target scene associated with the target anchor point. Specifically, if the anchor point identification information of the anchor point data information is included in the scene data information, the at least one piece of scene data information corresponding to the at least one scene is matched according to the anchor point identification information of the target anchor point to obtain the target scene data information containing that anchor point identification information, and the corresponding scene is taken as the target scene associated with the target anchor point; alternatively, if the scene identification information of the scene data information is included in the anchor point data information, the target scene data information containing that scene identification information is determined among the at least one piece of scene data information according to the scene identification information in the anchor point data information corresponding to the target anchor point, and the corresponding scene is taken as the target scene associated with the target anchor point.
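Both directions of this association lookup can be sketched as follows. The `scenes` and `anchors` fields holding the cross-reference uuids are illustrative assumptions about where the identification information is stored:

```python
def find_target_scene(target_anchor, scenes):
    """Resolve the scene associated with a recognized anchor point.
    The association may be stored on either side: the anchor record may
    list scene uuids, or a scene record may list anchor uuids."""
    # Case 1: the anchor point data carries the associated scene uuid(s).
    for sid in target_anchor.get("scenes", []):
        for s in scenes:
            if s["uuid"] == sid:
                return s
    # Case 2: the scene data carries the associated anchor point uuid(s).
    for s in scenes:
        if target_anchor["uuid"] in s.get("anchors", []):
            return s
    return None  # no associated scene found

scenes = [{"uuid": "s1", "anchors": ["a1"]}, {"uuid": "s2"}]
print(find_target_scene({"uuid": "a1"}, scenes)["uuid"])           # s1
print(find_target_scene({"uuid": "a9", "scenes": ["s2"]}, scenes)["uuid"])  # s2
```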
In step S14, at least one piece of augmented reality presentation information contained in the target scene is presented according to the object data information. In some embodiments, the scene data information corresponding to a scene may include the object identification information of the object data information corresponding to the at least one piece of augmented reality presentation information contained in that scene, and the at least one piece of object data information containing that object identification information is determined from the object data information corresponding to the at least one piece of augmented reality presentation information according to the at least one piece of object identification information included in the scene data information corresponding to the target scene. In some embodiments, the object data information includes an object resource (i.e., an AR material resource) or the address information of the object resource, through which the corresponding object resource can be acquired. In some embodiments, the position of the at least one piece of augmented reality presentation information in the live camera view captured by the computer device is determined according to the object data information corresponding to the at least one piece of augmented reality presentation information contained in the target scene associated with the target anchor point and the real-time pose information of the computer device, and the at least one piece of augmented reality presentation information is presented in a superimposed manner at that position. In other embodiments, the at least one piece of augmented reality presentation information is presented at, or in an area near, the position of the target anchor point (such as the position at which the target anchor point was identified in the real environment).
In some embodiments, the scene data information corresponding to the target scene includes local spatial transformation attributes of the target scene (for example, a translation (three-dimensional position information) attribute, a rotation (a quaternion representing the rotation angle) attribute, and a scale (three-dimensional scaling) attribute); a local spatial transformation of the target scene needs to be performed according to these attributes, and the at least one piece of augmented reality presentation information is presented in the transformed target scene. In some embodiments, the object data information corresponding to each piece of augmented reality presentation information includes local spatial transformation attributes of that object; a local spatial transformation needs to be performed on each object according to those attributes, and the at least one piece of transformed augmented reality presentation information is presented in the target scene.
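As an illustration, applying the translation/rotation/scale attributes to a point can be written out directly. The scale-then-rotate-then-translate composition order and the (w, x, y, z) quaternion layout are assumptions chosen for the sketch, not requirements stated by the application:

```python
def apply_transform(point, translation, rotation, scale):
    """Apply a local-space transform (scale, then quaternion rotation,
    then translation) to a 3D point. rotation is a unit quaternion
    (w, x, y, z); this mirrors the translation/rotation/scale
    attributes described above."""
    px, py, pz = (point[i] * scale[i] for i in range(3))
    w, x, y, z = rotation
    # Standard quaternion rotation, expanded to matrix form.
    rx = (1 - 2*(y*y + z*z))*px + 2*(x*y - w*z)*py + 2*(x*z + w*y)*pz
    ry = 2*(x*y + w*z)*px + (1 - 2*(x*x + z*z))*py + 2*(y*z - w*x)*pz
    rz = 2*(x*z - w*y)*px + 2*(y*z + w*x)*py + (1 - 2*(x*x + y*y))*pz
    return (rx + translation[0], ry + translation[1], rz + translation[2])

# Identity rotation, uniform scale 2, translation by (1, 0, 0):
print(apply_transform((1, 1, 1), (1, 0, 0), (1, 0, 0, 0), (2, 2, 2)))
# (3, 2, 2)
```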
In some embodiments, the anchor point data information includes an anchor point type, wherein the anchor point type includes any one of the following: a picture; picture feature points; a point cloud; a point cloud map; a two-dimensional code; a cylinder; a cube; a geographic location; a face; a skeleton; a wireless signal. In some embodiments, different anchor point types may correspond to different attributes; for example, when the anchor point type is a point cloud, the attributes include, but are not limited to, one or more of the corresponding algorithm name, algorithm version, corresponding live-action screenshot, and live-action screenshot address, and when the anchor point type is a geographic location, the attributes include, but are not limited to, one or more of GIS coordinates, longitude, latitude, and altitude. Optionally, in some embodiments, the anchor point data information further includes anchor point resources (e.g., a picture file, a feature point file corresponding to a picture, a point cloud file, etc.) or the storage address information of the anchor point resources (e.g., a picture file address, the address of a feature point file corresponding to a picture, a point cloud file address, etc.). In some embodiments, the wireless signal includes, but is not limited to, Bluetooth, WiFi, ZigBee, and the like.
In some embodiments, the augmented reality data further includes at least one piece of link data information describing the association relationship between the anchor point data information and the scene data information; wherein the determining the target scene associated with the target anchor point comprises: determining the target scene associated with the target anchor point according to the at least one piece of link data information. In some embodiments, the link data information defines how an anchor point is used, and describes the association and/or relative position of a scene with respect to the anchor point; for example, the link data information includes the identification information of the anchor point, the identification information of the scene associated with the anchor point, and/or the relative position information of the scene with respect to the anchor point, so that the link data information defines where in real space the scene should be presented. In some embodiments, the link data information includes, but is not limited to, one or more of uuid (link identification information, a globally unique number), name, type (link type), the uuid of the AR anchor point (indicating which anchor point's usage the link describes), translation (three-dimensional position information, relative to the scene coordinate system), rotation (rotation angle, relative to the scene coordinate system), scale (relative to the scene coordinate system), the uuid of the scene associated with the anchor point (indicating the scene triggered by the anchor point), and conditions (an array of conditions describing when the link is valid).
In some embodiments, each link data information includes the anchor point identification information of the anchor point data information corresponding to an anchor point and the scene identification information of the scene data information corresponding to the scene associated with that anchor point; wherein the determining, according to the at least one link data information, a target scene associated with the target anchor point includes: determining target link data information corresponding to the target anchor point from the at least one link data information; and determining a target scene associated with the target anchor point according to the target link data information. In some embodiments, the at least one link data information is matched according to the anchor point identification information of the anchor point data information corresponding to the target anchor point to obtain the target link data information containing that anchor point identification information, and then, according to the scene identification information in the target link data information, the scene corresponding to that scene identification information is used as the target scene associated with the target anchor point.
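The matching step above can be sketched as a lookup over the link records. The field names `arAnchorUuid` and `virtualSceneId` are illustrative readings of the link structure described in this text, not a normative schema:

```python
# Sketch: resolve the target scene from the link data information by
# matching the recognized anchor's uuid against each link record.

links = [
    {"uuid": "link-1", "arAnchorUuid": "anchor-001", "virtualSceneId": "scene-A"},
    {"uuid": "link-2", "arAnchorUuid": "anchor-002", "virtualSceneId": "scene-B"},
]

def find_target_scene(links, target_anchor_uuid):
    """Return the uuid of the scene triggered by the target anchor, if any."""
    for link in links:
        if link["arAnchorUuid"] == target_anchor_uuid:
            return link["virtualSceneId"]
    return None  # no link describes this anchor
```

A real implementation would also evaluate the link's `conditions` array before accepting the match; that check is omitted here.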
In some embodiments, each link data information further includes a link type, where the link type includes any one of a scene type and a tracking type; wherein the determining, according to the target link data information, a target scene associated with the target anchor point includes: determining, according to the target link data information, a target scene associated with the target anchor point and presentation attribute information corresponding to the target scene. In some embodiments, the presentation attribute information is used to characterize how the target scene is presented based on the target anchor point; for example, the presentation attribute information may indicate whether the target scene tracks the movement of the target anchor point. In some embodiments, if the link type is the scene type, the target scene is automatically loaded when the target anchor point is matched, and subsequent movement of the target anchor point does not affect the target scene; if the link type is the tracking type, the target scene tracks the target anchor point, that is, the target scene is loaded only when the target anchor point appears, and the target scene subsequently moves following the movement of the target anchor point.
In some embodiments, each link data information further includes a pose relationship between the anchor point and the scene associated with the anchor point; wherein the determining, according to the target link data information, a target scene associated with the target anchor point includes: determining, according to the target link data information, a target scene associated with the target anchor point and presentation position information corresponding to the target scene. In some embodiments, the pose relationship includes, but is not limited to, at least one of translation (three-dimensional position information, relative to the scene coordinate system), rotation (a four-element array describing the rotation angle as a quaternion, relative to the scene coordinate system), and scale (a three-dimensional scaling factor, relative to the scene coordinate system). In some embodiments, based on the position information of the target anchor point and the pose relationship, a presentation position of the target scene associated with the target anchor point may be determined, so as to present the target scene at that presentation position, that is, to present the at least one augmented reality presentation information contained in the target scene at the presentation position.
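The position computation above can be sketched as follows. This is a simplified model, assuming the anchor's pose is given as a world position plus an [x, y, z, w] unit quaternion and ignoring the scale component; the patent does not prescribe this exact math:

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def quat_rotate(q, v):
    """Rotate vector v by the unit quaternion q, stored as [x, y, z, w]."""
    u, w = q[:3], q[3]
    t = [2.0 * c for c in cross(u, v)]
    ut = cross(u, t)
    return [v[i] + w * t[i] + ut[i] for i in range(3)]

def presentation_position(anchor_position, anchor_rotation, link_translation):
    """World-space point at which to present the scene: the link's translation,
    expressed in the anchor's frame, offset from the anchor's position."""
    offset = quat_rotate(anchor_rotation, link_translation)
    return [anchor_position[i] + offset[i] for i in range(3)]
```

With an identity rotation the scene is simply offset from the anchor by the link's translation; with a rotated anchor the offset turns with it, which is what lets a tracking-type link follow the anchor's pose.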
In some embodiments, the scene data information corresponding to each scene includes object identification information of the object data information included in the scene. In some embodiments, the scene data information corresponding to each scene includes object identification information (e.g., uuid) of at least one object data information included in the scene, where the object identification information may be included in the scene data information in the form of an array (e.g., hrsObjectIds, a uuid array of objects), which is not limited herein.
In some embodiments, the at least one scene includes at least one parent scene and at least one sub-scene, and the scene data information corresponding to each parent scene further includes scene identification information or scene name information of at least one sub-scene included in the parent scene; the target scene comprises a target parent scene and at least one target sub-scene; wherein the determining, according to the scene data information, the target scene associated with the target anchor point includes: determining a target parent scene associated with the target anchor point according to the scene data information; and determining at least one target sub-scene corresponding to the target parent scene according to the scene data information corresponding to the target parent scene. In some embodiments, the at least one scene includes a parent scene and at least one sub-scene, in which case the scene data information corresponding to the parent scene further includes the scene identification information (e.g., uuid) or scene name information (e.g., name) of the at least one sub-scene. In some embodiments, the parent and sub-scenes are organized in a parent-child hierarchy, which is a scene hierarchy defined directly using the nesting of json; a parent scene can be understood as the root node of its sub-scenes, and a sub-scene may itself also be a parent scene and thus may also contain at least one sub-scene. In some embodiments, the scene data information of a sub-scene includes, but is not limited to, one or more of uuid (a globally unique number), name, translation (three-dimensional position information), rotation (rotation angle), scale (zoom), follow strategy, faceCamera (whether facing the camera), hidden (whether hidden), visualRange (visual range), hrsObjectIds (a uuid array of objects), subVirtualScenes (a sub-scene array), and the like.
In some embodiments, if it is identified that a target anchor point exists among the at least one anchor point in the real environment, target scene data information having an association relationship with the anchor point data information corresponding to the target anchor point is determined according to the scene data information, and the scene corresponding to the target scene data information is used as the target parent scene associated with the target anchor point. If the target scene data information includes scene identification information or scene name information of at least one sub-scene, the at least one sub-scene may be used as a target sub-scene associated with the target anchor point; further, if at least one of those sub-scenes includes scene identification information or scene name information of at least one second sub-scene (that is, a sub-scene may itself also act as a parent scene), the at least one second sub-scene may also be used as a target sub-scene associated with the target anchor point. In some embodiments, the scene data information corresponding to each sub-scene further includes local spatial transformation attributes of the sub-scene. In some embodiments, any sub-scene or object may define a local space, and the local spatial transformation attributes include translation (three-dimensional position information), rotation (a four-element array representing the rotation angle), and scale (a three-dimensional scaling factor).
Therein, translation is a ternary array of numbers; the array [x, y, z] may describe either a point (x, y, z) or a vector (x, y, z). When used in the tree structure of a sub-scene, it is interpreted as a point whose reference coordinate system is the Cartesian coordinate system defined by its parent node; if there is no parent node, the global coordinate system is used as the reference coordinate system. scale is a ternary array of numbers; the array [x, y, z] describes the three scaling coefficients along the unit vectors of the x axis, the y axis, and the z axis, respectively. When used in the tree structure of a sub-scene, its reference coordinate system is likewise the Cartesian coordinate system defined by the parent node, or the global coordinate system if there is no parent node. rotation is a quaternary array of numbers; the array [x, y, z, w] describes the unit quaternion [w, (x, y, z)]. When the quaternion is used in the tree structure of a sub-scene, it determines a specific orientation, and its reference coordinate system is the Cartesian coordinate system defined by its parent node; if there is no parent node, the global coordinate system is used as the reference coordinate system.
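The parent/sub-scene tree described above can be walked recursively to gather every object a target parent scene ultimately contains. The field names `hrsObjectIds` and `subVirtualScenes` follow the examples in the text; the data below is illustrative:

```python
# Sketch: depth-first collection of object uuids across a scene hierarchy,
# where a sub-scene may itself act as a parent scene.

parent_scene = {
    "uuid": "scene-A",
    "hrsObjectIds": ["obj-1"],
    "subVirtualScenes": [
        {
            "uuid": "scene-A1",
            "hrsObjectIds": ["obj-2", "obj-3"],
            "subVirtualScenes": [
                {"uuid": "scene-A1a", "hrsObjectIds": ["obj-4"],
                 "subVirtualScenes": []},
            ],
        },
    ],
}

def collect_object_ids(scene):
    """Return the object uuids of this scene and all nested sub-scenes."""
    ids = list(scene.get("hrsObjectIds", []))
    for sub in scene.get("subVirtualScenes", []):
        ids.extend(collect_object_ids(sub))
    return ids
```

Because the hierarchy is defined directly by JSON nesting, this traversal needs no separate index: each node's children are literally embedded in its `subVirtualScenes` array.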
In some embodiments, the object data information includes an object type, where the object type includes any one of the following: a 3D model; text; a picture; audio; video; a web page; a PDF document; an application program; a point; a line; a polygon; an ellipse; a free brush. In some embodiments, the object data information further includes an object resource (i.e., an AR material resource) or address information of the object resource, where the address information may be a relative path; alternatively, if the encoded binary AR material resource is directly embedded in the augmented reality data, the address information may be a data URI, in which case the media type field of the URI must match the encoded content. In some embodiments, the object data information further includes local spatial transformation attributes. In some embodiments, the local spatial transformation attributes include translation (three-dimensional position information), rotation (a four-element array representing the rotation angle), and scale (a three-dimensional scaling factor).
In some embodiments, the object data information further includes presentation attribute information matched with the object type. In some embodiments, the object data information further includes presentation attribute information matched with the object type of the augmented reality presentation information, and the presentation attribute information included in the object data information of different object types may differ; when a certain augmented reality presentation information is presented, it is presented according to the presentation attribute information in the object data information corresponding to that augmented reality presentation information. For example, the presentation attribute information of text-type object data information includes, but is not limited to, one or more of text content, width, height, font size, font color, background color, border color, horizontal/vertical alignment mode, display mode, followed object, etc.; the presentation attribute information of picture/web page/PDF document-type object data information includes, but is not limited to, one or more of width, height, etc.; the presentation attribute information of audio-type object data information includes, but is not limited to, one or more of text content, width, height, volume, playback control bar, etc.; the presentation attribute information of video-type object data information includes, but is not limited to, one or more of width, height, whether to play automatically, whether to play in a loop, volume, playback toolbar, display mode, etc.; the presentation attribute information of point-type object data information includes, but is not limited to, one or more of color, size, etc.; the presentation attribute information of line/polygon/ellipse-type object data information includes, but is not limited to, one or more of color, size, etc.; and the presentation attribute information of free brush-type object data information includes, but is not limited to, one or more of color, the free brush's content data, etc.
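A renderer consuming such per-type presentation attributes might filter an object record down to the attributes its type supports. The mapping below paraphrases a few of the examples above and is deliberately incomplete and illustrative:

```python
# Illustrative map from object type to the presentation attributes a
# renderer would read; the attribute sets are not exhaustive.

PRESENTATION_ATTRIBUTES = {
    "text":  {"content", "width", "height", "fontSize", "fontColor",
              "backgroundColor"},
    "image": {"width", "height"},
    "audio": {"volume", "autoPlay"},
    "video": {"width", "height", "autoPlay", "loop", "volume"},
    "point": {"color", "size"},
}

def renderable_attributes(obj):
    """Keep only the attributes meaningful for this object's type."""
    allowed = PRESENTATION_ATTRIBUTES.get(obj["type"], set())
    return {k: v for k, v in obj.items() if k in allowed}

video_obj = {"type": "video", "width": 640, "height": 360,
             "autoPlay": True, "fontSize": 12}  # fontSize is ignored for video
```

Filtering this way keeps unknown or mismatched attributes from silently affecting presentation.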
In some embodiments, the augmented reality data further comprises action data information corresponding to at least one action, wherein at least one object data information and/or at least one scene data information is associated with the action data information; wherein the method further comprises at least one of: if the target scene data information corresponding to the target scene is associated with the target action data information, the target scene or at least one piece of augmented reality presentation information contained in the target scene executes the action corresponding to the target action data information; and if the target object data information corresponding to the target augmented reality presentation information in the at least one piece of augmented reality presentation information is associated with the target action data information, enabling the target augmented reality presentation information to execute the action corresponding to the target action data information. In some embodiments, an action may be an action performed by a certain object or, alternatively, an action performed by a certain scene, an action being a direct cause of a change in an object or scene. 
In some embodiments, the action data information includes, but is not limited to, one or more of uuid (action identification information, a globally unique number), name, type (action type, including simple action, combined action, etc.), duration (action duration, indicating how long the corresponding action takes to complete), effect (action effect), groups (action groups), etc., where a simple action is the smallest unit of action and a combined action is formed by combining a group of actions based on groups (action groups). The action effect includes, but is not limited to, one or more of type (the type of simple action), translation (three-dimensional position information), rotation (a four-element array representing the rotation angle), and scale (a three-dimensional scaling factor), where different types of simple action determine the final action effect; the types of simple action include, but are not limited to, animation (playing the 3D model's own animation), move (a pose change determined by TRS attributes), discrete (disassembling the 3D model), compound (assembling the 3D model), autorotation, revolution, appear, disappear, play, pause, stop, playPause, and the like. An action group is a combination of simple actions and includes the uuids of a plurality of simple actions; optionally, the action group further includes a play start time (relative to the action group), the uuid of the next action, and the like. In some embodiments, there is an association between the target object data information and/or the target scene data information and the target action data information, i.e., the action corresponding to the target action data information is performed by the target object and/or the target scene, and this association is embodied in the target object data information and/or the target scene data information including the action identification information (e.g., uuid) of the target action data information, thereby associating the target action with the target object and/or the target scene. In some embodiments, if the target scene data information corresponding to the target scene is associated with the target action data information, the action corresponding to the target action data information is performed by the target scene or by at least one augmented reality presentation information included in the target scene; and if the target object data information corresponding to the target augmented reality presentation information is associated with the target action data information, the action corresponding to the target action data information is performed by the target augmented reality presentation information.
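The simple/combined action structure above suggests a flattening step: a combined action is resolved into its ordered simple actions via the action groups. The field names `actionUuid` and `startTime` are illustrative readings of the group structure described in the text:

```python
# Sketch: flatten a combined action into its simple actions, ordering
# group members by their play start time relative to the group.

actions = {
    "act-1": {"uuid": "act-1", "type": "simple",
              "effect": {"type": "autorotation"}},
    "act-2": {"uuid": "act-2", "type": "simple",
              "effect": {"type": "move"}},
    "act-3": {"uuid": "act-3", "type": "combined",
              "groups": [{"actionUuid": "act-2", "startTime": 1.5},
                         {"actionUuid": "act-1", "startTime": 0.0}]},
}

def expand(action_uuid, actions):
    """Resolve an action uuid into the ordered list of simple-action uuids."""
    action = actions[action_uuid]
    if action["type"] == "simple":
        return [action_uuid]
    result = []
    for member in sorted(action["groups"], key=lambda g: g["startTime"]):
        result.extend(expand(member["actionUuid"], actions))
    return result
```

Since a group member may itself be a combined action, the resolution is recursive, mirroring the nesting the patent allows.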
In some embodiments, the target action data information includes object identification information of the target object data information and/or scene identification information of the target scene data information. In some embodiments, the association is embodied in object identification information (e.g., uuid) including target object data information and/or scene identification information (e.g., uuid) of target scene data information in the target action data information, thereby associating the target action with the target object and/or target scene.
In some embodiments, the object data information further includes specific action information matched with the object type. In some embodiments, the specific action information included in the object data information of different object types may differ. For example, the specific action information of 3D model-type object data information includes, but is not limited to, one or more of animation (playing the 3D model's own animation), move (pose adjustment), discrete (disassembling the 3D model), compound (assembling the 3D model), autorotation, revolution, appear, disappear, etc.; the specific action information of text/picture/web page/PDF document/application program/point/line/polygon/ellipse/free brush-type object data information includes, but is not limited to, move (pose adjustment), etc.; the specific action information of audio-type object data information includes, but is not limited to, one or more of play, pause, stop, playPause, etc.; and the specific action information of video-type object data information includes, but is not limited to, one or more of move (pose adjustment), play, pause, stop, playPause, etc.
In some embodiments, the augmented reality data further comprises event data information corresponding to at least one event, wherein at least one object data information and/or at least one scene data information is associated with the event data information; wherein the method further comprises at least one of: if the target scene data information corresponding to the target scene is associated with the target event data information, enabling the target scene, or at least one augmented reality presentation information contained in the target scene, to perform a specific behavior when the event corresponding to the target event data information is triggered; and if the target object data information corresponding to the target augmented reality presentation information in the at least one augmented reality presentation information is associated with the target event data information, enabling the target augmented reality presentation information to perform a specific behavior when the event corresponding to the target event data information is triggered. In some embodiments, an event is used to define a responsive action item, such as a specific behavior to perform when the corresponding event of a specific entity (object or scene) is triggered. In some embodiments, the event data information includes, but is not limited to, one or more of uuid (event identification information, a globally unique number), name, type (event type), commands (action instruction array), the uuid of the entity (object/scene) associated with the event, etc., where the event type includes a load event (such as scene loading) and a click event; in some embodiments, the event type corresponding to an object is a click event, and the event type corresponding to a scene is a load event.
The action instruction array is the set of action instructions to be executed when the event is triggered, and an action instruction is a set of several attributes, where the corresponding attributes include, but are not limited to, one or more of type (action instruction type), actionId (the uuid of the action), virtualSceneId (the uuid of the scene to jump to), objectId (the uuid of the object whose attribute parameters are to be modified), scriptId (the uuid of the script to execute), parallel, delay (delay time), and the like, and the action instruction type includes, but is not limited to, action (execute a specified action), script (execute an action inside a script), goto (jump to a specified scene), subscribe (subscribe to a message), publish (publish a message), set (set a relevant attribute of some entity), setNext (modify a relevant attribute of some scene or object to the next enumerated value), and the like. In some embodiments, there is an association between the target object data information and/or the target scene data information and the target event data information, i.e., the target event data information is bound to the target object data information and/or the target scene data information and is triggered by them, where the association is embodied in the target object data information and/or the target scene data information including the event identification information (e.g., uuid) of the target event data information, or in the target event data information including the object identification information (e.g., uuid) of the target object data information and/or the scene identification information (e.g., uuid) of the target scene data information.
In some embodiments, if the target scene data information corresponding to the target scene is associated with the target event data information, the target scene performs a specific behavior when triggering an event corresponding to the target event data information, or if the target object data information corresponding to the target augmented reality presentation information is associated with the target event data information, the target augmented reality presentation information performs a specific behavior when triggering an event corresponding to the target event data information.
In some embodiments, the specific behavior includes jumping to a specified scene, and the target event data information includes scene identification information of the scene data information corresponding to the specified scene. In some embodiments, the action instruction type in the event data information may be goto (jump to a specified scene), in which case the value of the corresponding virtualSceneId attribute in the action instruction array included in the event data information is the scene identification information (e.g., uuid) of the specified scene to jump to, so that the jump to the specified scene is performed after the event is triggered.
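The instruction dispatch implied by the goto and action instruction types can be sketched as a loop over the event's commands array. The field names (`commands`, `type`, `actionId`, `virtualSceneId`) follow the attributes listed in the text; the state handling is a hypothetical minimal model:

```python
# Sketch: execute an event's action instruction array in order,
# handling only the "action" and "goto" instruction types.

def run_event(event, state):
    """Apply each action instruction to a simple presentation state."""
    for cmd in event["commands"]:
        if cmd["type"] == "goto":
            # jump to the specified scene
            state["currentScene"] = cmd["virtualSceneId"]
        elif cmd["type"] == "action":
            # record the specified action for execution
            state["executedActions"].append(cmd["actionId"])
        # script / subscribe / publish / set / setNext omitted in this sketch
    return state

click_event = {
    "uuid": "evt-1",
    "type": "click",
    "commands": [
        {"type": "action", "actionId": "act-9"},
        {"type": "goto", "virtualSceneId": "scene-B"},
    ],
}

state = run_event(click_event, {"currentScene": "scene-A",
                                "executedActions": []})
```

Because instructions run in array order, the specified action here is queued before the jump to the specified scene takes effect.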
In some embodiments, the augmented reality data further includes action data information corresponding to at least one action; the specific behavior comprises executing a specified action, and the target event data information comprises action identification information of the action data information corresponding to the specified action. In some embodiments, the action data information includes, but is not limited to, one or more of uuid (action identification information, a globally unique number), name, type (action type, including simple action, combined action, etc.), duration (action duration, indicating how long the corresponding action takes to complete), effect (action effect), groups (action groups), etc. In some embodiments, the action instruction type in the event data information may be action (execute a specified action), in which case the value of the corresponding actionId attribute in the action instruction array included in the event data information is the action identification information (e.g., uuid) of the specified action to be executed, so that the specified action is executed after the event is triggered.
In some embodiments, the target event data information includes object identification information of the target object data information and/or scene identification information of the target scene data information. In some embodiments, there is an association between the target object data information and/or the target scene data information and the target event data information, wherein the association is embodied in the target event data information including the object identification information (e.g., uuid) of the target object data information and/or the scene identification information (e.g., uuid) of the target scene data information, thereby associating the target event with the target object and/or the target scene.
In some embodiments, the augmented reality data further includes configuration data information corresponding to at least one presentation configuration item; wherein the configuration data information includes at least one of: material data information; light data information; camera data information; script data information; wherein the presenting at least one augmented reality presentation information contained in the target scene comprises: presenting, according to the configuration data information, at least one augmented reality presentation information contained in the target scene. In some embodiments, the presentation configuration item includes, but is not limited to, one or more of light, a camera, a material, a script, and the like. When at least one anchor point in the physical space is identified according to the augmented reality data, in some embodiments, the position of the at least one augmented reality presentation information in the live-view picture shot by the camera of the computer device is determined based on the object data information corresponding to the at least one augmented reality presentation information contained in the target scene associated with the anchor point and the real-time pose information of the computer device, and the at least one augmented reality presentation information is superimposed and presented at that position according to the configuration data information; in other embodiments, based on the position information of the anchor point, the at least one augmented reality presentation information is presented according to the configuration data information at or near that position, so as to better satisfy AR requirements.
In some embodiments, the data format of the augmented reality data is the JSON type. In some embodiments, the data format of the augmented reality data is JSON (JavaScript Object Notation), that is, a data format in the form of key-value pairs, and one or more of the scene data information, object data information, link data information, action data information, event data information, etc. in the augmented reality data is of the JSON type; the plain-text JSON file description is compact and easy to parse.
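Because the augmented reality data is plain key-value JSON text, it round-trips through a standard JSON library without loss. A minimal sketch, using a toy fragment rather than the full schema:

```python
import json

# Toy fragment of augmented reality data in key-value form.
ar_data = {
    "anchors": [{"uuid": "anchor-001", "type": "picture"}],
    "scenes": [{"uuid": "scene-A", "hrsObjectIds": ["obj-1"]}],
}

text = json.dumps(ar_data)      # compact plain-text description
restored = json.loads(text)     # easy to parse back into structured data
```

The same text form serves both as a file format and as a delivery payload in API calls, which is what lets different rendering engines on different ends consume identical content.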
In some embodiments, the augmented reality data points to external binary data. In some embodiments, the augmented reality data points to external binary data in order to reference AR material resources such as 3D models, images, video, audio, etc., and a separate request needs to be initiated to obtain the binary data when it is referenced.
In some embodiments, the augmented reality data includes embedded encoded binary data in an inline manner. In some embodiments, the augmented reality data embeds encoded AR material resources such as 3D models, images, video, audio, etc. by way of inlining (a uniform resource identifier (URI) or internationalized resource identifier (IRI)), which requires additional space for encoding and additional processing for decoding. In some embodiments, to avoid this file size and processing overhead, a container format is introduced that allows the augmented reality data to be stored in a single binary file, while external resources can still be referenced; when the augmented reality data is used, the resources are loaded directly from the binary file into the corresponding rendering container without additional parsing or processing. The combination of JSON text and binary effectively ensures the richness and integrity of AR scenes while preserving the independence of object resources.
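The inline embedding above, where a binary resource is carried as a data URI whose media type must match the encoded content, can be sketched with base64. The payload bytes here are a deliberately truncated PNG header used purely for illustration:

```python
import base64

# Encode a binary AR resource as a data URI for inline embedding.
payload = b"\x89PNG\r\n\x1a\n"  # first bytes of a PNG, for illustration only
data_uri = "data:image/png;base64," + base64.b64encode(payload).decode("ascii")

def decode_data_uri(uri):
    """Split a base64 data URI into its media type and raw bytes."""
    header, encoded = uri.split(",", 1)
    media_type = header[len("data:"):].split(";")[0]
    return media_type, base64.b64decode(encoded)

media_type, raw = decode_data_uri(data_uri)
```

The encode/decode pair makes the space and processing overhead of inlining concrete: the base64 text is about a third larger than the raw bytes, and every load pays a decoding step, which is the cost the single-binary container format avoids.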
Fig. 2 shows a block diagram of a computer device for presenting augmented reality data, the device comprising a one-to-one module 11, a two-to-two module 12, a three-to-three module 13 and a four-to-four module 14, according to one embodiment of the application. A one-to-one module 11, configured to obtain augmented reality data, where the augmented reality data includes anchor point data information corresponding to at least one anchor point, scene data information corresponding to at least one scene, and object data information corresponding to at least one augmented reality presentation information, where an association relationship exists between the anchor point data information and the scene data information, and an inclusion relationship exists between the scene data information and the object data information; a second module 12, configured to identify the at least one anchor point in the real environment according to the anchor point data information; a third module 13, configured to determine, if a target anchor point is identified, a target scene associated with the target anchor point according to the scene data information; and a fourth module 14, configured to present at least one augmented reality presentation information contained in the target scene according to the object data information.
And the one-to-one module 11 is configured to obtain augmented reality data, where the augmented reality data includes anchor point data information corresponding to at least one anchor point, scene data information corresponding to at least one scene, and object data information corresponding to at least one augmented reality presentation information, where an association relationship exists between the anchor point data information and the scene data information, and an inclusion relationship exists between the scene data information and the object data information. In some embodiments, the augmented reality data is a data format generated based on predetermined AR description criteria including, but not limited to: (1) Describing the manner in which various digital objects exist in space (including, but not limited to, one or more of position, angle, size, follow, orientation, follow strategy, whether hidden, etc.); (2) Defining one or more of presentation properties, events, actions of the digital object, such as actions of whether video is played circularly, ioT data refresh frequency, rotation/disassembly/composition of the 3D model, etc.; (3) Defining relationships between digital objects, such as label follow-up, event association; (4) Defining interaction logic including human interaction with the digital object, attribute modification of the digital object, event triggering, and the like; (5) One or more of the relationships of the anchor point and the virtual scene are described. The standard is not only a file format, but also a delivery format of data content in API call, and can provide efficient, extensible and interoperable formats for content transmission and loading required by AR, so that differences among different rendering engines at different ends are bridged, effective utilization of resources is facilitated, and repeated development is avoided. 
In some embodiments, the augmented reality data describes how to augment reality with rich, interactable AR material, which can be broken down into three points: how to add AR material by AR means; what the AR materials are and how they are presented; and how the AR materials relate to each other and interact with reality. In some embodiments, an AR application may be expressed by the augmented reality data. In some embodiments, the augmented reality data is purely data content in a predetermined format and does not mandate any runtime environment, which makes it usable by any application, including applications using any rendering technology.
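As an illustrative sketch only (the standard's data format is stated in a later embodiment to be JSON-typed), such a document might be serialized and parsed as follows; all field names and values here are hypothetical, not a normative schema:

```python
import json

# Minimal, hypothetical augmented-reality-data document: anchors, scenes,
# and objects, with the association and inclusion relationships expressed
# by uuid references. Illustrative only.
ar_data = {
    "anchors": [{"uuid": "a-1", "type": "picture"}],
    "scenes": [{"uuid": "s-1", "anchorIds": ["a-1"], "hrsObjectIds": ["o-1"]}],
    "objects": [{"uuid": "o-1", "type": "video", "uri": "https://example.com/v.mp4"}],
}

serialized = json.dumps(ar_data)   # delivery format, e.g. in an API response
restored = json.loads(serialized)  # any client or rendering engine can parse it back
```

Because the payload is plain data, any rendering engine on any end can consume it without sharing code with the producer.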
In some embodiments, an anchor (AR anchor) expresses an association between a physical space and a digital space. The anchor data information includes, but is not limited to, one or more of uuid (anchor identification information, a globally unique number), name, type (different anchor types correspond to different recognition algorithms), url (resource address of the anchor), snapshotImage (guide-map address of the anchor), width (width of the anchor in physical space), height (height of the anchor in physical space), attributes (attributes of the anchor; different anchor types may correspond to different attributes), etc. Those skilled in the art should understand that the anchor data information may include only one of uuid, name, type, url, snapshotImage, width, height and attributes, or any combination of these items, and is not limited thereto.
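A minimal sketch of an anchor record using the field names listed above; every value is hypothetical, not taken from a real deployment:

```python
# Hypothetical anchor record; field names follow the description above.
anchor = {
    "uuid": "c9f0f895-0000-4000-8000-000000000001",
    "name": "workshop-gate-marker",
    "type": "picture",          # the type selects the recognition algorithm
    "url": "https://example.com/anchors/gate.png",          # illustrative address
    "snapshotImage": "https://example.com/anchors/gate-guide.png",
    "width": 0.5,               # width the anchor occupies in physical space (meters)
    "height": 0.5,
    "attributes": {},           # type-specific attributes
}

def has_minimal_anchor_fields(a: dict) -> bool:
    """The description allows any subset of fields; this check requires only
    the identifying fields needed to run recognition at all."""
    return all(k in a for k in ("uuid", "type"))
```

A consumer could use such a check to skip malformed records before attempting recognition.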
In some embodiments, a scene (virtual scene) is an entity containing the AR material objects to be rendered, i.e. a set of AR material objects. The scene data information includes, but is not limited to, one or more of uuid (scene identification information, a globally unique number), name, hrsObjectIds (uuid array of the contained objects), translation (three-dimensional position information), rotation (rotation angle), scale (scaling), follow strategy, faceCamera (whether facing the camera), hidden (whether hidden), visualRange (visible distance), subScenes (array of sub-scenes), etc. The follow strategy determines how an entity (an anchor point, scene, object, action, event, etc. in the augmented reality data may each be called an entity) is displayed at a spatial position, and includes, but is not limited to, one or more of the following follow strategy types: follow space (position relative to the space coordinate system), follow screen (the entity is always displayed on screen; its position may be expressed by a uv offset and screen-level alignment), follow camera (position relative to the camera, expressed by a transformation), follow object (position relative to a certain object, expressed by a transformation), and follow link/anchor point (position relative to a certain anchor point, expressed by translation, rotation and scale). When faceCamera is true, the entity adjusts its angle along with the camera so that its front always faces the camera; when hidden is true, the entity is not rendered; visualRange indicates the distance within which the entity can be seen, i.e. the entity is visible when the camera is less than visualRange away from it. The sub-scenes constitute a subset of the AR material objects to be rendered, and one sub-scene exists in only one scene.
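The hidden / visualRange semantics above can be sketched as a small predicate; the record fields are hypothetical:

```python
# Hypothetical scene record illustrating the hidden and visualRange fields.
scene = {
    "uuid": "s-0001",
    "name": "pump-room-overlay",
    "hidden": False,
    "visualRange": 10.0,   # entity visible while the camera is closer than this (meters)
}

def should_render(entity: dict, camera_distance: float) -> bool:
    """An entity is rendered only when it is not hidden and the camera is
    within its visualRange, per the semantics described above."""
    if entity.get("hidden", False):
        return False
    return camera_distance < entity.get("visualRange", float("inf"))
```

The same predicate applies to any entity carrying these fields, e.g. an individual object as well as a scene.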
In some embodiments, a scene is triggered by an AR anchor, and an association exists between anchor data information and scene data information: one anchor data information may be associated with at least one scene data information, and one scene data information may likewise be associated with at least one anchor data information; i.e. one anchor may be associated with at least one scene, and one scene may be associated with at least one anchor. The association is embodied in the augmented reality data either by including the scene identification information (e.g., uuid) of the associated scene data information in the anchor data information, or by including the anchor identification information (e.g., uuid) of the associated anchor data information in the scene data information.
In some embodiments, the types of augmented reality presentation information (i.e., AR material objects) include, but are not limited to, one or more of 3D models, text, pictures, audio, video, web pages, PDF documents, applications, points, lines, polygons, ellipses, free brushes, and the like. In some embodiments, the object data information has an inclusion relationship with the scene data information: one scene data information may include at least one object data information, that is, one scene may contain at least one augmented reality presentation information, and the scene data information may include some or all of the object data information, such as the object identification information. In some embodiments, an augmented reality presentation information is the minimum unit to be rendered in the scene, and its object data information defines a single entity with a specific function together with its position, angle and scaling. The object data information includes, but is not limited to, one or more of uuid (object identification information, a globally unique number), name, translation (three-dimensional position information), rotation (rotation angle), scale (scaling), follow strategy, faceCamera (whether facing the camera), hidden (whether hidden), type (type of the object), uri (resource address of the object), visualRange (visible distance), attributes (attribute set of the object), etc., where the type of the object determines its rendering effect, presentation attributes and specific actions.
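Since the object type determines how an object is rendered, a consumer would typically gate rendering on a known type. A hypothetical sketch (the type-name spellings are assumptions, not the standard's normative identifiers):

```python
# Assumed spellings for the object types listed above; illustrative only.
SUPPORTED_OBJECT_TYPES = {
    "3d_model", "text", "picture", "audio", "video", "web_page",
    "pdf", "application", "point", "line", "polygon", "ellipse", "free_brush",
}

# Hypothetical object record; the attribute (loop) is matched to the video type.
obj = {
    "uuid": "o-0001",
    "name": "pump-intro-video",
    "type": "video",
    "uri": "https://example.com/assets/pump.mp4",
    "attributes": {"loop": True},
}

def is_renderable_object(o: dict) -> bool:
    """An object with an unknown type has no defined rendering effect, so it
    cannot be rendered."""
    return o.get("type") in SUPPORTED_OBJECT_TYPES
```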
The second module 12 is configured to identify the at least one anchor point in the real environment according to the anchor point data information. In some embodiments, the anchor data information includes the anchor resource corresponding to at least one anchor, or address information of the anchor resource, through which the corresponding anchor resource (such as a two-dimensional code, an identification map, a real-environment feature point, a point cloud, location information, wireless signal information, etc. in a physical space) can be obtained. In some embodiments, anchor identification is performed on the real environment (e.g., a camera picture) according to the anchor resources corresponding to the at least one anchor and/or other information in the anchor data information, to determine whether the at least one anchor exists in the real environment. In some embodiments, if the number of anchors stored in the database is greater than or equal to a predetermined number threshold, the stored anchors may be divided into a plurality of groups, for example, by scene, project, location, and the like, each group including at least one anchor. When identifying anchors, the corresponding group may first be selected or identified, and anchor identification is then performed on the real environment within that group, so as to improve recognition efficiency.
For example, if the database stores anchors created at two locations (such as factory A and factory B), the stored anchors can be divided into two groups according to the location information. When a user performs anchor identification at factory A, only the anchor set corresponding to factory A needs to be searched and matched against the information scanned from the real environment (e.g., the camera picture), instead of searching all anchors created at both factories, thereby improving recognition efficiency.
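The grouping step in this example can be sketched as a simple partition; the grouping key and field names are hypothetical:

```python
from collections import defaultdict

def group_anchors(anchors, key="location"):
    """Partition stored anchors by a grouping key (location, project, scene, ...)
    so that recognition only searches the relevant group."""
    groups = defaultdict(list)
    for a in anchors:
        groups[a.get(key, "ungrouped")].append(a)
    return dict(groups)

# Hypothetical stored anchors from the two-factory example above.
stored = [
    {"uuid": "a-1", "location": "factoryA"},
    {"uuid": "a-2", "location": "factoryA"},
    {"uuid": "a-3", "location": "factoryB"},
]
by_site = group_anchors(stored)
# A user scanning at factory A searches only by_site["factoryA"] (2 anchors),
# not all 3 stored anchors.
```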
The third module 13 is configured to determine, if a target anchor point is identified, a target scene associated with the target anchor point according to the scene data information. In some embodiments, if a target anchor point among the at least one anchor point is identified in the real environment, target scene data information associated with the anchor data information corresponding to the target anchor point is determined from the scene data information, and the scene corresponding to the target scene data information is taken as the target scene associated with the target anchor point. Specifically, if the scene data information includes the anchor identification information of the associated anchor data information, the at least one scene data information corresponding to the at least one scene is matched according to the anchor identification information of the target anchor point, to obtain the target scene data information containing that anchor identification information; alternatively, if the anchor data information includes the scene identification information of the associated scene data information, the target scene data information containing that scene identification information is determined among the at least one scene data information according to the scene identification information in the anchor data information corresponding to the target anchor point. In either case, the scene corresponding to the target scene data information is taken as the target scene associated with the target anchor point.
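Both directions of the association can be resolved with one lookup; the reference-field names (anchorIds, sceneIds) are illustrative encodings of the identification information described above:

```python
def find_target_scenes(target_anchor: dict, scenes: list) -> list:
    """Resolve the scenes associated with a recognized anchor, covering both
    directions: a scene may carry the anchor's uuid (scene-side reference),
    or the anchor may carry scene uuids (anchor-side reference)."""
    anchor_uuid = target_anchor["uuid"]
    wanted_scene_ids = set(target_anchor.get("sceneIds", []))
    return [s for s in scenes
            if anchor_uuid in s.get("anchorIds", []) or s["uuid"] in wanted_scene_ids]

# Hypothetical data: s-1 references the anchor; the anchor references s-2.
scenes = [
    {"uuid": "s-1", "anchorIds": ["a-1"]},
    {"uuid": "s-2", "anchorIds": []},
]
targets = find_target_scenes({"uuid": "a-1", "sceneIds": ["s-2"]}, scenes)
```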
The fourth module 14 is configured to present, according to the object data information, at least one augmented reality presentation information contained in the target scene. In some embodiments, the scene data information corresponding to a scene may include the object identification information of the object data information corresponding to the at least one augmented reality presentation information contained in the scene; according to the at least one object identification information included in the scene data information corresponding to the target scene, at least one object data information containing that object identification information is determined from the object data information corresponding to the at least one augmented reality presentation information. In some embodiments, the object data information includes the object resource (i.e., the AR material resource) or address information of the object resource, through which the corresponding object resource can be acquired. In some embodiments, the position of the at least one augmented reality presentation information in the camera real scene shot by the computer device is determined according to the object data information corresponding to the at least one augmented reality presentation information contained in the target scene associated with the target anchor point and the real-time pose information of the computer device, and the at least one augmented reality presentation information is presented in a superimposed manner at that position. In other embodiments, the at least one augmented reality presentation information is presented at, or in an area near, the position of the target anchor point (such as the position at which the target anchor point was identified in the real environment).
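The id-to-record resolution step can be sketched as follows; field names are the illustrative ones used earlier:

```python
def resolve_scene_objects(scene: dict, objects_by_uuid: dict) -> list:
    """Collect the object data records for every object id listed in the
    scene's hrsObjectIds; ids without a matching record are skipped."""
    return [objects_by_uuid[oid]
            for oid in scene.get("hrsObjectIds", [])
            if oid in objects_by_uuid]

# Hypothetical object store and a scene referencing two known ids and one
# missing id.
objects_by_uuid = {
    "o-1": {"uuid": "o-1", "type": "picture"},
    "o-2": {"uuid": "o-2", "type": "video"},
}
found = resolve_scene_objects({"hrsObjectIds": ["o-1", "o-2", "o-missing"]},
                              objects_by_uuid)
```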
In some embodiments, the scene data information corresponding to the target scene includes local spatial transformation attributes of the target scene (for example, a translation (three-dimensional position information) attribute, a rotation (a quaternion representing the rotation angle) attribute, and a scale (three-dimensional scaling) attribute). The target scene is spatially transformed according to these attributes, and the at least one augmented reality presentation information is presented in the transformed target scene. In some embodiments, the object data information corresponding to each augmented reality presentation information includes the local spatial transformation attributes of that object; each object is spatially transformed according to its attributes, and the transformed at least one augmented reality presentation information is presented in the target scene.
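One common reading of these attributes, applied as scale, then rotation, then translation, can be sketched as follows. The quaternion component order (x, y, z, w) is an assumption, not stated by the description:

```python
def _cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def apply_local_transform(point, translation, rotation, scale):
    """Apply scale, then rotation by a unit quaternion (assumed (x, y, z, w)
    order), then translation to a 3D point."""
    p = [point[i] * scale[i] for i in range(3)]
    qv, w = rotation[:3], rotation[3]
    # Rotate: v' = v + w*t + qv x t, where t = 2*(qv x v)
    t = [2 * c for c in _cross(qv, p)]
    u = _cross(qv, t)
    p = [p[i] + w * t[i] + u[i] for i in range(3)]
    return [p[i] + translation[i] for i in range(3)]
```

For example, with identity rotation [0, 0, 0, 1], scale [2, 2, 2] and translation [1, 0, 0], the point [1, 1, 1] maps to [3, 2, 2].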
In some embodiments, the anchor data information includes an anchor type; wherein the anchor point type includes any one of the following: a picture; picture feature points; a point cloud; a point cloud map; a two-dimensional code; a cylinder; a cube; a geographic location; a face; a bone; a wireless signal. The related operations are the same as or similar to those of the embodiment shown in fig. 1, and thus are not described in detail herein, and are incorporated by reference.
In some embodiments, the augmented reality data further includes at least one link data information describing an association between the anchor data information and the scene data information; wherein the determining the target scene associated with the target anchor point comprises: and determining a target scene associated with the target anchor point according to the at least one link data information. The related operations are the same as or similar to those of the embodiment shown in fig. 1, and thus are not described in detail herein, and are incorporated by reference.
In some embodiments, each link data information includes anchor point identification information of anchor point data information corresponding to an anchor point and scene identification information of scene data information corresponding to a scene associated with the anchor point; wherein the determining, according to the at least one link data information, a target scene associated with the target anchor point includes: determining target link data information corresponding to the target anchor point from the at least one link data information; and determining a target scene associated with the target anchor point according to the target link data information. The related operations are the same as or similar to those of the embodiment shown in fig. 1, and thus are not described in detail herein, and are incorporated by reference.
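Resolving a target scene through such link records can be sketched as a lookup; the field names (anchorId, sceneId, type) are hypothetical encodings of the identification information described above:

```python
def resolve_via_links(target_anchor_uuid: str, links: list, scenes_by_uuid: dict):
    """Walk the link records to find the scene associated with the recognized
    anchor. Each link carries the anchor uuid and the associated scene uuid."""
    for link in links:
        if link["anchorId"] == target_anchor_uuid:
            return scenes_by_uuid.get(link["sceneId"])
    return None

# Hypothetical link and scene store.
links = [{"anchorId": "a-1", "sceneId": "s-1", "type": "scene"}]
scenes_by_uuid = {"s-1": {"uuid": "s-1", "name": "demo"}}
hit = resolve_via_links("a-1", links, scenes_by_uuid)
```

Keeping the association in separate link records, rather than inside the anchor or scene records, lets one anchor link to many scenes (and vice versa) without rewriting either record.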
In some embodiments, each link data information further includes a link type, where the link type includes any one of a scene type and a tracking type; wherein the determining, according to the target link data information, a target scene associated with the target anchor point includes: and determining a target scene associated with the target anchor point and presentation attribute information corresponding to the target scene according to the target link data information. The related operations are the same as or similar to those of the embodiment shown in fig. 1, and thus are not described in detail herein, and are incorporated by reference.
In some embodiments, each link data information further includes a pose relationship between the anchor point and a scene associated with the anchor point; wherein the determining, according to the target link data information, a target scene associated with the target anchor point includes: and determining a target scene associated with the target anchor point and presentation position information corresponding to the target scene according to the target link data information. The related operations are the same as or similar to those of the embodiment shown in fig. 1, and thus are not described in detail herein, and are incorporated by reference.
In some embodiments, the scene data information corresponding to each scene includes object identification information of object data information included in the scene. The related operations are the same as or similar to those of the embodiment shown in fig. 1, and thus are not described in detail herein, and are incorporated by reference.
In some embodiments, the at least one scene includes at least one parent scene and at least one sub-scene, and the scene data information corresponding to each parent scene further includes scene identification information or scene name information of one or more sub-scenes included in the parent scene; the target scene comprises a target father scene and at least one target son scene; wherein the determining, according to the scene data information, the target scene associated with the target anchor point includes: determining a target parent scene associated with the target anchor point according to the scene data information; and determining at least one target sub-scene corresponding to the target parent scene according to the scene data information corresponding to the target parent scene. The related operations are the same as or similar to those of the embodiment shown in fig. 1, and thus are not described in detail herein, and are incorporated by reference.
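The parent/sub-scene resolution above can be sketched as a depth-first walk; the subSceneIds field is a hypothetical encoding of the sub-scene identification information:

```python
def collect_scene_tree(parent: dict, scenes_by_uuid: dict) -> list:
    """Return the parent scene followed by all of its sub-scenes, depth-first.
    Missing references are skipped."""
    out = [parent]
    for sid in parent.get("subSceneIds", []):
        child = scenes_by_uuid.get(sid)
        if child is not None:
            out.extend(collect_scene_tree(child, scenes_by_uuid))
    return out

# Hypothetical scene store with one parent and two sub-scenes.
scenes_by_uuid = {
    "s-parent": {"uuid": "s-parent", "subSceneIds": ["s-a", "s-b"]},
    "s-a": {"uuid": "s-a", "subSceneIds": []},
    "s-b": {"uuid": "s-b", "subSceneIds": []},
}
tree = collect_scene_tree(scenes_by_uuid["s-parent"], scenes_by_uuid)
```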
In some embodiments, the object data information includes an object type; wherein the object type includes any one of the following: a 3D model; characters; a picture; audio frequency; video; a web page; PDF documents; an application program; a dot; a wire; a polygon; an ellipse; a free brush. The related operations are the same as or similar to those of the embodiment shown in fig. 1, and thus are not described in detail herein, and are incorporated by reference.
In some embodiments, the object data information further includes presentation attribute information that matches the object type. The related operations are the same as or similar to those of the embodiment shown in fig. 1, and thus are not described in detail herein, and are incorporated by reference.
In some embodiments, the augmented reality data further comprises action data information corresponding to at least one action, wherein at least one object data information and/or at least one scene data information is associated with the action data information; wherein the method further comprises at least one of: if the target scene data information corresponding to the target scene is associated with the target action data information, the target scene or at least one piece of augmented reality presentation information contained in the target scene executes the action corresponding to the target action data information; and if the target object data information corresponding to the target augmented reality presentation information in the at least one piece of augmented reality presentation information is associated with the target action data information, enabling the target augmented reality presentation information to execute the action corresponding to the target action data information. The related operations are the same as or similar to those of the embodiment shown in fig. 1, and thus are not described in detail herein, and are incorporated by reference.
In some embodiments, the target action data information includes object identification information of the target object data information and/or scene identification information of the target scene data information. The related operations are the same as or similar to those of the embodiment shown in fig. 1, and thus are not described in detail herein, and are incorporated by reference.
In some embodiments, the object data information further includes specific action information that matches the object type. The related operations are the same as or similar to those of the embodiment shown in fig. 1, and thus are not described in detail herein, and are incorporated by reference.
In some embodiments, the augmented reality data further comprises event data information corresponding to at least one event, wherein at least one object data information and/or at least one scene data information is associated with the event data information; wherein the method further comprises at least one of: if the target scene data information corresponding to the target scene is associated with target event data information, causing the target scene, when the event corresponding to the target event data information is triggered, to perform a specific behavior on the target scene itself or on at least one augmented reality presentation information contained in it; and if the target object data information corresponding to a target augmented reality presentation information among the at least one augmented reality presentation information is associated with target event data information, causing the target augmented reality presentation information to perform a specific behavior when the event corresponding to the target event data information is triggered. The related operations are the same as or similar to those of the embodiment shown in fig. 1, and thus are not described in detail herein, and are incorporated by reference.
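A minimal event-dispatch sketch for this mechanism; the events/behavior encoding on the entity record is hypothetical:

```python
def trigger(entity: dict, event_name: str, behaviors: dict):
    """Look up the event with the given name on an entity (scene or object)
    and run its associated behavior, if one is registered."""
    for evt in entity.get("events", []):
        if evt.get("name") == event_name:
            behavior = behaviors.get(evt.get("behavior"))
            if behavior is not None:
                return behavior(entity, evt)
    return None

# Hypothetical behavior table and an object whose tap event jumps to a scene.
log = []
behaviors = {"jumpToScene": lambda e, evt: log.append(("jump", evt["sceneId"]))}
obj = {"uuid": "o-1",
       "events": [{"name": "onTap", "behavior": "jumpToScene", "sceneId": "s-2"}]}
trigger(obj, "onTap", behaviors)
```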
In some embodiments, performing the specific behavior includes jumping to a specified scene, and the target event data information includes the scene identification information of the scene data information corresponding to the specified scene. The related operations are the same as or similar to those of the embodiment shown in fig. 1, and thus are not described in detail herein, and are incorporated by reference.
In some embodiments, the augmented reality data further includes action data information corresponding to at least one action; performing the specific behavior includes executing a specified action, and the target event data information includes the action identification information of the action data information corresponding to the specified action. The related operations are the same as or similar to those of the embodiment shown in fig. 1, and thus are not described in detail herein, and are incorporated by reference.
In some embodiments, the target event data information includes object identification information of the target object data information and/or scene identification information of the target scene data information. The related operations are the same as or similar to those of the embodiment shown in fig. 1, and thus are not described in detail herein, and are incorporated by reference.
In some embodiments, the augmented reality data further includes configuration data information corresponding to at least one presentation configuration item; wherein the configuration data information includes at least one of: material data information; light data information; camera data information; script data information; wherein the presenting the at least one augmented reality presentation information contained in the target scene includes: presenting, according to the configuration data information, the at least one augmented reality presentation information contained in the target scene. The related operations are the same as or similar to those of the embodiment shown in fig. 1, and thus are not described in detail herein, and are incorporated by reference.
In some embodiments, the data format of the augmented reality data is JSON type. The related operations are the same as or similar to those of the embodiment shown in fig. 1, and thus are not described in detail herein, and are incorporated by reference.
In some embodiments, the augmented reality data points to external binary data. The related operations are the same as or similar to those of the embodiment shown in fig. 1, and thus are not described in detail herein, and are incorporated by reference.
In some embodiments, the augmented reality data includes embedded encoded binary data in an inline manner. The related operations are the same as or similar to those of the embodiment shown in fig. 1, and thus are not described in detail herein, and are incorporated by reference.
In addition to the methods and apparatus described in the above embodiments, the present application also provides a computer-readable storage medium storing computer code which, when executed, performs a method as described in any one of the preceding claims.
The application also provides a computer program product which, when executed by a computer device, performs a method as claimed in any preceding claim.
The present application also provides a computer device comprising:
One or more processors;
a memory for storing one or more computer programs;
the one or more computer programs, when executed by the one or more processors, cause the one or more processors to implement the method of any preceding claim.
FIG. 3 illustrates an exemplary system that may be used to implement various embodiments described herein. In some embodiments, as shown in FIG. 3, system 300 can function as any of the devices of the various described embodiments. In some embodiments, system 300 may include one or more computer-readable media (e.g., system memory or NVM/storage 320) having instructions and one or more processors (e.g., processor(s) 305) coupled with the one or more computer-readable media and configured to execute the instructions to implement the modules and perform the actions described in the present application.
For one embodiment, the system control module 310 may include any suitable interface controller to provide any suitable interface to at least one of the processor(s) 305 and/or any suitable device or component in communication with the system control module 310.
The system control module 310 may include a memory controller module 330 to provide an interface to the system memory 315. Memory controller module 330 may be a hardware module, a software module, and/or a firmware module.
The system memory 315 may be used, for example, to load and store data and/or instructions for the system 300. For one embodiment, system memory 315 may include any suitable volatile memory, such as, for example, a suitable DRAM. In some embodiments, the system memory 315 may comprise a double data rate type four synchronous dynamic random access memory (DDR4 SDRAM).
For one embodiment, system control module 310 may include one or more input/output (I/O) controllers to provide an interface to NVM/storage 320 and communication interface(s) 325.
For example, NVM/storage 320 may be used to store data and/or instructions. NVM/storage 320 may include any suitable nonvolatile memory (e.g., flash memory) and/or may include any suitable nonvolatile storage device(s) (e.g., one or more Hard Disk Drives (HDDs), one or more Compact Disc (CD) drives, and/or one or more Digital Versatile Disc (DVD) drives).
NVM/storage 320 may include storage resources that are physically part of the device on which system 300 is installed or which may be accessed by the device without being part of the device. For example, NVM/storage 320 may be accessed over a network via communication interface(s) 325.
Communication interface(s) 325 may provide an interface for system 300 to communicate over one or more networks and/or with any other suitable device. The system 300 may wirelessly communicate with one or more components of a wireless network in accordance with any of one or more wireless network standards and/or protocols.
For one embodiment, at least one of the processor(s) 305 may be packaged together with logic of one or more controllers (e.g., memory controller module 330) of the system control module 310. For one embodiment, at least one of the processor(s) 305 may be packaged together with logic of one or more controllers of the system control module 310 to form a System In Package (SiP). For one embodiment, at least one of the processor(s) 305 may be integrated on the same die as logic of one or more controllers of the system control module 310. For one embodiment, at least one of the processor(s) 305 may be integrated on the same die with logic of one or more controllers of the system control module 310 to form a system on chip (SoC).
In various embodiments, the system 300 may be, but is not limited to being: a server, workstation, desktop computing device, or mobile computing device (e.g., laptop computing device, handheld computing device, tablet, netbook, etc.). In various embodiments, system 300 may have more or fewer components and/or different architectures. For example, in some embodiments, system 300 includes one or more cameras, keyboards, Liquid Crystal Display (LCD) screens (including touch screen displays), non-volatile memory ports, multiple antennas, graphics chips, Application Specific Integrated Circuits (ASICs), and speakers.
It should be noted that the present application may be implemented in software and/or a combination of software and hardware, e.g., using Application Specific Integrated Circuits (ASIC), a general purpose computer or any other similar hardware device. In one embodiment, the software program of the present application may be executed by a processor to perform the steps or functions described above. Likewise, the software programs of the present application (including associated data structures) may be stored on a computer readable recording medium, such as RAM memory, magnetic or optical drive or diskette and the like. In addition, some steps or functions of the present application may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various steps or functions.
Furthermore, portions of the present application may be implemented as a computer program product, such as computer program instructions, which when executed by a computer, may invoke or provide methods and/or techniques in accordance with the present application by way of operation of the computer. Those skilled in the art will appreciate that the form of computer program instructions present in a computer readable medium includes, but is not limited to, source files, executable files, installation package files, etc., and accordingly, the manner in which the computer program instructions are executed by a computer includes, but is not limited to: the computer directly executes the instruction, or the computer compiles the instruction and then executes the corresponding compiled program, or the computer reads and executes the instruction, or the computer reads and installs the instruction and then executes the corresponding installed program. Herein, a computer-readable medium may be any available computer-readable storage medium or communication medium that can be accessed by a computer.
Communication media includes media whereby a communication signal containing, for example, computer readable instructions, data structures, program modules, or other data, is transferred from one system to another. Communication media may include conductive transmission media such as electrical cables and wires (e.g., optical fibers, coaxial, etc.) and wireless (non-conductive transmission) media capable of transmitting energy waves, such as acoustic, electromagnetic, RF, microwave, and infrared. Computer readable instructions, data structures, program modules, or other data may be embodied as a modulated data signal, for example, in a wireless medium, such as a carrier wave or similar mechanism, such as that embodied as part of spread spectrum technology. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. The modulation may be analog, digital or hybrid modulation techniques.
By way of example, and not limitation, computer-readable storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for the storage of information such as computer-readable instructions, data structures, program modules, or other data. For example, computer-readable storage media include, but are not limited to, volatile memory, such as random access memory (RAM, DRAM, SRAM); nonvolatile memory, such as flash memory, various read-only memories (ROM, PROM, EPROM, EEPROM), and magnetic and ferromagnetic/ferroelectric memories (MRAM, FeRAM); magnetic and optical storage devices (hard disk, tape, CD, DVD); and any other media, now known or later developed, capable of storing computer-readable information/data for use by a computer system.
An embodiment according to the application comprises an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the apparatus to perform the methods and/or technical solutions according to the embodiments of the application described above.
It will be evident to those skilled in the art that the application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from its spirit or essential characteristics. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description; all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the apparatus claims may also be implemented by a single unit or means in software or hardware. The terms "first", "second", and the like are used to denote names and do not indicate any particular order.
Claims (21)
1. A method for presenting augmented reality data, wherein the method comprises:
obtaining augmented reality data, wherein the augmented reality data comprises anchor point data information corresponding to at least one anchor point, scene data information corresponding to at least one scene and object data information corresponding to at least one augmented reality presentation information, wherein an association relationship exists between the anchor point data information and the scene data information, and an inclusion relationship exists between the scene data information and the object data information;
identifying the at least one anchor point in the real environment according to the anchor point data information;
if a target anchor point is identified, determining a target scene associated with the target anchor point according to the scene data information;
and according to the object data information, presenting at least one piece of augmented reality presentation information contained in the target scene.
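The four steps of claim 1 can be sketched as follows. This is an illustrative, non-normative sketch: the claims do not prescribe a concrete schema, so all field names (`anchors`, `links`, `scenes`, `objects`, `object_ids`) are assumptions.

```python
# Illustrative sketch of claim 1: resolve a recognized (target) anchor to its
# associated (target) scene, then collect the presentation objects the scene
# contains. All field names are assumptions, not part of the claims.

def present_for_anchor(ar_data: dict, recognized_anchor_id: str) -> list:
    """Return the object data records to present for a recognized anchor."""
    # Association relationship: anchor -> scene (here via a link list).
    scene_id = next(
        (link["scene_id"] for link in ar_data.get("links", [])
         if link["anchor_id"] == recognized_anchor_id),
        None,
    )
    if scene_id is None:
        return []  # no target scene associated with this anchor
    # Inclusion relationship: scene -> objects (by object identifiers).
    scene = next(s for s in ar_data["scenes"] if s["id"] == scene_id)
    objects_by_id = {o["id"]: o for o in ar_data["objects"]}
    return [objects_by_id[oid] for oid in scene.get("object_ids", [])]


ar_data = {
    "anchors": [{"id": "a1", "type": "picture"}],
    "links": [{"anchor_id": "a1", "scene_id": "s1"}],
    "scenes": [{"id": "s1", "object_ids": ["o1", "o2"]}],
    "objects": [
        {"id": "o1", "type": "3d_model", "uri": "model.glb"},
        {"id": "o2", "type": "text", "content": "Hello"},
    ],
}
print([o["id"] for o in present_for_anchor(ar_data, "a1")])  # ['o1', 'o2']
```

An anchor with no associated scene simply yields nothing to present, matching the conditional "if a target anchor is identified" structure of the claim.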
2. The method of claim 1, wherein the anchor data information includes an anchor type;
wherein the anchor point type includes any one of the following:
a picture;
picture feature points;
a point cloud;
a point cloud map;
a two-dimensional code;
a cylinder;
a cube;
a geographic location;
a face;
a skeleton;
a wireless signal.
3. The method of claim 1, wherein the augmented reality data further comprises at least one link data information describing an association between the anchor data information and the scene data information;
wherein the determining the target scene associated with the target anchor point comprises:
and determining a target scene associated with the target anchor point according to the at least one link data information.
4. The method of claim 3, wherein each link data information includes anchor point identification information of anchor point data information corresponding to an anchor point and scene identification information of scene data information corresponding to a scene associated with the anchor point;
wherein the determining, according to the at least one link data information, a target scene associated with the target anchor point includes:
determining target link data information corresponding to the target anchor point from the at least one link data information;
and determining a target scene associated with the target anchor point according to the target link data information.
5. The method of claim 4, wherein each link data information further comprises a link type, and the link type comprises any one of a scene type and a tracking type;
wherein the determining, according to the target link data information, a target scene associated with the target anchor point includes:
and determining a target scene associated with the target anchor point and presentation attribute information corresponding to the target scene according to the target link data information.
6. The method of claim 4, wherein each link data information further includes a pose relationship between the anchor point and a scene with which the anchor point is associated;
wherein the determining, according to the target link data information, a target scene associated with the target anchor point includes:
and determining a target scene associated with the target anchor point and presentation position information corresponding to the target scene according to the target link data information.
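Claims 4 to 6 can be pictured as a lookup over link records that each carry an anchor identifier, a scene identifier, a link type, and a pose of the scene relative to the anchor. The following sketch is illustrative only; the field names (`anchor_id`, `scene_id`, `type`, `pose`) are assumptions, not a schema defined by the claims.

```python
# Sketch of claims 4-6: find the target link for the recognized anchor and
# return the associated scene id together with its presentation pose.
# Field names are illustrative assumptions.

def resolve_target_scene(links, target_anchor_id):
    """Return (scene_id, pose) for the link matching the target anchor."""
    for link in links:
        if link["anchor_id"] == target_anchor_id:
            return link["scene_id"], link.get("pose")
    return None, None


links = [
    {"anchor_id": "a1", "scene_id": "s1", "type": "scene",
     "pose": {"position": [0.0, 0.1, 0.0], "rotation": [0, 0, 0, 1]}},
    {"anchor_id": "a2", "scene_id": "s2", "type": "tracking", "pose": None},
]
scene_id, pose = resolve_target_scene(links, "a1")
print(scene_id, pose["position"])  # s1 [0.0, 0.1, 0.0]
```

The `pose` field corresponds to the pose relationship of claim 6, from which presentation position information for the target scene can be derived.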
7. The method according to claim 1, wherein the scene data information corresponding to each scene includes object identification information of object data information included in the scene.
8. The method of claim 7, wherein the at least one scene includes at least one parent scene and at least one sub-scene, and the scene data information corresponding to each parent scene further includes scene identification information or scene name information of at least one sub-scene included in the parent scene;
the target scene comprises a target parent scene and at least one target sub-scene;
wherein the determining, according to the scene data information, the target scene associated with the target anchor point includes:
determining a target parent scene associated with the target anchor point according to the scene data information;
and determining at least one target sub-scene corresponding to the target parent scene according to the scene data information corresponding to the target parent scene.
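The parent/sub-scene resolution of claim 8 is naturally recursive: the parent scene's data lists its sub-scenes, and each sub-scene may list further children. A minimal sketch, assuming hypothetical field names `id` and `sub_scene_ids`:

```python
# Sketch of claim 8: resolve a target parent scene and then, from its scene
# data information, all the target sub-scenes it lists (recursively).
# Field names are assumptions.

def resolve_scene_tree(scenes, parent_id):
    """Return the parent scene followed by all of its listed sub-scenes."""
    by_id = {s["id"]: s for s in scenes}
    parent = by_id[parent_id]
    resolved = [parent]
    for child_id in parent.get("sub_scene_ids", []):
        resolved.extend(resolve_scene_tree(scenes, child_id))
    return resolved


scenes = [
    {"id": "s1", "sub_scene_ids": ["s2", "s3"]},
    {"id": "s2", "sub_scene_ids": []},
    {"id": "s3", "sub_scene_ids": []},
]
print([s["id"] for s in resolve_scene_tree(scenes, "s1")])  # ['s1', 's2', 's3']
```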
9. The method of claim 1, wherein the object data information includes an object type;
wherein the object type includes any one of the following:
a 3D model;
text;
a picture;
audio;
video;
a web page;
a PDF document;
an application program;
a point;
a line;
a polygon;
an ellipse;
a free brush.
10. The method of claim 9, wherein the object data information further comprises presentation attribute information that matches the object type.
11. The method of claim 1, wherein the augmented reality data further comprises action data information corresponding to at least one action, wherein at least one object data information and/or at least one scene data information is associated with the action data information;
wherein the method further comprises at least one of:
if the target scene data information corresponding to the target scene is associated with the target action data information, enabling the target scene or at least one piece of augmented reality presentation information contained in the target scene to execute the action corresponding to the target action data information;
and if the target object data information corresponding to the target augmented reality presentation information in the at least one piece of augmented reality presentation information is associated with the target action data information, enabling the target augmented reality presentation information to execute the action corresponding to the target action data information.
12. The method according to claim 11, wherein the target action data information comprises object identification information of the target object data information and/or scene identification information of the target scene data information.
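As an illustrative, non-normative sketch of claims 11 and 12: an action record identifies the scene and/or object it is associated with, so when the target scene is determined, the associated actions can be looked up and executed. The field names `scene_id` and `object_id` here stand in for the scene/object identification information of claim 12 and are assumptions.

```python
# Sketch of claims 11-12: collect the actions whose data information is
# associated with a given scene or a given presentation object.
# Field names are illustrative assumptions.

def actions_for(actions, *, scene_id=None, object_id=None):
    """Return the action records associated with the given scene or object."""
    matched = []
    for a in actions:
        if scene_id is not None and a.get("scene_id") == scene_id:
            matched.append(a)
        elif object_id is not None and a.get("object_id") == object_id:
            matched.append(a)
    return matched


actions = [
    {"id": "act1", "type": "rotate", "object_id": "o1"},
    {"id": "act2", "type": "fade_in", "scene_id": "s1"},
]
print([a["id"] for a in actions_for(actions, scene_id="s1")])   # ['act2']
print([a["id"] for a in actions_for(actions, object_id="o1")])  # ['act1']
```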
13. The method of claim 1, wherein the augmented reality data further comprises event data information corresponding to at least one event, wherein at least one object data information and/or at least one scene data information is associated with the event data information;
wherein the method further comprises at least one of:
if the target scene data information corresponding to the target scene is associated with the target event data information, enabling a specific behavior to be performed on the target scene or on at least one piece of augmented reality presentation information contained in the target scene when the event corresponding to the target event data information is triggered;
and if the target object data information corresponding to the target augmented reality presentation information in the at least one piece of augmented reality presentation information is associated with the target event data information, enabling the target augmented reality presentation information to perform a specific behavior when the event corresponding to the target event data information is triggered.
14. The method of claim 13, wherein the specific behavior comprises jumping to a specified scene, and the target event data information comprises scene identification information of the scene data information corresponding to the specified scene.
15. The method of claim 13, wherein the augmented reality data further comprises motion data information corresponding to at least one motion;
the specific action comprises executing a specified action, and the target event data information comprises action identification information of action data information corresponding to the specified action.
16. The method according to claim 13, wherein the target event data information comprises object identification information of the target object data information and/or scene identification information of the target scene data information.
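Claims 13 to 16 describe event records whose triggered behavior is either jumping to a specified scene (identified by a scene id) or executing a specified action (identified by an action id). A minimal dispatch sketch, with all field names (`behavior`, `trigger`, `scene_id`, `action_id`) and behavior labels assumed for illustration:

```python
# Sketch of claims 13-16: dispatch an event record to the specific behavior
# it describes -- jump to a specified scene, or run a specified action.
# Field names and behavior labels are illustrative assumptions.

def handle_event(event):
    """Return a (behavior, target-id) pair describing what to perform."""
    if event["behavior"] == "jump_to_scene":
        return ("jump", event["scene_id"])
    if event["behavior"] == "run_action":
        return ("action", event["action_id"])
    raise ValueError(f"unknown behavior: {event['behavior']}")


tap_event = {"trigger": "tap", "object_id": "o1",
             "behavior": "jump_to_scene", "scene_id": "s2"}
print(handle_event(tap_event))  # ('jump', 's2')
```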
17. The method of claim 1, wherein the augmented reality data further comprises configuration data information corresponding to at least one presentation configuration item;
wherein the configuration data information includes at least one of:
material data information;
light data information;
camera data information;
script data information;
wherein the presenting at least one augmented reality presentation information contained in the target scene comprises:
and according to the configuration data information, presenting at least one piece of augmented reality presentation information contained in the target scene.
18. The method of claim 1, wherein the data format of the augmented reality data is JSON.
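Since claim 18 states that the augmented reality data is in JSON format, the whole structure of claims 1 to 17 can be carried in a single JSON document. The following example is illustrative only; the key names and the `config` layout are assumptions, as the claims do not fix a schema:

```python
# Minimal illustrative JSON document for the augmented reality data of
# claim 18: anchors, anchor->scene links, scenes with contained objects,
# and presentation configuration. Key names are assumptions.
import json

doc = """
{
  "anchors": [{"id": "a1", "type": "two_dimensional_code"}],
  "links":   [{"anchor_id": "a1", "scene_id": "s1", "type": "scene"}],
  "scenes":  [{"id": "s1", "object_ids": ["o1"]}],
  "objects": [{"id": "o1", "type": "video", "uri": "intro.mp4"}],
  "config":  {"lights": [{"type": "ambient", "intensity": 0.8}]}
}
"""
ar_data = json.loads(doc)
print(sorted(ar_data))  # ['anchors', 'config', 'links', 'objects', 'scenes']
```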
19. A computer device for presenting augmented reality data, comprising a memory, a processor and a computer program stored on the memory, characterized in that the processor executes the computer program to implement the steps of the method of any one of claims 1 to 18.
20. A computer readable storage medium having stored thereon computer programs/instructions which, when executed by a processor, implement the steps of the method according to any one of claims 1 to 18.
21. A computer program product comprising a computer program which, when executed by a processor, implements the steps of the method according to any one of claims 1 to 18.
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310671831.0A CN116645493A (en) | 2023-06-07 | 2023-06-07 | Method, device and medium for presenting augmented reality data |
| PCT/CN2023/120575 WO2024250492A1 (en) | 2023-06-07 | 2023-09-22 | Method for presenting augmented reality data, device and medium |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310671831.0A CN116645493A (en) | 2023-06-07 | 2023-06-07 | Method, device and medium for presenting augmented reality data |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN116645493A true CN116645493A (en) | 2023-08-25 |
Family
ID=87639797
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202310671831.0A Pending CN116645493A (en) | 2023-06-07 | 2023-06-07 | Method, device and medium for presenting augmented reality data |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN116645493A (en) |
| WO (1) | WO2024250492A1 (en) |
Cited By (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2024250492A1 (en) * | 2023-06-07 | 2024-12-12 | 亮风台(上海)信息科技有限公司 | Method for presenting augmented reality data, device and medium |
| CN119607547A (en) * | 2024-12-20 | 2025-03-14 | 福建省软众数字科技有限公司 | A real-time rendering method for augmented reality games |
| WO2025218065A1 (en) * | 2024-04-19 | 2025-10-23 | 亮风台(上海)信息科技有限公司 | Method and device for managing augmented reality space, and medium |
Family Cites Families (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2013157890A1 (en) * | 2012-04-20 | 2013-10-24 | Samsung Electronics Co., Ltd. | Method and apparatus of processing data to support augmented reality |
| CN111445583B (en) * | 2020-03-18 | 2023-08-01 | Oppo广东移动通信有限公司 | Augmented reality processing method and device, storage medium and electronic equipment |
| CN115878858A (en) * | 2022-11-28 | 2023-03-31 | 北京小米移动软件有限公司 | Scenario execution method, device and electronic equipment |
| CN116645493A (en) * | 2023-06-07 | 2023-08-25 | 亮风台(上海)信息科技有限公司 | Method, device and medium for presenting augmented reality data |
2023
- 2023-06-07: CN patent application CN202310671831.0A filed; publication CN116645493A (status: Pending)
- 2023-09-22: PCT application PCT/CN2023/120575 filed; publication WO2024250492A1 (status: Ceased)
Also Published As
| Publication number | Publication date |
|---|---|
| WO2024250492A1 (en) | 2024-12-12 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| EP4394554A1 (en) | Method for determining and presenting target mark information and apparatus | |
| CN116645493A (en) | Method, device and medium for presenting augmented reality data | |
| US20200364937A1 (en) | System-adaptive augmented reality | |
| US10789770B1 (en) | Displaying rich text on 3D models | |
| CN110502298B (en) | Method and equipment for providing update reminding information of electronic book | |
| CN116740314A (en) | A method, device and medium for generating augmented reality data | |
| CN110519250B (en) | Method and equipment for providing information flow | |
| CN112799733B (en) | A method and device for presenting application pages | |
| CN110769300B (en) | Method and equipment for presenting horizontal screen video in information stream | |
| CN110727825A (en) | Animation playing control method, device, server and storage medium | |
| CN114297506B (en) | A method and device for obtaining recommended image information of a target book | |
| CN113490063B (en) | Method, device, medium and program product for live interaction | |
| CN107221346B (en) | It is a kind of for determine AR video identification picture method and apparatus | |
| CN105630792A (en) | Information display method and device as well as information push method and device | |
| Zorrilla et al. | HTML5-based system for interoperable 3D digital home applications | |
| CN111265875B (en) | Method and equipment for displaying game role equipment | |
| CN117389438A (en) | Page display method, device and electronic equipment | |
| CN113965665B (en) | A method and device for determining a virtual live broadcast image | |
| CN111078654A (en) | A method and device for sharing information | |
| CN105760420A (en) | Method and device for achieving interaction with content of multimedia file | |
| CN111666195B (en) | Method and device for providing video information or image information | |
| CN116684540A (en) | Method, device and medium for presenting augmented reality data | |
| CN116664806A (en) | Method, device and medium for presenting augmented reality data | |
| CN109636922B (en) | Method and device for presenting augmented reality content | |
| CN114866801B (en) | Video data processing method, device, equipment and computer readable storage medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | | |
| SE01 | Entry into force of request for substantive examination | | |