CN117876639B - Label rendering method, device, equipment and readable storage medium - Google Patents
- Publication number
- CN117876639B (application CN202410073331.1A)
- Authority
- CN
- China
- Prior art keywords
- label
- skin
- tag
- rendering
- request
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating three-dimensional [3D] models or images for computer graphics
- G06T19/20—Editing of three-dimensional [3D] images, e.g. changing shapes or colours, aligning objects or positioning parts
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—Three-dimensional [3D] image rendering
- G06T15/10—Geometric effects
- G06T15/20—Perspective computation
- G06T15/205—Image-based rendering
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/004—Annotating, labelling
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Graphics (AREA)
- General Physics & Mathematics (AREA)
- Computing Systems (AREA)
- Geometry (AREA)
- Architecture (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Processing Or Creating Images (AREA)
Abstract
The embodiment of the application provides a tag rendering method, a device, equipment and a readable storage medium. After a terminal device identifies a rendering instruction requesting to render a target interface, it sends a first request to a server to request a tag set and the skin ID of each tag in the tag set. For each 3D tag in the tag set, a skin configuration is obtained according to the skin ID of the 3D tag, and the skin configuration is processed according to the size of the 3D tag and a nine-grid rendering mode to render the 3D tag. With this scheme, the terminal device processes the skin configuration according to the size of the 3D tag and the nine-grid rendering mode to render the 3D tag, realizing 3D tag rendering while greatly improving the visual effect of the digital world. Moreover, because different skin IDs correspond to different styles, multiple 3D tags of different sizes can be rendered based on the same style; a 3D tag does not need to be designed for each size, which greatly saves design cost.
Description
Technical Field
The embodiment of the application relates to the technical field of digital twinning, in particular to a method, a device, equipment and a readable storage medium for rendering labels.
Background
With the rapid development of internet technology, digital twinning (Digital Twin) is growing in popularity. Based on digital twinning technology, physical entities in the real world can be mapped into the digital world; the virtual representation of a physical entity in the digital world is referred to as a twin body.
In the digital world, there are a large number of labels that play a role in labeling, such as 2D labels or 3D labels, etc.
However, the current tag rendering method only supports the rendering of 2D tags, and does not support the rendering of 3D tags.
Disclosure of Invention
The embodiment of the application provides a label rendering method, a device, equipment and a readable storage medium, which require only a small number of styles to be designed and render a 3D label on an interface of the digital world based on a nine-grid rendering mode and that small number of styles. The design cost is low, and the visual effect of the digital world is greatly improved while 3D label rendering is realized.
In a first aspect, an embodiment of the present application provides a tag rendering method, applied to a terminal device, where the method includes:
Identifying a rendering instruction requesting to render the target interface;
Responding to the rendering instruction, sending a first request to a server to request the skin ID of each label in a label set, wherein each label in the label set is a label to be displayed on the target interface, different skin IDs correspond to different styles, and the label set at least comprises 3D labels;
For each 3D tag in the tag set, acquiring a skin configuration required for rendering the 3D tag according to a skin ID of the 3D tag;
And for each 3D label in the label set, processing the skin configuration according to the size of the 3D label and a nine-grid rendering mode to render the 3D label, wherein the size of a background plate in the processed skin configuration matches the size of the 3D label.
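The terminal-side flow of the first aspect can be sketched as follows. This is a minimal illustration rather than the patent's implementation: the server API (request_tag_set, get_skin_config) and all field names are hypothetical, and the nine-grid step is reduced to resizing the background plate.

```python
# A minimal, hypothetical sketch of the first-aspect flow on the terminal
# device; all identifiers and field names are illustrative, not from the patent.

class FakeServer:
    """Stands in for the server that holds tag sets and skin configurations."""
    def __init__(self, tag_sets, skins):
        self.tag_sets = tag_sets  # target interface id -> list of tags
        self.skins = skins        # skin ID -> skin configuration

    def request_tag_set(self, interface_id):
        # The "first request": returns the tag set with each tag's skin ID.
        return self.tag_sets[interface_id]

    def get_skin_config(self, skin_id):
        return self.skins[skin_id]

def nine_grid_render(skin_config, tag_size):
    # Placeholder for the nine-grid step: the processed background plate
    # is made to match the tag's size.
    processed = dict(skin_config)
    processed["background_size"] = tag_size
    return processed

def render_target_interface(interface_id, server):
    """Steps of the first aspect: request skin IDs, fetch skin configs,
    then nine-grid-process each 3D tag."""
    rendered = []
    for tag in server.request_tag_set(interface_id):
        if tag["type"] == "3d":  # only 3D tags take this path
            cfg = server.get_skin_config(tag["skin_id"])
            rendered.append((tag["name"], nine_grid_render(cfg, tag["size"])))
    return rendered
```

The point of the sketch is the data flow: one round trip fetches the whole tag set, and every 3D tag of the same skin ID reuses the same style, differing only in the size passed to the nine-grid step.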
In a second aspect, an embodiment of the present application provides a label rendering method, applied to a server, where the method includes:
receiving a first request from terminal equipment, wherein the first request is sent after the terminal equipment identifies a rendering instruction for requesting to render a target interface;
determining a label set and skin IDs of all labels in the label set according to the first request, wherein each label in the label set is a label to be displayed on the target interface, different skin IDs correspond to different styles, and the label set at least comprises 3D labels;
And sending a first response to the terminal equipment, wherein the first response is used for indicating the tag set and the skin IDs of the tags in the tag set.
In a third aspect, an embodiment of the present application provides a tag rendering apparatus integrated on a terminal device, the tag rendering apparatus including:
the identification module is used for identifying a rendering instruction for requesting to render the target interface;
The receiving and transmitting module is used for responding to the rendering instruction and sending a first request to a server to request the skin ID of each label in a label set, wherein each label in the label set is a label to be displayed on the target interface, different skin IDs correspond to different styles, and the label set at least comprises 3D labels;
the acquisition module is used for acquiring the skin configuration required by rendering the 3D label according to the skin ID of the 3D label for each 3D label in the label set;
And the processing module is used for, for each 3D label in the label set, processing the skin configuration according to the size of the 3D label and a nine-grid rendering mode to render the 3D label, wherein the size of the background plate in the processed skin configuration matches the size of the label.
In a fourth aspect, an embodiment of the present application provides a tag rendering apparatus, integrated in a server, including:
The receiving module is used for receiving a first request from the terminal equipment, wherein the first request is sent after the terminal equipment identifies a rendering instruction for requesting to render the target interface;
The processing module is used for determining a label set and skin IDs of all labels in the label set according to the first request, wherein each label in the label set is a label to be displayed on the target interface, different skin IDs correspond to different styles, and the label set at least comprises 3D labels;
and the sending module is used for sending a first response to the terminal equipment, wherein the first response is used for indicating the tag set and the skin IDs of the tags in the tag set.
In a fifth aspect, embodiments of the present application provide an electronic device comprising a processor and a memory, wherein the memory stores a computer program adapted to be loaded by the processor and to perform the method as described above for the first aspect or the various possible implementations of the first aspect, or adapted to be loaded by the processor and to perform the method as described above for the second aspect or the various possible implementations of the second aspect.
In a sixth aspect, embodiments of the present application provide a computer readable storage medium having stored therein computer instructions which, when executed by a processor, are adapted to carry out the method according to the first aspect or the various possible implementations of the first aspect, or which, when executed by a processor, are adapted to carry out the method according to the second aspect or the various possible implementations of the second aspect.
In a seventh aspect, embodiments of the present application provide a computer program product comprising a computer program which, when executed by a processor, implements the method as described above for the first aspect or the various possible implementations of the first aspect, or which, when executed by a processor, implements the method as described above for the second aspect or the various possible implementations of the second aspect.
According to the label rendering method, device and equipment and the readable storage medium, after the terminal device identifies a rendering instruction requesting to render a target interface, a first request is sent to a server to request a label set and the skin IDs of all labels in the label set; for each 3D label in the label set, a skin configuration is obtained according to the skin ID of the 3D label, and the skin configuration is processed according to the size of the 3D label and a nine-grid rendering mode to render the 3D label. With this scheme, the terminal device processes the skin configuration according to the size of the 3D label and the nine-grid rendering mode to render the 3D label, realizing 3D label rendering while greatly improving the visual effect of the digital world. Moreover, because different skin IDs correspond to different styles, multiple 3D labels of different sizes can be rendered based on the same style; a 3D label does not need to be designed for each size, which greatly saves design cost.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of an implementation environment to which a label rendering method according to an embodiment of the present application is applicable;
FIG. 2 is a flow chart of a rendering method provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of a digital world interface in a label rendering method according to an embodiment of the present application;
Fig. 4A to fig. 4E are schematic diagrams of different styles in the label rendering method according to the embodiment of the present application;
Fig. 5 is a schematic diagram of a label structure in a label rendering method according to an embodiment of the present application;
Fig. 6 is a schematic diagram of a nine-square grid in the label rendering method according to the embodiment of the present application;
fig. 7 is a rendering flow chart of a 3D tag in a rendering method according to an embodiment of the present application;
fig. 8 is a schematic diagram of different states of the same label in the label rendering method according to the embodiment of the present application;
Fig. 9 is a schematic diagram of an editing interface in a label rendering method according to an embodiment of the present application;
FIG. 10 is another flowchart of a label rendering method provided by an embodiment of the present application;
fig. 11 is a schematic diagram of a label rendering apparatus according to an embodiment of the present application;
fig. 12 is a schematic diagram of a label rendering apparatus according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Currently, digital twinning technology can create, in the virtual world, a digital world that is a 1:1 replica of the real physical world. Each physical entity in the physical world has a corresponding twin body in the digital world. A twin body is, for example, a teaching building, a camera, a square, a playground, etc. Meanwhile, the digital world also contains a large number of labels that play a labeling role. For example, a three-dimensional geographic information system (Geographic Information System, GIS) map is a common digital world in which there are a large number of 3D tags and 2D tags.
When a user refreshes a map, a large number of 3D tags inevitably need to be rendered and displayed. However, the industry currently lacks a rendering method for 3D labels.
For 2D tags, the conventional rendering method requires a designer to design labels of various sizes and shapes in advance. At rendering time, the terminal device renders each label according to its size and shape and displays it. For example, one label is named "Future Digital Library" and another is named "Teaching Building A"; although the two labels are identical in every property except length, the designer still has to design both. For another example, one label is named "Camera B" and another "Alarm C"; the two labels are identical except for font size: "Camera B" uses a small-five font size while "Alarm C" uses a four font size, so the "Alarm C" label is larger in visual effect. Again, the designer has to design two labels.
Obviously, the label rendering mode has high design cost.
To reduce this design cost, a common approach is stretching: labels of different sizes are produced by stretching the same style. For example, the two labels "Future Digital Library" and "Teaching Building A" are both stretched from the same style. Although this rendering method saves design cost, it is prone to distortion and the overall effect is poor.
According to the label rendering method, device and equipment and the readable storage medium of the embodiments of the application, only a small number of styles need to be designed, and the 3D label is rendered on the interface of the digital world based on the nine-grid rendering mode and that small number of styles, so the visual effect of the digital world is greatly improved while 3D label rendering is realized. In addition, whether a 2D label or a 3D label is rendered, the style does not need to be stretched, so no distortion occurs.
Fig. 1 is a schematic diagram of an implementation environment to which a label rendering method according to an embodiment of the present application is applicable. Referring to fig. 1, the implementation environment includes a rendering end 11, an editing end 12, and a server 13. The rendering end 11 and the editing end 12 respectively establish network connection with the server 13.
The rendering end 11 is, for example, an intelligent interactive tablet, a notebook, a desktop, a mobile phone, a tablet computer, etc., and the embodiment of the application is not limited. The rendering end 11 has a display screen for displaying the rendered target interface. For example, there are two buttons on the interface of the digital world, one button is named technology introduction for entering the technology introduction interface, the other button is named campus guide for entering the campus guide interface, and the technology introduction interface and the campus guide interface are provided with a plurality of labels respectively. The user clicks the "technical introduction" button, the target interface is a technical introduction interface, and the rendering end 11 renders and displays the technical introduction interface, which has at least a twin body and a label of the twin body.
There may be a plurality of labels on one target interface, which labels correspond to only a small number of styles. For example, there are 200 labels on the target interface, but in practice there are only 6 styles. That is, the designer only needs to design 6 styles, and the rendering end 11 can render 200 labels based on the 6 styles. Taking two labels of a future digital library and a teaching building A as examples, the two labels are in the same style, but the visual effects of the two labels are different, namely the labels of the future digital library are longer, and the labels of the teaching building A are shorter.
The editing end 12 is, for example, an intelligent interactive tablet, a notebook, a desktop, a mobile phone, a tablet computer, etc., and the embodiments of the present application are not limited. The editing end 12 is used to display an editing interface on which a twin body and a plurality of styles are displayed. The user operates on the editing interface to configure the style of the tag for the target twin body selected by the user. When a style is assigned to a tag, the editing end 12 sends the skin ID corresponding to the style of the tag and the tag identification to the server 13, which in effect records in the server 13 the correspondence between the tag and its skin ID.
Each skin ID is a unique identifier such as poi-label-0, poi-label-1, etc. Different skin IDs correspond to different styles. Each style has a plurality of different states, such as a default state, a low state, a middle state, a high state, and a superhigh state, and the display effects of the different states differ. Each state has corresponding rendering resources and rendering rules. The rendering resources include the background plate and the like used for rendering the label, and the rendering rules indicate how the various components of the label are typeset.
In addition, a tag has either a 2D display style or a 3D display style. Therefore, besides the various states described above, the style corresponding to a skin ID has both a configuration for the 2D display style and a configuration for the 3D display style. The former is used to render 2D labels based on the style corresponding to the skin ID, and the latter is used to render 3D labels based on that style.
The server 13 generates and stores a configuration table in advance from the rendering resources in each state of the skin ID, the configuration of different presentation styles, and the like. The rendering side 11 obtains and caches the configuration table from the server 13.
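A configuration table of this kind might be structured as follows; this sketch is an assumption, since the patent does not specify the table's schema. The state names mirror those listed above, while the field names (states, resources, rules, icon_position) and the bg_*.png file names are illustrative.

```python
# Hypothetical schema for the configuration table; the patent names the five
# states but not the table layout, so every field name here is an assumption.
STATES = ["default", "low", "middle", "high", "superhigh"]

CONFIG_TABLE = {
    "poi-label-0": {
        # per-state rendering resources and rendering rules
        "states": {
            s: {"resources": {"background": f"bg_{s}.png"},
                "rules": {"icon_position": "outside"}}
            for s in STATES
        },
        # separate configurations for the 2D and 3D display styles
        "display": {"2d": {"render": "2d"}, "3d": {"render": "3d"}},
    },
}

def lookup_skin(skin_id, state="default"):
    """What the rendering end does with a received skin ID: query the cached
    configuration table for that state's resources and rules."""
    entry = CONFIG_TABLE[skin_id]
    return entry["states"][state], entry["display"]
```

Caching this table on the rendering end means the per-tag cost at render time is a local dictionary lookup rather than a network round trip.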
When a label needs to be displayed on the target interface, the server 13 sends the skin ID of the label to the rendering end 11. After receiving the skin ID (default state), the rendering end 11 can determine the rendering resources and rendering rules required for rendering the label by querying the configuration table with the skin ID. Then, the rendering end 11 processes the rendering resources according to the label type, the size of the label and the nine-grid rendering mode, and typesets the processed rendering resources according to the rendering rules, thereby rendering the 3D label or the 2D label.
The server 13 may be an independent physical server, a server cluster formed by a plurality of servers, a cloud server with cloud computing capability, or the like, and the embodiment of the present application is not limited. The server 13 holds a label to be displayed on each target interface, a skin ID of each label, a configuration table, and the like.
It should be noted that, although the rendering end 11 and the editing end 12 are two independent terminal devices in fig. 1, embodiments of the present application are not limited. In other possible implementations, the rendering end 11 and the editing end 12 may be the same terminal device, that is, the user sets the style of the tag for the twin body on the same terminal device, and renders and displays the target interface on the terminal device.
It should be understood that the number of rendering side 11, editing side 12 and servers 13 in fig. 1 is merely illustrative. In practical implementation, any number of rendering ends 11, editing ends 12 and servers 13 are deployed according to practical requirements.
The rendering method according to the embodiment of the present application will be described in detail below based on the implementation environment shown in fig. 1. For example, referring to fig. 2, fig. 2 is a flowchart of a rendering method according to an embodiment of the present application. The embodiment is an explanation from the point of view of interaction between the terminal device and the server, and includes:
201. The terminal device identifies a rendering instruction requesting rendering of the target interface.
The terminal device is, for example, the rendering end. And displaying a digital world interface on a display screen of the terminal equipment, and enabling a user to operate on the digital world interface so as to issue a rendering instruction.
Fig. 3 is a schematic diagram of a digital world interface in a label rendering method according to an embodiment of the present application. Referring to fig. 3, buttons 31 to 37 are provided on the digital world interface. The buttons 31-34 are switching buttons for switching to different digital world interfaces, the button 35 is a downward restore button, and the buttons 36 and 37 are a backward button and a forward button, respectively. The user clicks any one button, thereby transmitting a rendering instruction to the terminal device. The terminal device recognizes the rendering instruction. The rendering instructions are for requesting rendering and displaying of the target interface.
For example, the user clicks the button 32, and the terminal device recognizes that the user wants to switch to the target interface corresponding to that button, such as a campus guide. For another example, when the current digital world interface is maximized and the user clicks the button 35, the terminal device recognizes that the user wants to restore the current digital world interface downward, i.e., the target interface is a scaled-down version of the current digital world interface. Although this triggers label rendering, the same batch of labels is displayed on the digital world interfaces before and after the downward restore, and the rendering is fast, so to the user it appears as if no label rendering had occurred.
It should be noted that fig. 3 is actually a 3D schematic view; there is no popup window in fig. 3, and each tag in fig. 3 is a 3D tag.
202. And the terminal equipment responds to the rendering instruction and sends a first request to the server to request the skin ID of each tag in the tag set.
Accordingly, the server receives the first request. Each label in the label set is a label to be displayed on the target interface, different skin IDs correspond to different styles, and the label set at least comprises a 3D label.
In the embodiment of the application, the set formed by all the tags on the target interface is called a tag set, and one tag set comprises several, tens, hundreds or even more tags. The tag set at least comprises 3D tags. For example, if there is no popup window on the target interface, the labels in the label set are all 3D labels, if there is a popup window on the target interface and there is a label in the popup window, the label in the popup window is a 2D label, and the label outside the popup window is a 3D label.
203. The server determines a tag set according to the first request, and skin IDs of all tags in the tag set.
A label set corresponding to each target interface is pre-stored on the server. After receiving the first request, the server determines the tag set according to the page identifier and the like carried by the first request. Further, for each tag in the set of tags, the server determines the skin ID of the tag. Different skin IDs correspond to different styles.
Fig. 4A to fig. 4E are schematic diagrams of different styles in the label rendering method according to the embodiment of the present application, and fig. 5 is a schematic diagram of a label structure in the label rendering method according to the embodiment of the present application. The label shown in fig. 5 uses the style shown in fig. 4E. Referring to fig. 4A to 4E, different styles have different visual effects, such as different shapes of the background plate, different label icons, different positions of the label icons, and so on. For example, in FIG. 4A the label icon sits in a hexagon that is independent of the background plate, while in FIG. 4B the label icon is located within the background plate.
It should be noted that fig. 4A to 4E illustrate only 5 different styles; in practice, there may be more than ten, a hundred, or even more. Different styles correspond to different skin IDs; for example, the skin IDs of the styles shown in FIG. 4A-FIG. 4E are poi-label-0, poi-label-1, poi-label-2, poi-label-3, and poi-label-4, respectively. Based on these label styles, a large number of 2D labels and/or 3D labels can be rendered.
In the tag set described above, the styles of some tags are likely the same, that is, their skin IDs are the same. For example, in fig. 3, the skin IDs of the Starlight Course and the Stadium are the same, and the skin IDs of the remaining tags are the same as one another. After the server determines the tag set, it further determines the skin ID of each tag in the tag set.
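Server-side, steps 203 and 204 amount to two table lookups. The sketch below is hypothetical: the page_id field, the table names, and the example tags are assumptions for illustration; note how two tags can share one skin ID.

```python
# Hypothetical server-side handling of the first request (steps 203-204).
# Table contents, the page_id field and the skin IDs are illustrative examples.
TAG_SETS = {  # pre-stored: target interface -> tags to display on it
    "page-campus": ["Starlight Course", "Stadium", "Teaching Building A"],
}
SKIN_IDS = {  # per-tag skin ID; several tags may share one style
    "Starlight Course": "poi-label-3",
    "Stadium": "poi-label-3",
    "Teaching Building A": "poi-label-1",
}

def handle_first_request(request):
    """Determine the tag set from the page identifier, attach each tag's
    skin ID, and build the first response."""
    tags = TAG_SETS[request["page_id"]]
    return {"tags": [{"name": t, "skin_id": SKIN_IDS[t]} for t in tags]}
```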
204. The server sends a first response to the terminal device.
Accordingly, the terminal device receives the first response from the server. The first response is used to indicate the set of tags and the skin ID of each tag in the set of tags.
205. The terminal device obtains the skin configuration of the tag.
For each tag in the tag set, the terminal device obtains the skin configuration required for rendering the tag according to the skin ID of the tag. The skin configuration typically contains rendering resources including a background board and the like, and rendering rules for indicating the typesetting manner of the respective constituent parts of the label, such as the label icon being located inside the background board or outside the background board and the like.
206. For each 3D label in the label set, the terminal device processes the skin configuration according to the size of the 3D label and a nine-grid rendering mode to render the 3D label, wherein the size of a background plate in the processed skin configuration matches the size of the 3D label.
Referring to fig. 5, the size of the 3D tag is related to the length of the tag name, the font size of the tag name, and so on. For example, the two 3D tags "Future Digital Library" and "Teaching Building A" differ in length and therefore in size, but their skin IDs are the same, i.e. their styles are the same. For another example, the two 3D tags "Starlight Course" and "Stadium" differ in font size and therefore in size, but their skin IDs are the same, i.e. their styles are the same.
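The patent states only that a 3D tag's size depends on the name length and font size, without giving a formula; the following linear estimate, and all of its constants, is purely an assumption to make that dependency concrete.

```python
# The patent only says the 3D tag size depends on the name length and font
# size; this linear estimate (and its constants) is purely an assumption.
def tag_size(name, font_px, padding_px=8, icon_px=24):
    width = icon_px + 2 * padding_px + len(name) * font_px  # longer name -> wider tag
    height = font_px + 2 * padding_px                       # bigger font -> taller tag
    return width, height
```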
In the embodiment of the application, the terminal equipment can obtain the 3D labels with different sizes by using a nine-grid rendering mode according to the sizes of the different 3D labels based on the same style.
The nine-grid rendering mode has the advantages of strong adaptability, preserved image quality, high flexibility, polymorphism (multi-state) support, and the like. Strong adaptability means that nine-grid rendering can automatically adjust the size of the background plate according to the size of its container, making it suitable for different screens and devices as well as for different label-name sizes.
Maintaining image quality means ensuring that the background plate does not stretch or distort during zooming by dividing the background plate into nine regions.
The high flexibility means that the nine-grid rendering mode can be applied to rendering of various scenes, including background plates, buttons or panels, and the like, so that the nine-grid rendering mode can adapt to different size requirements.
Supporting polymorphism means that, by configuring skin effects for multiple states, detecting the label state in real time, and rendering the skin effect corresponding to the current state, the label is enabled to support multiple states.
According to the label rendering method provided by the embodiment of the application, after the terminal device identifies a rendering instruction requesting to render a target interface, a first request is sent to the server to request the label set and the skin ID of each label in the label set; for each 3D label in the label set, the skin configuration is obtained according to the skin ID of the 3D label, and the skin configuration is processed according to the size of the 3D label and the nine-grid rendering mode to render the 3D label. With this scheme, the terminal device processes the skin configuration according to the size of the 3D label and the nine-grid rendering mode to render the 3D label, realizing 3D label rendering while greatly improving the visual effect of the digital world. Moreover, because different skin IDs correspond to different styles, multiple 3D labels of different sizes can be rendered based on the same style; a 3D label does not need to be designed for each size, which greatly saves design cost.
Optionally, in the foregoing embodiment, before the terminal device processes the skin configuration according to the size of each 3D tag and the nine-grid rendering mode to render the 3D tag, the terminal device first determines, for each tag in the tag set, a tag type, where the tag type indicates whether the tag is a 2D tag or a 3D tag. For each tag in the tag set, the tag is determined to be a 3D tag when it is located outside a popup window on the target interface, and a 2D tag when it is located inside a popup window.
In the embodiment of the application, the digital world is a world created in the virtual world as a 1:1 replica of the real physical world. Thus, the digital world interface is a 3D interface, and the labels on the 3D interface are 3D labels, as shown in fig. 3. However, popup windows and the like are also displayed on the digital world interface, and the tags in the popup windows are 2D tags. When the terminal device requests to render the target interface, the target interface may be a pure 3D interface or a 3D interface containing a popup window. When the target interface is a pure 3D interface, each label in the label set is a 3D label, and nine-grid rendering is performed as 3D rendering. When the target interface is a 3D interface containing a popup window, the tag set includes 2D tags located inside the popup window and 3D tags located outside it. For a 2D label, nine-grid rendering is performed as 2D rendering.
After the terminal equipment acquires the label set, for each label, the terminal equipment determines the label type of the label according to the relative position relation between the label and the popup window. When one tag is located in the popup window, the terminal device determines that the tag is a 2D tag, and when one tag is located outside the popup window, the terminal device determines that the tag is a 3D tag.
By adopting the scheme, the terminal equipment determines the label type and the rendering type according to the position of the label relative to the popup window on the target interface so as to render the label, thereby achieving the purpose of improving the label rendering effect.
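The tag-type rule above (a tag inside a popup window on the target interface is a 2D tag, one outside it is a 3D tag) can be sketched as a small JavaScript helper. The rectangle representation of popups and the point representation of a tag position are illustrative assumptions, not structures defined by the patent.

```javascript
// Determine the tag type from the tag's position relative to the popup
// windows on the target interface: inside any popup => '2D', else '3D'.
// Popups are assumed to be axis-aligned rectangles {x, y, w, h}.
function tagType(tag, popups) {
  const inPopup = popups.some(p =>
    tag.x >= p.x && tag.x <= p.x + p.w &&
    tag.y >= p.y && tag.y <= p.y + p.h);
  return inPopup ? '2D' : '3D';
}
```

The same check also selects the rendering type, since 2D tags use 2D nine-grid rendering and 3D tags use 3D nine-grid rendering.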
Optionally, in the foregoing embodiment, for each 2D tag in the tag set, the terminal device converts the skin configuration into a frame syntax according to the size of the 2D tag. Then, the terminal equipment mounts the frame syntax according to the position of the label on the target interface so as to render the 2D label.
Fig. 6 is a schematic diagram of a nine-square grid in the label rendering method according to the embodiment of the present application. Referring to fig. 6, the background plate is rectangular as shown by the thick solid line. Based on a nine-grid rendering mode, the terminal equipment divides the background plate into nine areas, namely four corners, four sides and a central area. The four corners are shown as the upper left corner, the lower left corner, the upper right corner and the lower right corner in the figure, namely, the region 1 to the region 4 in fig. 6. The four corners, also called corner regions (corner regions), are each used to compose the four corners of the final rendered label.
The four sides are shown as the left, right, top and bottom regions in the figure, i.e., region 5 to region 8 in fig. 6. The four sides, also known as edge regions (edge regions), are repeated, scaled or modified in the final rendered label to match the size of the label.
The central region (middle region), region 9 in the figure, is discarded by default, but is used for label display if the keyword fill is set. The central region is stretchable, while the corners and edges are fixed. The terminal device calculates and adjusts the position and the size of each area according to the size of the container and the sizes of the nine areas, so that adaptive scaling of the background plate is realized. Here, the container size can be understood as being determined by the content of the label, such as the number of characters in the tag name.
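The partition into four fixed corners, four stretched edges and a scalable centre described above can be sketched in JavaScript as follows. The function name, the per-side slice thicknesses and the rectangle representation are illustrative assumptions, not the patent's own code.

```javascript
// Partition a container of the given size into the nine regions of the
// nine-grid: four fixed corners, four stretchable edges, and the centre
// (which is discarded by default unless `fill` is set). `slice` gives the
// fixed border thicknesses on each side.
function nineGridRegions(width, height, slice) {
  const { left, right, top, bottom } = slice;
  const midW = width - left - right;   // stretchable horizontal span
  const midH = height - top - bottom;  // stretchable vertical span
  return {
    topLeft:     { x: 0,              y: 0,               w: left,  h: top },
    topRight:    { x: width - right,  y: 0,               w: right, h: top },
    bottomLeft:  { x: 0,              y: height - bottom, w: left,  h: bottom },
    bottomRight: { x: width - right,  y: height - bottom, w: right, h: bottom },
    topEdge:     { x: left,           y: 0,               w: midW,  h: top },
    bottomEdge:  { x: left,           y: height - bottom, w: midW,  h: bottom },
    leftEdge:    { x: 0,              y: top,             w: left,  h: midH },
    rightEdge:   { x: width - right,  y: top,             w: right, h: midH },
    middle:      { x: left,           y: top,             w: midW,  h: midH }
  };
}
```

Because only the edges and centre stretch while the corners stay fixed, the same style scales to labels of any size, which is the design-cost saving the scheme claims.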
For 2D tags, the terminal device converts the skin configuration into a frame syntax based on the cascading style sheet (Cascading Style Sheets, CSS) frame picture (Border-image). The frame syntax includes at least one of the following parameters:
A frame picture slice (Border-image-slice) for setting a cutting size of the frame picture;
a frame picture width (Border-image-width) for setting a width of the frame picture;
A Border picture out-of-limit amplitude (Border-image-outset) for indicating an amount by which the Border picture exceeds an area;
A frame picture repetition (Border-image-repeat) for setting the repetition form of the top, bottom, left and right edges;
A frame picture path (Border-image-source) for specifying the source picture of the frame.
After the terminal equipment determines the frame syntax, the style corresponding to the skin ID is mounted to the position of the label on the target interface through the JavaScript (JS) document object model (Document Object Model, DOM), so that the label is rendered.
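A hedged sketch of this 2D path: converting a skin configuration into the CSS border-image frame syntax and mounting it via the JS DOM. The fields `plane` (background-plate picture URL) and `slice` (border thickness) are assumptions for illustration; only the CSS border-image properties themselves are standard.

```javascript
// Build the frame syntax (CSS border-image declarations) from a skin
// configuration. `skin.plane` and `skin.slice` are hypothetical fields.
function toBorderImageStyle(skin) {
  return {
    borderImageSource: `url(${skin.plane})`, // Border-image-source
    borderImageSlice:  `${skin.slice} fill`, // Border-image-slice; `fill` keeps the centre
    borderImageWidth:  `${skin.slice}px`,    // Border-image-width
    borderImageOutset: '0',                  // Border-image-outset
    borderImageRepeat: 'stretch'             // Border-image-repeat
  };
}

// Mount the frame syntax at the label's position on the target interface
// via the DOM (browser-only; shown here for illustration).
function mount2DLabel(container, skin, text) {
  const el = document.createElement('div');
  el.textContent = text;
  Object.assign(el.style, toBorderImageStyle(skin));
  el.style.borderStyle = 'solid';            // border-image needs a border to paint into
  el.style.borderWidth = `${skin.slice}px`;
  container.appendChild(el);
  return el;
}
```

With `stretch` repetition and the `fill` keyword, the four corners of the picture stay fixed while the edges and centre scale with the label's content, matching the nine-grid behaviour described above.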
By adopting the scheme, the terminal equipment realizes the rendering of the 2D label by mounting the frame syntax, and the aim of improving the display effect of the 2D label in the digital world interface popup window is fulfilled.
Fig. 7 is a rendering flow chart of a 3D tag in a rendering method according to an embodiment of the present application. The embodiment comprises the following steps:
701. and determining the size of a nine-grid of the background plate in the skin configuration according to the size of the 3D label.
For any one tag in the tag set, when the tag type indicates that the tag is a 3D tag and the rendering type is 3D rendering, the electronic device realizes 3D rendering based on a nine-grid geometry class (NinePalaceGridGeometry) implemented in three.js. The class implementing the nine-grid geometry in three.js inherits from THREE.BufferGeometry and overrides the constructor and update methods.
In this step, the terminal device updates the attributes of the nine-grid geometry object, for example, the size of the stretched area and the image of the nine-square grid, by using the update method, and calculates the left, right, upper and lower boundaries of the nine-square grid, i.e., the nine-grid size, according to the stretched area and the image size in the parameters of the rendering rule.
702. And determining the UV coordinates of the nine-square grid according to the nine-grid size.
For example, the terminal device determines the UV coordinates of the nine-square grid according to the nine-grid size and the image size. UV coordinates refer to the plane in which every two-dimensional image file lies: the horizontal direction is U and the vertical direction is V.
703. And updating the vertex array and the coordinate array according to the size of the nine palace lattice and the UV coordinates.
For example, the terminal device determines the boundary of the nine-square according to the size of the nine-square, and updates the vertex array mVtx and the UV coordinate array nUVs according to the boundary of the nine-square and the UV coordinates.
704. And processing the background plate according to the updated vertex array and the updated coordinate array so that the size of the background plate is self-adaptive to the size of the label.
The terminal device sets the updated vertex array and the updated coordinate array as attributes of the geometry respectively by using the set attribute (setAttribute) method, namely, processes the background plate, so that the size of the background plate is adaptive to the size of the label.
705. 3D rendering effects are shown.
The terminal equipment typesets the processed background plate and the like according to the rendering rules in the skin configuration, thereby rendering and displaying the 3D label.
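Steps 701 to 704 can be sketched as a pure computation of the nine-grid boundaries, the vertex array and the UV coordinate array; in the patent's scheme the resulting arrays would then be set on a THREE.BufferGeometry via setAttribute. The function below is an illustrative assumption (a uniform border thickness and a plain 4x4 vertex grid for the nine quads), not the NinePalaceGridGeometry implementation itself.

```javascript
// Compute the position and UV arrays for a nine-slice background plate.
// labelW/labelH: target label size; imgW/imgH: source image size;
// border: fixed corner/edge thickness (assumed uniform here).
function nineSliceArrays(labelW, labelH, imgW, imgH, border) {
  // x/y positions of the 4x4 grid of vertices (the nine-grid boundaries),
  // expressed in label space: corners stay `border` wide, the middle stretches.
  const xs = [0, border, labelW - border, labelW];
  const ys = [0, border, labelH - border, labelH];
  // Matching UV coordinates in image space (U horizontal, V vertical),
  // so the corner texels are never stretched.
  const us = [0, border / imgW, 1 - border / imgW, 1];
  const vs = [0, border / imgH, 1 - border / imgH, 1];
  const vertices = [];
  const uvs = [];
  for (let j = 0; j < 4; j++) {
    for (let i = 0; i < 4; i++) {
      vertices.push(xs[i], ys[j], 0); // z = 0: the plate is a flat plane
      uvs.push(us[i], vs[j]);
    }
  }
  return { vertices, uvs };
}
// In the three.js-based scheme, these arrays would be applied roughly as:
//   geometry.setAttribute('position', new THREE.BufferAttribute(new Float32Array(vertices), 3));
//   geometry.setAttribute('uv', new THREE.BufferAttribute(new Float32Array(uvs), 2));
```

Calling the function again with a new label size (step 703's update) regenerates both arrays, which is what makes the background plate adaptive to the label size in step 704.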
By adopting the scheme, the 3D label is rendered in a nine-grid rendering mode, and the aim of improving the display effect of the 3D label in the digital world interface is fulfilled.
Optionally, in the foregoing embodiment, before the terminal device sends the first request to the server to request the skin ID of each tag in the tag set in response to the rendering instruction, a second request is further sent to the server to obtain the configuration table. Accordingly, the server receives the second request. And after receiving the second request, the server sends a second response carrying the configuration table to the terminal equipment. And the terminal equipment caches the configuration table after receiving the second response.
Illustratively, the server creates and stores in advance a configuration table that maintains the skin configuration of every skin ID's style. The second request is, for example, a hypertext transfer protocol (Hypertext Transfer Protocol, HTTP) request, or the like. When the user roams in the digital world through the terminal equipment, the terminal equipment downloads the configuration table from the server. In this way, each time the terminal device requests to render the target interface, after obtaining the tag set and the skin ID of each tag in the tag set, the terminal device can determine the skin configuration corresponding to each skin ID by looking up the cached configuration table with that skin ID.
In the embodiment of the present application, the skin configuration is, for example, a JSON object, which contains configuration information of a skin ID. Each skin ID has a unique identifier, such as poi-lable-0, poi-lable-1, etc. When the style corresponding to the skin ID comprises a plurality of states, the skin configuration of the skin ID comprises rendering resources and rendering rules of each state. The structure of the skin configuration is, for example, as follows:
Top-level structure, indicating that what follows is the skin configuration of each skin ID.
poi-lable-0: a skin ID. More skin IDs can be added for each POI as required, such as poi-lable-1, poi-lable-2, etc.; the more styles there are, the more skin IDs there are.
assets: the URLs of the rendering resources in the different states of the skin ID. For example, the style of a skin ID has four states, default, low, middle and high, each with its own rendering resources, which include uniform resource locators (Uniform Resource Locator, URL) of resources such as the background plate (plane), line, icon and AR tag (AR Tag).
2D, configuration including 2D presentation style. The configuration contains background plate (plane), line, icon (icon), and AR Tag (AR Tag) elements, each with corresponding properties such as height, width, and position.
3D, configuration including 3D presentation style. The configuration contains background plate (plane), line, icon (icon), and AR Tag (AR Tag) elements, each with corresponding attributes such as size, offset, and center point.
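An illustrative example of the skin-configuration structure just described, written as a JavaScript object literal. Every URL, field name and numeric value here is made up for illustration; the patent specifies only the overall shape (top level keyed by skin ID, per-state assets, and 2D/3D display-style configurations).

```javascript
// Hypothetical configuration table: skin ID -> skin configuration.
const skinConfigTable = {
  'poi-lable-0': {
    assets: { // rendering-resource URLs per state
      default: { plane: 'https://example.com/plane-default.png',
                 line:  'https://example.com/line-default.png',
                 icon:  'https://example.com/icon-default.png',
                 arTag: 'https://example.com/artag-default.png' },
      high:    { plane: 'https://example.com/plane-high.png',
                 line:  'https://example.com/line-high.png',
                 icon:  'https://example.com/icon-high.png',
                 arTag: 'https://example.com/artag-high.png' }
    },
    '2D': { plane: { width: 120, height: 48 },       // 2D display style
            line:  { height: 30 },
            icon:  { width: 16, height: 16, position: 'left' } },
    '3D': { plane: { size: [2, 0.8], offset: [0, 1.5], center: [0.5, 0.5] } }
  }
};

// Determining the skin configuration then reduces to indexing by skin ID.
function lookupSkin(table, skinId) {
  return table[skinId] || null;
}
```

Caching this table on the terminal device makes every subsequent lookup a local dictionary access, which is why the scheme claims high speed and accuracy.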
By adopting the scheme, the terminal equipment acquires and caches the configuration table in advance, and when the target interface is rendered, the skin configuration can be determined by looking up the configuration table according to the skin ID, so that the speed is high and the accuracy is high.
Optionally, in the above embodiment, the server further monitors whether a 3D tag switched from the first state to the second state exists in the tag set. And when the server determines that the 3D label switched from the first state to the second state exists in the label set, sending a state change message to the terminal equipment, wherein the state change message indicates the 3D label with the changed state and the second state of the 3D label. After the terminal equipment monitors the state change message, rendering resources corresponding to the second state of the 3D label with the changed state are obtained. And then, re-rendering the 3D label according to the rendering resources corresponding to the second state.
In the embodiment of the application, the label state is a mechanism, added on the basis of nine-grid rendering, for distinguishing different states of a label. When the style corresponding to a skin ID includes a plurality of states, the skin configuration of the skin ID includes the rendering resources of each state; for example, the rendering resources correspond to a default (default) state, a low (low) state, a middle (middle) state, a high (high) state and a super-high state, respectively. Each tag has a skin ID, i.e. one tag has a style corresponding thereto, and tags with the same skin ID correspond to the same style, based on which a 2D tag or a 3D tag can be rendered. When a tag is a 3D tag, the 3D tag may have various states, namely the default state, the low state, the middle state, the high state and the super-high state described above. The alarm levels represented by the display effects of the different states are also different. The state of the 3D tag is related to the alarm urgency.
Fig. 8 is a schematic diagram of different states of the same 3D label in the label rendering method according to the embodiment of the present application. Referring to fig. 8, fig. 8 illustrates a 3D tag of a camera, which is a low state, a middle state, a high state, and a super high state in order from left to right.
For example, teaching building A is provided with a monitoring camera, and when the monitoring camera captures a fire, the 3D tag of the monitoring camera is switched from the default state to the high state. After the server monitors that the 3D tag has switched from the default state to the high state, the server sends a state change message to the terminal device, for example an instant messaging (Instant Messaging, IM) message.
After receiving the IM message, the terminal equipment acquires, according to the skin ID of the label carried by the IM message and the identifier of the second state, the rendering resources corresponding to the second state from the configuration table, and re-renders the 3D label, thereby changing the state of the 3D label to the second state. For example, the terminal device invokes the 2D and 3D update (update) methods, passing the second state in the IM message as a parameter to the update method so as to update the state of the 3D tag.
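The state-change flow can be sketched as follows: on receiving an IM-style message naming the tag and its second state, look up the rendering resources for that state in the cached configuration table and hand the state to an update callback. The message shape and field names (`skinId`, `tagId`, `secondState`) are assumptions for illustration, not the patent's actual protocol.

```javascript
// Handle a state-change message: fetch the second state's rendering
// resources from the cached configuration table and trigger the update.
// `configTable` maps skin ID -> { assets: { state: resources } }.
function onStateChange(message, configTable, updateTag) {
  const skin = configTable[message.skinId];
  if (!skin) return null;                              // unknown skin ID
  const resources = skin.assets[message.secondState];  // e.g. 'high'
  if (!resources) return null;                         // state not configured
  updateTag(message.tagId, message.secondState, resources);
  return resources;
}
```

In the scheme above, `updateTag` would invoke the 2D/3D update methods so the tag is re-rendered with the second state's background plate, line and icon.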
By adopting the scheme, the label state is increased on the basis of nine-grid rendering, so that the method has more adaptability and flexibility.
Optionally, in the foregoing embodiment, before the terminal device executes the label rendering method according to the embodiment of the present application, an editing interface is further displayed, where the editing interface displays the target interface and multiple styles, and the different styles correspond to different skin IDs. And then, the terminal equipment responds to a first selection operation of a user, identifies a target twin body selected by the user from twin bodies displayed on the target interface, and responds to a second selection operation of the user for selecting a target style in the editing interface, and configures a label of the target style for the target twin body. And finally, the terminal equipment sends a third request to the server, wherein the third request carries the corresponding relation between the label of the twin body on the target interface and the skin ID of the label. And after receiving the third request, the server stores the corresponding relation.
The terminal device also edits each twin on the target interface to configure a tag for each twin before executing the tag rendering method according to the embodiment of the present application. In the editing process, the terminal equipment selects a target twin on the target interface, and then selects a label of the target style for the target twin from the styles displayed on the editing interface. In addition, the user can set a tag name or the like for the target twin through the editing interface. Then, the terminal equipment configures the tag of the target style for the target twin according to the tag name input by the user, the target style selected by the user and the like. The target twin is a twin outside the popup window of the target interface or a twin within the popup window. When the target twin is a twin outside the popup window, the terminal equipment generates a 3D label for the twin, and when the target twin is a twin within the popup window, the terminal equipment generates a 2D label for the twin.
Fig. 9 is an editing interface schematic diagram in the label rendering method according to the embodiment of the present application. Referring to fig. 9, the user selects a building from the target interface as the target twin, as shown by the twin highlighted in black. Then, the user selects style one for the target twin from the plurality of styles, that is, style one serves as the target style. The digital world interface shown in fig. 9 is a pure 3D interface because no popup window is present. Thus, the terminal device generates a 3D tag for the target twin according to style one and the like.
After the terminal equipment configures the tag for the target twin, the corresponding relation between the tag and the skin ID and the like are sent to the server. In this way, when the terminal device requests to render the target interface, the skin ID of each tag can be quickly determined.
Alternatively, the terminal device may also send the style one, the tag name, etc. to the server, where the server generates a tag for the target twin.
By adopting the scheme, through the editing interface, a user can flexibly configure the style of the tag for each twin body, namely, the skin ID is configured for each twin body, so that the flexibility is high, and the configuration mode is simple.
Optionally, in the above embodiment, the server further generates the configuration table in advance. In the process of generating the configuration table, the server loads the rendering resources of the various states of each skin ID, and for each state of each skin ID, generates a background plate and a leg line according to the corresponding rendering resources. Then, the server determines the 2D display style and the 3D display style of the tag according to the background plate and the leg line, and generates the configuration table according to the rendering resources, the 2D display style, the 3D display style, the width, height and position information of the icon, and the width, height and position information of the AR tag in the different states of each skin ID.
Illustratively, the server loads the rendering resources of the various states of a skin ID's style into the asset (assets) field of the configuration table by state dimension. It then sequentially generates the values of the CSS frame pictures of the background plate and the leg line with a debugger, and configures the generated frame picture values to the background plate (plane) field and line (line) field of the 2D display style configuration, and the background plate (plane) field and line (line) field of the 3D display style configuration, in the configuration table. Then, the server configures the width, height and position information of the icon (icon) and the AR tag (artag) into the configuration table according to the user interface (User Interface, UI) design draft, thereby generating the configuration table.
By adopting the scheme, the server generates the configuration table containing various states of each skin ID in advance, and in the label rendering process, the server sends the configuration table to the terminal equipment, so that the terminal equipment caches the configuration table, and the terminal equipment acquires rendering resources and rendering rules from the configuration table, so that the speed is high and the accuracy is high.
Fig. 10 is another flowchart of a label rendering method according to an embodiment of the present application, where the embodiment includes:
1001. rendering instructions are identified.
1002. In response to the rendering instructions, a first request is sent to the server requesting a skin ID for each tag in the set of tags.
1003. A second request is sent to the server to obtain the configuration table.
The configuration table is used for indicating the skin configuration corresponding to each skin ID, the skin configuration comprises rendering resources and rendering rules, and when the style corresponding to the skin ID comprises multiple states, the skin configuration of the skin ID comprises the rendering resources of each state;
1004. the configuration table is cached.
1005. For each tag in the set of tags, a skin configuration required to render the tag is obtained from the configuration table and the skin ID of the tag.
1006. For each tag in the tag set, the tag type of the tag is determined, when the tag type indicates that the tag is a 2D tag, step 1007 is performed, and when the tag type indicates that the tag is a 3D tag, step 1008 is performed.
1007. And rendering and displaying the 2D label based on a nine-grid rendering mode of the 2D rendering type.
1008. And rendering and displaying the 3D label based on a nine-grid rendering mode of the 3D rendering type.
The following are examples of the apparatus of the present application that may be used to perform the method embodiments of the present application. For details not disclosed in the embodiments of the apparatus of the present application, please refer to the embodiments of the method of the present application.
Fig. 11 is a schematic diagram of a tag rendering apparatus 1100 according to an embodiment of the present application, where the tag rendering apparatus 1100 is integrated on a terminal device, and the tag rendering apparatus 1100 includes an identification module 111, a transceiver module 112, an acquisition module 113, and a processing module 114.
An identification module 111, configured to identify a rendering instruction requesting to render the target interface;
The transceiver module 112 is configured to send a first request to a server in response to the rendering instruction to request a skin ID of each tag in a tag set, where each tag in the tag set is a tag to be displayed on the target interface, different skin IDs correspond to different styles, and the tag set at least includes a 3D tag;
an obtaining module 113, configured to obtain, for each 3D tag in the tag set, a skin configuration required for rendering the 3D tag according to a skin ID of the 3D tag;
And the processing module 114 is configured to, for each 3D tag in the tag set, process the skin configuration according to the size of the 3D tag and the nine-grid rendering mode to render the 3D tag, where the size of the background plate in the processed skin configuration matches the size of the 3D tag.
In a possible implementation, the processing module 114 is configured to determine, for each tag in the tag set, that the tag is a 3D tag when the tag is located outside a popup window on the target interface, and that the tag is a 2D tag when the tag is located inside the popup window.
In a possible implementation manner, the processing module 114 is further configured to, when the tag set further includes 2D tags, convert, for each 2D tag in the tag set, the skin configuration into a frame syntax according to the size of the 2D tag, and mount the frame syntax according to the position of the tag on the target interface, so as to render the 2D tag.
In a possible implementation manner, the processing module 114 is configured to determine a nine-square size of a background plate in the skin configuration according to the size of the 3D label, where the nine-square size is used to indicate a left boundary, a right boundary, an upper boundary and a lower boundary of the nine-square, determine UV coordinates of the nine-square according to the nine-square size, update a vertex array and a coordinate array according to the nine-square size and the UV coordinates, process the background plate according to the updated vertex array and coordinate array so that the size of the background plate is adaptive to the size of the 3D label, and typeset the processed background plate according to a rendering rule in the skin configuration so as to render the 3D label.
In a possible implementation manner, before sending, in response to the rendering instruction, a first request to a server to request a skin ID of each tag in the tag set, the transceiver module 112 is further configured to send, to the server, a second request to obtain a configuration table, where the configuration table is used to indicate a skin configuration corresponding to each skin ID, the skin configuration includes a rendering resource and a rendering rule, the rendering resource includes a background board, and when a pattern corresponding to the skin ID includes multiple states, the skin configuration of the skin ID includes rendering resources of each state;
the processing module 114 is further configured to cache the configuration table.
In a possible implementation manner, after processing the skin configuration for each 3D tag in the tag set according to the size of the 3D tag and the nine-grid rendering mode to render the 3D tag, the processing module 114 is further configured to monitor a state change message, and when the state change message indicates that a 3D tag is switched from a first state to a second state, acquire the rendering resource corresponding to the second state and update the 3D tag according to the rendering resource corresponding to the second state.
In a possible implementation manner, the processing module 114 is further configured to control the terminal device to display an editing interface, where the editing interface displays the target interface and multiple styles, and different styles correspond to different skin IDs, and in response to a first selection operation of a user, identify a target twin selected by the user from twin displayed on the target interface;
The transceiver module 112 is further configured to send a third request to the server, so that the server stores the correspondence between the tag and the skin ID of the tag.
The label rendering device provided by the embodiment of the application can execute the actions of the terminal equipment in the above embodiment, and the implementation principle and the technical effect are similar, and are not repeated here.
Fig. 12 is a schematic diagram of a tag rendering apparatus 1200 according to an embodiment of the present application, where the tag rendering apparatus 1200 is integrated on a server, and the tag rendering apparatus 1200 includes a receiving module 121, a processing module 122, and a transmitting module 123.
A receiving module 121, configured to receive a first request from a terminal device, where the first request is sent after the terminal device identifies a rendering instruction that requests to render a target interface;
The processing module 122 is configured to determine a tag set according to the first request, and skin IDs of each tag in the tag set, where each tag in the tag set is a tag to be displayed on the target interface, different skin IDs correspond to different styles, and the tag set at least includes 3D tags;
And the sending module 123 is configured to send a first response to the terminal device, where the first response is used to indicate the tag set and a skin ID of each tag in the tag set.
In a possible implementation manner, the receiving module 121 is further configured to receive a second request from the terminal device, where the second request is used to request a configuration table, where the configuration table is used to indicate a skin configuration corresponding to each skin ID, where the skin configuration includes rendering resources and rendering rules, and where when the style corresponding to the skin ID includes multiple states, the skin configuration of the skin ID includes rendering resources of each state;
the sending module 123 is further configured to send a second response carrying the configuration table to the terminal device.
In a possible implementation, the processing module 122 is further configured to determine whether a 3D tag that is switched from the first state to the second state exists in the tag set;
The sending module 123 is further configured to send a status change message to the terminal device when a 3D tag whose status changes exists in the tag set, where the status change message is used to indicate that the status of the tag is changed to a second status.
In a possible implementation manner, the processing module 122 is further configured to, before the receiving module 121 receives the second request from the terminal device, load the rendering resources of the various states of each skin ID, generate, for each state of each skin ID, a background plate and a leg line according to the corresponding rendering resources, determine the 2D display style and the 3D display style of the tag according to the background plate and the leg line, and generate the configuration table according to the rendering resources, the 2D display style, the 3D display style, the width, height and position information of the icon, and the width, height and position information of the AR tag in the different states of each skin ID.
In a possible implementation manner, before the receiving module 121 receives the first request from the terminal device, the receiving module is further configured to receive a third request from the terminal device, where the third request carries a correspondence between a tag of the twin on the target interface and a skin ID of the tag;
The processing module 122 is further configured to save the correspondence.
The label rendering device provided by the embodiment of the application can execute the action of the server in the above embodiment, and the implementation principle and the technical effect are similar, and are not repeated here.
Fig. 13 is a schematic structural diagram of an electronic device provided in an embodiment of the present application, where the electronic device is, for example, the terminal device or the server described above. Referring to fig. 13, an electronic device 1300 according to an embodiment of the present application includes at least one processor 131, at least one network interface 134, a user interface 133, a memory 135, and at least one communication bus 132.
Wherein the communication bus 132 is used to enable connected communication between these components.
The user interface 133 may include a Display (Display) or the like, and optionally, the user interface 133 may also include a standard wired interface, a wireless interface, or the like.
The network interface 134 may include, among other things, a standard wired interface, a wireless interface (e.g., WI-FI interface).
Wherein the processor 131 may include one or more processing cores. The processor 131 uses various interfaces and lines to connect various portions of the overall electronic device 1300, and performs various functions of the electronic device 1300 and processes data by running or executing instructions, programs, code sets or instruction sets stored in the memory 135 and invoking data stored in the memory 135. Alternatively, the processor 131 may be implemented in at least one hardware form of digital signal processing (Digital Signal Processing, DSP), field-programmable gate array (Field-Programmable Gate Array, FPGA) and programmable logic array (Programmable Logic Array, PLA). The processor 131 may integrate one or a combination of several of a central processing unit (Central Processing Unit, CPU), a graphics processing unit (Graphics Processing Unit, GPU), a modem, etc. The CPU mainly handles the operating system, the user interface, application programs and the like; the GPU is used for rendering and drawing the content to be displayed by the display screen; and the modem is used for processing wireless communication. It will be appreciated that the modem may also not be integrated into the processor 131 and may be implemented by a single chip.
The memory 135 may include a random access memory (Random Access Memory, RAM) or a read-only memory (Read-Only Memory, ROM). Optionally, the memory 135 includes a non-transitory computer-readable storage medium (non-transitory computer-readable storage medium). The memory 135 may be used to store instructions, programs, code, code sets or instruction sets. The memory 135 may include a program storage area and a data storage area: the program storage area may store instructions for implementing an operating system, instructions for at least one function (e.g., a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the various method embodiments described above, and the like; the data storage area may store data related to the various method embodiments described above, and the like. Optionally, the memory 135 may also be at least one storage device located remotely from the aforementioned processor 131. As shown in fig. 13, the memory 135, as a computer storage medium, may include an operating system, a network communication module, a user interface module, an application program, and the like.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or nonvolatile memory, such as read-only memory (ROM) or flash RAM. Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape/magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the application are to be included in the scope of the claims of the present application.
Claims (13)
1. A label rendering method, characterized by being applied to a terminal device, the method comprising:
Identifying a rendering instruction requesting to render the target interface;
Responding to the rendering instruction, sending a first request to a server to request the skin ID of each label in a label set, wherein each label in the label set is a label to be displayed on the target interface, different skin IDs correspond to different patterns, and the label set at least comprises 3D labels;
For each 3D tag in the tag set, acquiring a skin configuration required for rendering the 3D tag according to a skin ID of the 3D tag;
for each 3D label in the label set, processing the skin configuration according to the size of the 3D label and a nine-grid rendering mode to render the 3D label, wherein the size of a background plate in the processed skin configuration is matched with the size of the 3D label;
The processing the skin configuration according to the size of the 3D tag and the style of nine-grid rendering to render the 3D tag for each 3D tag in the tag set includes:
Determining a nine-grid size of a background plate in the skin configuration according to the size of the 3D label, wherein the nine-grid size is used for indicating a left boundary, a right boundary, an upper boundary and a lower boundary of the nine-grid;
determining UV coordinates of the nine-grid according to the nine-grid size;
updating a vertex array and a coordinate array according to the nine-grid size and the UV coordinates;
Processing the background plate according to the updated vertex array and the updated coordinate array, so that the size of the background plate is self-adaptive to the size of the 3D label;
typesetting the processed background plate according to the rendering rules in the skin configuration to render the 3D label.
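The nine-grid steps of claim 1 can be illustrated with a short sketch. This is not code from the patent: the function name, the fixed-border parameters, and the 4x4 grid layout are assumptions, chosen to show how borders of the background plate keep their texture size while the centre stretches to the label size, yielding the vertex and UV arrays the claim refers to.

```python
# Hypothetical nine-grid (9-patch) computation. The four borders of the
# background plate keep their pixel size; only the centre region stretches
# to fit the 3D label. All names here are illustrative.

def nine_grid_arrays(label_w, label_h, tex_w, tex_h, left, right, top, bottom):
    """Return (vertices, uvs): 16 grid points each, row-major 4x4 layout."""
    # Cut positions in label space: the left/right/top/bottom borders
    # keep their fixed size, so only the middle band depends on the label.
    xs = [0.0, left, label_w - right, label_w]
    ys = [0.0, bottom, label_h - top, label_h]
    # Corresponding cut positions in texture (UV) space, normalized to 0..1.
    us = [0.0, left / tex_w, (tex_w - right) / tex_w, 1.0]
    vs = [0.0, bottom / tex_h, (tex_h - top) / tex_h, 1.0]
    vertices = [(x, y) for y in ys for x in xs]
    uvs = [(u, v) for v in vs for u in us]
    return vertices, uvs
```

Because the UV cuts depend only on the texture, one skin can serve labels of any size: re-running the function with a different `label_w`/`label_h` moves only the vertex positions, never the texture coordinates.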
2. The method of claim 1, wherein for each 3D tag in the set of tags, before processing the skin configuration to render the 3D tag according to the size of the 3D tag and the style of nine-grid rendering, further comprising:
And for each tag in the tag set, determining the tag as a 3D tag when the tag is positioned outside a popup window on the target interface, and determining the tag as a 2D tag when the tag is positioned in the popup window.
3. The method according to claim 1, wherein the method further comprises:
When the tag set further comprises 2D tags, for each 2D tag in the tag set, converting the skin configuration into a frame grammar according to the size of the 2D tag;
and mounting the frame grammar according to the position of the 2D label on the target interface so as to render the label.
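For the 2D branch of claim 3, a minimal illustration of "converting the skin configuration into a frame grammar" might look as follows. The patent does not publish this conversion; the markup shape and the skin field names (`background`, `font_color`) are assumptions used only to show a skin configuration becoming mountable framework markup rather than 3D geometry.

```python
# Hypothetical conversion of a skin configuration into framework markup
# ("frame grammar") for a 2D label shown inside a popup window.
# Field names are illustrative, not taken from the patent.

def to_frame_grammar(skin, text, width, height):
    style = (
        f"width:{width}px;height:{height}px;"
        f"background-image:url({skin['background']});"
        f"color:{skin['font_color']};"
    )
    return f'<div class="label-2d" style="{style}">{text}</div>'
```

The returned string would then be mounted at the 2D label's position on the target interface by the hosting framework.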
4. A method according to any one of claims 1 to 3, wherein before sending a first request to a server to request a skin ID of each tag in a tag set in response to the rendering instruction, further comprising:
Sending a second request to the server to obtain a configuration table, wherein the configuration table is used for indicating skin configuration corresponding to each skin ID, the skin configuration comprises rendering resources and rendering rules, the rendering resources comprise background plates, and when the style corresponding to the skin ID comprises multiple states, the skin configuration of the skin ID comprises rendering resources of each state;
And caching the configuration table.
5. The method of claim 4, wherein for each 3D tag in the tag set, processing the skin configuration according to the size of the 3D tag and a style of nine-grid rendering to render the 3D tag further comprises:
Monitoring a state change message;
When the state change message indicates that the 3D label is switched from a first state to a second state, rendering resources corresponding to the second state are obtained;
and updating the 3D label according to the rendering resource corresponding to the second state.
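The state-switching flow of claims 5 and 8 can be sketched as below. This is an assumed message-handler shape, not the patent's implementation: when a state-change message arrives, the handler looks up the rendering resources for the new state in the cached configuration table and updates the label record in place.

```python
# Hypothetical handler for a state-change message: the terminal swaps the
# 3D label's rendering resources for those of the new state, taken from the
# cached configuration table. Keys ("label_id", "states", ...) are assumed.

def on_state_change(msg, config_table, labels):
    label = labels[msg["label_id"]]
    skin = config_table[label["skin_id"]]
    label["state"] = msg["new_state"]
    # The renderer would re-draw the label from these resources.
    label["resources"] = skin["states"][msg["new_state"]]
    return label
```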
6. The method according to any one of claims 1 to 3, further comprising:
Displaying an editing interface, wherein the editing interface displays the target interface and a plurality of styles, and different styles correspond to different skin IDs;
responding to a first selection operation of a user, and identifying a target twin body selected by the user from twin bodies displayed on the target interface;
Responding to a second selection operation of a user for selecting a target style in the editing interface, and configuring a label of the target style for the target twin;
And sending a third request to the server so that the server stores the corresponding relation between the label and the skin ID of the label.
7. A label rendering method, characterized by being applied to a server, the method comprising:
receiving a first request from terminal equipment, wherein the first request is sent after the terminal equipment identifies a rendering instruction for requesting to render a target interface;
determining a label set and skin IDs of all labels in the label set according to the first request, wherein each label in the label set is a label to be displayed on the target interface, different skin IDs correspond to different patterns, and the label set at least comprises 3D labels;
Sending a first response to the terminal equipment, wherein the first response is used for indicating the tag set and the skin IDs of all tags in the tag set;
Receiving a second request from a terminal device, wherein the second request is used for requesting a configuration table, the configuration table is used for indicating skin configuration corresponding to each skin ID, the skin configuration comprises rendering resources and rendering rules, and when a pattern corresponding to the skin ID comprises multiple states, the skin configuration of the skin ID comprises rendering resources of each state;
sending a second response carrying the configuration table to the terminal equipment;
before receiving the second request from the terminal device, the method further includes:
loading rendering resources under various states of each skin ID;
for each state of each skin ID, generating a background plate and a foot line according to the corresponding rendering resources;
Determining a 2D display style and a 3D display style of the tag according to the background plate and the foot line;
Generating the configuration table according to the rendering resources, the 2D display style, the 3D display style, the height-and-width position information of the icon, and the height-and-width position information indicated by AR in the different states of each skin ID.
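The server-side preprocessing of claim 7 — assembling the configuration table from per-state rendering resources — could be sketched as follows. The data layout and field names (`background_plate`, `foot_line`, `style_2d`, `style_3d`) are assumptions; the sketch only shows one table entry being built per skin ID, with one resource record per state.

```python
# Hypothetical assembly of the configuration table: for every state of every
# skin ID, the server packages the rendering resources together with derived
# 2D and 3D display styles. All key names are illustrative.

def build_config_table(skins):
    table = {}
    for skin_id, states in skins.items():
        entry = {"states": {}}
        for state, res in states.items():
            entry["states"][state] = {
                "background_plate": res["background"],
                "foot_line": res["foot_line"],
                # 2D style from icon pixel dimensions; 3D style from
                # the AR-indicated height-and-width information.
                "style_2d": {"w": res["icon_w"], "h": res["icon_h"]},
                "style_3d": {"w": res["ar_w"], "h": res["ar_h"]},
            }
        table[skin_id] = entry
    return table
```

Building the table once, before any terminal sends its second request, lets every terminal cache identical skin configurations and resolve a skin ID locally.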
8. The method as recited in claim 7, further comprising:
Determining whether a 3D tag switched from a first state to a second state exists in the tag set;
And when a tag with a changed state exists in the tag set, sending a state change message to the terminal equipment, wherein the state change message is used for indicating that the state of the 3D tag has changed to the second state.
9. The method according to claim 7 or 8, characterized in that before said receiving the first request from the terminal device, further comprises:
Receiving a third request from the terminal equipment, wherein the third request carries the corresponding relation between the label of the twin body on the target interface and the skin ID of the label;
and storing the corresponding relation.
10. A label rendering apparatus, comprising:
the identification module is used for identifying a rendering instruction for requesting to render the target interface;
The receiving and transmitting module is used for responding to the rendering instruction, sending a first request to a server to request the skin ID of each label in a label set, wherein each label in the label set is a label to be displayed on the target interface, different skin IDs correspond to different patterns, and the label set at least comprises 3D labels;
the acquisition module is used for acquiring the skin configuration required by rendering the 3D label according to the skin ID of the 3D label for each 3D label in the label set;
The processing module is used for, for each 3D label in the label set, processing the skin configuration according to the size of the 3D label and a nine-grid rendering mode to render the 3D label, wherein the size of a background plate in the processed skin configuration is matched with the size of the 3D label;
The processing module is specifically configured to: determine a nine-grid size of a background plate in the skin configuration according to the size of the 3D label, wherein the nine-grid size is used for indicating a left boundary, a right boundary, an upper boundary and a lower boundary of the nine-grid; determine UV coordinates of the nine-grid according to the nine-grid size; update a vertex array and a coordinate array according to the nine-grid size and the UV coordinates; process the background plate according to the updated vertex array and the updated coordinate array, so that the size of the background plate is adaptive to the size of the 3D label; and typeset the processed background plate according to the rendering rule in the skin configuration to render the 3D label.
11. A label rendering apparatus, the apparatus being integrated on a server, the apparatus comprising:
The receiving module is used for receiving a first request from the terminal equipment, wherein the first request is sent after the terminal equipment identifies a rendering instruction for requesting to render the target interface;
The processing module is used for determining a label set and skin IDs of all labels in the label set according to the first request, wherein each label in the label set is a label to be displayed on the target interface, different skin IDs correspond to different patterns, and the label set at least comprises 3D labels;
the sending module is used for sending a first response to the terminal equipment, wherein the first response is used for indicating the tag set and the skin IDs of the tags in the tag set;
The receiving module is further configured to receive a second request from the terminal device, where the second request is used to request a configuration table, where the configuration table is used to indicate skin configurations corresponding to each skin ID, the skin configurations include rendering resources and rendering rules, and when the style corresponding to the skin ID includes multiple states, the skin configurations of the skin ID include rendering resources of each state;
the sending module is further configured to send a second response carrying the configuration table to the terminal device;
The processing module is further configured to: before the receiving module receives the second request from the terminal device, load the rendering resources in the various states of each skin ID; for each state of each skin ID, generate a background plate and a foot line according to the corresponding rendering resources; determine a 2D display style and a 3D display style of the tag according to the background plate and the foot line; and generate the configuration table according to the rendering resources, the 2D display style, the 3D display style, the height-and-width position information of the icon, and the height-and-width position information indicated by AR in the different states of each skin ID.
12. An electronic device comprising a processor and a memory, wherein the memory stores a computer program adapted to be loaded by the processor and to perform the method steps of any of claims 1-9.
13. A computer storage medium storing a plurality of instructions adapted to be loaded by a processor and to perform the method steps of any one of claims 1 to 9.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202410073331.1A CN117876639B (en) | 2024-01-17 | 2024-01-17 | Label rendering method, device, equipment and readable storage medium |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202410073331.1A CN117876639B (en) | 2024-01-17 | 2024-01-17 | Label rendering method, device, equipment and readable storage medium |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN117876639A CN117876639A (en) | 2024-04-12 |
| CN117876639B true CN117876639B (en) | 2024-12-24 |
Family
ID=90588212
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202410073331.1A Active CN117876639B (en) | 2024-01-17 | 2024-01-17 | Label rendering method, device, equipment and readable storage medium |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN117876639B (en) |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN115311395A (en) * | 2021-04-20 | 2022-11-08 | 华为云计算技术有限公司 | Three-dimensional scene rendering method, device and equipment |
| CN115965731A (en) * | 2022-12-02 | 2023-04-14 | 达闼科技(北京)有限公司 | Rendering interaction method, device, terminal, server, storage medium and product |
Family Cites Families (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| RU2678077C2 (en) * | 2017-05-04 | 2019-01-23 | Общество С Ограниченной Ответственностью "Яндекс" | Method for drawing search results on map displayed on electronic device |
| CN110866204B (en) * | 2018-08-10 | 2023-04-11 | 阿里巴巴集团控股有限公司 | Page processing method and device |
| CN111061414A (en) * | 2019-11-28 | 2020-04-24 | 北京奇艺世纪科技有限公司 | Skin replacement method and device, electronic equipment and readable storage medium |
| CN111443910A (en) * | 2020-03-24 | 2020-07-24 | 北京奇艺世纪科技有限公司 | Skin rendering method and device, electronic equipment and computer storage medium |
| CN113656518A (en) * | 2020-05-12 | 2021-11-16 | 奇安信科技集团股份有限公司 | Map rendering method and device, electronic equipment and storage medium |
| CN111833454B (en) * | 2020-06-30 | 2023-11-28 | 北京市商汤科技开发有限公司 | Display method, device, equipment and computer readable storage medium |
| CN116402937A (en) * | 2023-03-30 | 2023-07-07 | 中国舰船研究设计中心 | A simplified development method for 3D visualization of complex data based on web |
- 2024-01-17 CN CN202410073331.1A patent/CN117876639B/en active Active
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN115311395A (en) * | 2021-04-20 | 2022-11-08 | 华为云计算技术有限公司 | Three-dimensional scene rendering method, device and equipment |
| CN115965731A (en) * | 2022-12-02 | 2023-04-14 | 达闼科技(北京)有限公司 | Rendering interaction method, device, terminal, server, storage medium and product |
Also Published As
| Publication number | Publication date |
|---|---|
| CN117876639A (en) | 2024-04-12 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US9720658B2 (en) | Application creation method and apparatus | |
| US8711147B2 (en) | Method and system for generating and displaying an interactive dynamic graph view of multiply connected objects | |
| US8682964B1 (en) | Progressively loading network content | |
| US9886465B2 (en) | System and method for rendering of hierarchical data structures | |
| CN110968314B (en) | Page generation method and device | |
| EP2711846A1 (en) | Method and device for processing template file | |
| WO2016101754A1 (en) | Method and device for web page switching, and device for providing integrated page | |
| JP6975339B2 (en) | Backdrop rendering of digital components | |
| CN109857964B (en) | Thermodynamic diagram drawing method and device for page operation, storage medium and processor | |
| CN104850388A (en) | Method and apparatus for drafting webpage | |
| CN108228816A (en) | A kind of loading method and device of waterfall flow graph piece | |
| US20170097749A1 (en) | Integrating applications in a portal | |
| CN112150603A (en) | Initial visual angle control and presentation method and system based on three-dimensional point cloud | |
| US9646362B2 (en) | Algorithm for improved zooming in data visualization components | |
| CN116645493A (en) | Method, device and medium for presenting augmented reality data | |
| CN117093386B (en) | Page screenshot method, device, computer equipment and storage medium | |
| CN116010736B (en) | Methods, devices, equipment, and storage media for processing vector icons. | |
| CN117876639B (en) | Label rendering method, device, equipment and readable storage medium | |
| CN113918267B (en) | Map interaction method and device, electronic equipment and storage medium | |
| CN110020320A (en) | The method and apparatus for caching page pictures | |
| CN116775916A (en) | Multimedia content playing method, device, equipment and storage medium | |
| CN106155455B (en) | Object control method and device in interface | |
| CN103970763A (en) | Three-dimensional image displaying system and method | |
| CN116627417A (en) | Data rendering method and device, electronic equipment and storage medium | |
| CN112286576A (en) | Cross-platform rendering method, client and medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||
| GR01 | Patent grant |