Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be clearly and completely described below with reference to specific embodiments of the present application and corresponding drawings. It will be apparent that the described embodiments are only some, but not all, embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
It should be noted that the user information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to data for analysis, stored data, presented data, etc.) related to the present application are information and data authorized by the user or fully authorized by each party. The collection, use and processing of the related data must comply with the relevant laws, regulations and standards of the relevant countries and regions, and corresponding operation entries must be provided for the user to choose to authorize or refuse.
Terms or concepts related to the embodiments of the present application are explained below.
Conventional container - a container that implements runtime isolation using Control Group (Cgroup) and Namespace techniques, characterized by sharing the kernel with the host. Cgroup is a mechanism provided by the Operating System (OS) kernel for managing and controlling the resources of a process group; it can limit, record and isolate the physical resources used by the process group. Physical resources may include processor resources, memory resources, disk Input/Output (I/O) resources, and the like. Cgroup allows a system administrator or process scheduler to assign processes to different resource control groups (Cgroups) and then set resource-specific limits, priorities, accounting, etc. for those groups. Cgroup is one of the core components of container technology: it improves resource utilization and isolation in server and container environments, and is one of the foundations of lightweight virtualization technology.
Secure container - an enhanced container technology that provides a higher level of security isolation than conventional containers by introducing a virtualization layer. A secure container typically isolates the container's running environment completely from the host kernel through a lightweight virtualization isolation mechanism. This isolation ensures that applications within the container do not directly affect the operation of the host and other containers, reducing the risk of security vulnerabilities. Instead of sharing the kernel with the host, a secure container uses a separate virtualized kernel, thereby preventing potential kernel-level attacks. Each container has its own virtualized kernel instance, so that even if a security event occurs inside a container, the host and other containers are not affected.
Shared secure container - a secure container that adopts an unbound CPU scheduling mode: each virtual CPU (Virtual CPU, vCPU) is randomly allocated to any idle physical CPU, and different vCPUs may contend for physical CPU resources.
Flexible CPU bandwidth control - accumulates the CPU time left unused in each scheduling period while the CPU is relatively idle, and spends the accumulated time during high-load periods, thereby exceeding the originally set processor Quota limit and improving service quality. This approach is significant for CPU-bursty tasks whose average CPU utilization is low but whose CPU utilization is very high in some periods.
CPU Quota - a processor quota policy used to limit the total amount of time a container can use a processor (such as a CPU) within one period. When the container's Limit sets processor resources, a processor quota policy (e.g., cpu.quota) may be set in the resource control group (Cgroup) to limit the container's processor usage. Specifically, the container's Limit may be translated into a processor quota (e.g., cpu.quota) in the Cgroup, thereby limiting the time the container may use the processor in a given period.
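As a minimal sketch of this Limit-to-quota translation (the helper name is illustrative, not an actual Cgroup API), the conversion can be expressed as:

```python
def limit_to_quota(limit_cpu: float, period_us: int = 100_000) -> int:
    """Translate a container CPU limit (in cores) into a Cgroup quota.

    The container may use at most the returned number of microseconds of
    CPU time in each scheduling period of `period_us` microseconds.
    """
    return int(limit_cpu * period_us)

# A container limited to 4 CPUs gets 400,000 us of CPU time per
# 100,000 us period, i.e. it may keep at most 4 cores fully busy.
quota = limit_to_quota(4)
```

The default period of 100,000 us mirrors the common Cgroup scheduling period; fractional limits (e.g. 0.5 CPU) translate to a quota smaller than one period.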
The reason why the secure container cannot use the flexible bandwidth control technique will be explained below.
The traditional container is excellent in resource management and scheduling, but because it shares the kernel with the host, once it is maliciously compromised, an attacker can escape directly to the host kernel, affecting all containers on the host and causing serious security risks. Thus, for services with higher security requirements, the current mainstream trend is to migrate to secure containers. The secure container isolates the container's kernel from the host's kernel by introducing a lightweight virtualization layer, preventing the container and the host from affecting each other. This isolation mechanism significantly improves security, but also presents new challenges.
In a secure container, each core of the virtual processor corresponds to a process on the host, so the tasks running within the container are limited by the number of virtual processor cores. This means that the container specification represents not only the upper limit of processor resources that the container can use, but also the number of virtual processor cores of the container. The cores of the virtual processor are referred to as logical cores; one physical core may be virtualized as one or more (i.e., two or greater) logical cores.
Some container cluster management systems, such as Kubernetes (K8s), allocate and control a container's CPU resources using processor (e.g., CPU) request (request.cpu) and limit (limit.cpu) parameters. request.cpu represents the container's allocation weight and is the minimum processor resource guarantee for the container, i.e., the minimum amount of CPU resources the container expects to obtain. limit.cpu represents a hard upper limit on the CPU time the container can use. For example, request.cpu=1 and limit.cpu=4 indicate that the container can be scheduled when there is more than 1 share of idle CPU time on the host, but the CPU time available to the container does not exceed 4 shares. The number of virtual processor (vCPU) cores, which may also be referred to as the number of virtual processors, is determined from these two parameters, request.cpu and limit.cpu, when creating a secure container. The calculation is as follows:
nr_vcpu = max(request.cpu, limit.cpu)    (1)
where nr_vcpu represents the number of virtual processor cores and max(request.cpu, limit.cpu) represents the maximum of the request (request.cpu) and the limit (limit.cpu).
Further, a control plane of the secure container, such as a Runtime component, may create a virtual machine based on this calculation, where the number of virtual processor cores equals nr_vcpu and each virtual processor core corresponds to a process on the host. The Runtime component is responsible for managing the lifecycle of containers, including creating, starting, stopping and destroying them. A container is a stand-alone operating environment created from the container image when the container runs.
The secure container control plane then sets a processor Quota for the secure container, limiting the upper limit of processor (e.g., CPU) time it can use, as calculated by the following formula:
cpu.quota = nr_vcpu * cpu.period    (2)
where cpu.quota represents the upper limit of processor time the container can use, and cpu.period represents the CPU scheduling Period (Period), one period being equal to one unit of CPU time.
Combining the two formulas: when the secure container control plane (such as a runtime component) creates the secure container, it directly maps the processor Quota one-to-one onto the virtual processor cores. This means the unused CPU time accumulated while the secure container's CPU is relatively idle cannot be consumed when the CPU is busy: since there is no spare virtual processor core to consume the previously accumulated unused CPU time while the secure container's CPU is busy, the flexible CPU bandwidth control technique cannot take effect in the secure container. However, the average CPU utilization of secure containers is typically not high, and CPU-burst loads whose CPU utilization is high in some periods are ubiquitous. Therefore, to improve the service quality of CPU-burst loads in the secure container, the secure container needs to support the flexible CPU bandwidth control technology.
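A simplified numeric illustration of this limitation (function names are illustrative): under formula (2), the quota exactly equals the physical ceiling imposed by the vCPU count, so even if accumulated burst time were credited, there is no spare core on which to spend it.

```python
def quota_us(nr_vcpu: int, period_us: int = 100_000) -> int:
    # Formula (2): cpu.quota = nr_vcpu * cpu.period
    return nr_vcpu * period_us

def max_consumable_us(nr_vcpu: int, period_us: int = 100_000) -> int:
    # Each vCPU is one host process, so even if extra (accumulated burst)
    # quota were available, the container could physically consume at most
    # nr_vcpu * period_us of CPU time per period.
    return nr_vcpu * period_us

# The quota exactly equals the ceiling imposed by the vCPU count, leaving
# no headroom in which previously accumulated burst time could be spent.
headroom = max_consumable_us(4) - quota_us(4)
```

With zero headroom, flexible bandwidth control is inert in a conventionally-sized secure container; this is the gap the expansion mechanism below addresses.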
In some embodiments of the present application, flexible processor bandwidth control is implemented in a secure container by expanding the virtual processor cores allocated to a conventional secure container. This breaks the limitation that the limited use duration (Quota) and the virtual processor cores must map one-to-one in a conventional secure container. When the secure container is under high load, the expanded virtual processor cores are used to consume the unconsumed duration accumulated in historical scheduling periods, thereby implementing flexible processor bandwidth control, which helps meet burst load requirements and improves the service quality of the secure container.
The following describes in detail the technical solutions provided by the embodiments of the present application with reference to the accompanying drawings.
It should be noted that like reference numerals refer to like objects in the following figures and embodiments, and thus once an object is defined in one figure or embodiment, no further discussion thereof is required in the subsequent figures and embodiments.
Fig. 1a is a flow chart of a method for creating a container according to an embodiment of the present application. As shown in fig. 1a, the method mainly comprises the following steps:
101. A container creation request is obtained, the container creation request being for requesting creation of a secure container, the container creation request including a resource request amount and a resource restriction amount.
102. The number of virtual processor cores of the secure container is determined as a first number based on the resource request amount and the resource limit amount.
103. The first number is expanded to obtain a target number.
104. A virtual machine instance is created on the host machine based on the target number of virtual processor cores.
105. A secure container is created on the virtual machine instance, so that when the load of the secure container's virtual processor satisfies a high load condition, tasks are performed with the target number of virtual processor cores consuming the unconsumed processor time previously accumulated by the secure container.
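Steps 101-105 can be sketched as follows; this is a minimal illustration with hypothetical names, not an actual runtime API, assuming the multiplicative expansion described later:

```python
from dataclasses import dataclass

@dataclass
class VmInstance:
    nr_vcpu: int

def create_secure_container(request_cpu: float, limit_cpu: float,
                            burst_vcpu_factor: int = 2,
                            period_us: int = 100_000):
    # Step 102: standard vCPU count (first number), formula (1).
    nr_vcpu = int(max(request_cpu, limit_cpu))
    # Step 103: expand to the target number, formula (3).
    nr_burst_vcpu = nr_vcpu * burst_vcpu_factor
    # Step 104: create a VM instance with the expanded core count.
    vm = VmInstance(nr_vcpu=nr_burst_vcpu)
    # Step 105: the quota still reflects the *standard* count, formula (2),
    # so the extra cores only serve to consume accumulated burst time.
    cpu_quota = nr_vcpu * period_us
    return vm, cpu_quota

vm, quota = create_secure_container(request_cpu=1, limit_cpu=4)
```

The key design point is that the quota is computed from the standard count while the VM receives the expanded count, so the extra cores create headroom without raising the steady-state limit.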
In an embodiment of the application, a container creation Request is used to request creation of a secure container and includes a resource Request amount (Request) and a resource Limit amount (Limit). The resource Request amount (Request) is the minimum resource guarantee of the secure container, i.e., the minimum amount of resources the container expects to obtain; the scheduler allocates sufficient resources to the secure container based on this amount. The resource Limit amount (Limit) determines the maximum resources the secure container can use; the container's resource usage must not exceed this upper limit. The resource request amount includes the aforementioned processor request (request.cpu), and the resource limit amount includes a processor limit (e.g., limit.cpu).
In some embodiments, the container creation request may include a configuration file of the secure container to be created, such as a yaml file. The yaml file includes the aforementioned resource request amount and resource limitation amount. A yaml file provides a compact way to describe the configuration data structure of a secure container and is well suited for configuring and managing containers.
After the container creation request is obtained in step 101, the number of virtual processor cores of the secure container may be determined based on the resource request amount and the resource limitation amount in step 102. Specifically, the maximum of the processor request and the processor limit may be selected as the virtual processor core number of the secure container, that is, nr_vcpu = max(request.cpu, limit.cpu). The virtual processor core number determined here may be referred to as the standard virtual processor core number of the secure container.
In this embodiment, to enable the secure container to support the flexible processor bandwidth control technique, the standard virtual processor core number of the secure container may be extended, and the extended virtual processor cores may be used, when the secure container is under high load, to consume the processor time accumulated while the processor of the secure container was relatively idle. Thus, in step 103, the number of virtual processor cores of the secure container may be expanded to obtain the target number, i.e., the number of virtual processor cores after expansion. In the embodiments of the present application, for convenience of description and distinction, the standard virtual processor core number determined in step 102 is defined as the first number, and the number of virtual processor cores added by the expansion is defined as the second number; the target number then equals the first number plus the second number.
In the embodiments of the present application, the specific manner of expanding the first number is not limited. In some embodiments, a user of the secure container may specify an expansion multiple (burst_vcpu_factor). Accordingly, the container creation request may include the expansion multiple. The expansion multiple may also be located in a configuration file of the secure container, such as a yaml file; that is, a parameter corresponding to the expansion multiple is added to the configuration file, and the user can specify the expansion multiple by assigning a value to this parameter. The expansion multiple is greater than or equal to 1 and is typically an integer. When the expansion multiple equals 1, the secure container is an ordinary secure container and cannot use the elastic processor bandwidth control technology, so the embodiments of the present application mainly address expansion multiples greater than 1. Based on the expansion multiple, step 103 may be implemented by expanding the first number according to the expansion multiple to obtain the target number; specifically, the product of the first number and the expansion multiple may be taken as the target number. Accordingly, the target number may be expressed as:
nr_burst_vcpu = nr_vcpu * burst_vcpu_factor, burst_vcpu_factor ≥ 1    (3)
where nr_burst_vcpu represents the target number of virtual processor cores and burst_vcpu_factor represents the expansion multiple.
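Formula (3) with its validity constraint can be sketched as a small helper (the function name is illustrative):

```python
def expand_vcpus(nr_vcpu: int, burst_vcpu_factor: int) -> int:
    # Formula (3): nr_burst_vcpu = nr_vcpu * burst_vcpu_factor,
    # with the constraint burst_vcpu_factor >= 1.
    if burst_vcpu_factor < 1:
        raise ValueError("burst_vcpu_factor must be >= 1")
    # factor == 1 degenerates to an ordinary secure container with no
    # headroom for elastic bandwidth control.
    return nr_vcpu * burst_vcpu_factor
```

Only factors greater than 1 produce the spare logical cores that the scheduling method later exploits.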
In this embodiment, whether to create the secure container using the container creation method provided by the embodiments of the present application may be determined according to whether the configuration file of the secure container carries the parameter corresponding to the expansion multiple (e.g., burst_vcpu_factor).
In other embodiments, a set number may be added to the first number to obtain the target number, or the first number may be multiplied by a set multiple (greater than 1) to obtain the target number. In these embodiments, a parameter indicating whether to create the secure container using the container creation method provided by the embodiments of the present application may be added to the configuration file of the secure container. If the value of the parameter indicates that this method should be used, the secure container is created accordingly; if the value indicates otherwise, the secure container is created using the conventional secure container creation method.
Further, in step 104, a virtual machine instance may be created on the host according to the target number of virtual processor cores. Specifically, Kernel-based Virtual Machine (KVM) technology may be used to create a virtual machine on the host and configure it with the target number of virtual processor cores and other resources, including but not limited to memory resources, persistent storage resources and network resources, to obtain the virtual machine instance. KVM is a virtualization technology under an operating system (such as a Linux system) that turns the operating system into a virtual machine monitor, allowing the host to run multiple isolated virtual environments.
Further, in step 105, a secure container may be created on the virtual machine instance such that the virtual processor of the secure container has a target number of virtual processor cores. Creating a secure container on a virtual machine instance may run a separate kernel for the secure container to ensure the reliability of the secure container.
Specifically, a container image of a secure container to be created may be loaded on a virtual machine instance, and the secure container may be created on the virtual machine instance based on the container image. Specifically, code in the container image may be run on a virtual machine instance created by the KVM, thereby creating a secure container on the virtual machine instance such that the secure container runs on a separate kernel created by the KVM.
It should be noted that the secure container in the embodiments of the present application may be a shared secure container or an exclusive secure container. In a shared secure container, each virtual processor core is a process, virtual processor cores are not bound to physical processor cores, and the upper limit of processor time available to the secure container is limited by the processor quota (cpu.quota); therefore, the waste of processor resources is low, or even zero. In an exclusive secure container, each virtual processor core is bound to one physical processor core, and the processor time scheduling method provided by the embodiments of the present application would introduce higher processor resource waste if applied to an exclusive secure container. Therefore, the secure container in the embodiments of the present application mainly refers to a shared secure container. In some embodiments, in addition to obtaining the resource request amount, the resource limitation amount and the expansion multiple from the configuration file of the secure container, the secure container may be configured as shared or exclusive in that configuration file. Accordingly, as shown in fig. 1b, whether the secure container is a shared secure container may also be obtained from the configuration file. If it is, the container creation method provided by the embodiments of the present application is used to create the secure container: the number of virtual processor cores of the secure container, i.e., the first number, is determined according to the resource request amount and the resource limitation amount as shown in fig. 1b. Specifically, the above formula (1) can be used for this determination.
Further, the number of virtual processor cores of the secure container may be extended using the foregoing formula (3) to obtain the target number of virtual processor cores, and the above formula (2) may be used to determine the limited use duration (e.g., cpu.quota) of the secure container. A secure container may then be created based on the target number of virtual processor cores, with its limited use duration (e.g., cpu.quota) set accordingly. Conversely, if the secure container is an exclusive secure container, it may be created using the conventional secure container creation method described above. Specifically, the number of virtual processor cores, i.e., the first number, may be determined using formula (1), and the limited use duration (e.g., cpu.quota) may be determined using formula (2); the secure container is then created based on the first number of virtual processor cores, with its limited use duration (e.g., cpu.quota) set accordingly.
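The branch of Fig. 1b can be sketched as follows (a minimal illustration with hypothetical names; only the shared path receives the expanded core count):

```python
def plan_vcpus(request_cpu: float, limit_cpu: float, shared: bool,
               burst_vcpu_factor: int = 2, period_us: int = 100_000):
    """Return (core count, cpu.quota) for a secure container.

    Shared containers get the expanded target number of cores;
    exclusive containers keep the standard (first) number.
    """
    nr_vcpu = int(max(request_cpu, limit_cpu))   # formula (1)
    cpu_quota = nr_vcpu * period_us              # formulas (2)/(4)
    nr_cores = nr_vcpu * burst_vcpu_factor if shared else nr_vcpu  # formula (3)
    return nr_cores, cpu_quota
```

Note that cpu.quota is identical in both branches; the two container types differ only in whether spare cores exist to spend accumulated time.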
The created shared secure container thus has the target number of virtual processor cores, where the target number is larger than the number of virtual processor cores determined according to the resource request amount and the resource limit amount. This provides the precondition for using flexible processor bandwidth control when the load of the virtual processor of the secure container satisfies the high load condition.
In some embodiments, the secure container may support the flexible processor bandwidth control technique by default. In other embodiments, the flexible processor bandwidth control technique may be enabled in the secure container after it is created. Because the secure container introduces a virtualization layer, it is isolated from the host kernel; therefore, the flexible processor bandwidth control technique must be enabled synchronously on both the secure container side and the host side for it to take effect in the secure container.
Specifically, a command line (cmdline) parameter may be added to the configuration file (e.g., yaml file) of the secure container to instruct the guest operating system (Guest OS) to enable the flexible processor bandwidth control technique. The guest operating system is the operating system of the virtual machine instance where the secure container is located. Command line (cmdline) parameters are configuration options passed to the kernel or program at startup. When the secure container starts, the flexible bandwidth control technique may be enabled through command line parameters. For example, a command line such as "elastic_cpu=on" may be added to the configuration file of the secure container to instruct the guest operating system to start the elastic processor bandwidth control function. When creating the secure container, the command line is passed to the guest operating system through a container management tool (e.g., KVM). The guest operating system may start the elastic processor bandwidth control function based on the command line, enabling the function at the secure container level.
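A configuration of this kind might look like the following sketch. The field and annotation names here are hypothetical and purely illustrative (they are not a documented API of any particular runtime); only the resources section and the "elastic_cpu=on" value come from the description above.

```yaml
# Illustrative only: annotation names are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  annotations:
    burst_vcpu_factor: "2"            # expansion multiple (> 1)
    kernel_cmdline: "elastic_cpu=on"  # passed to the guest kernel at boot
spec:
  containers:
  - name: app
    resources:
      requests:
        cpu: "1"     # request.cpu
      limits:
        cpu: "4"     # limit.cpu
```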
At the host level, this is implemented through the secure container's virtual processor core expansion mechanism. Specifically, during creation of the secure container, the host's operating system may enable the flexible processor bandwidth control function via a command line parameter.
After the flexible processor bandwidth control function is enabled at both the secure container level and the host level, flexible processor bandwidth control can take effect through the expanded target number of virtual processor cores. The following specifically describes the process of implementing elastic processor bandwidth control using the processor time scheduling method provided by the embodiments of the present application.
Fig. 2 is a flowchart of a processor time scheduling method according to an embodiment of the present application. As shown in fig. 2, the method mainly comprises the following steps:
201. When the load of the virtual processor of the secure container satisfies the high load condition, the unconsumed first duration accumulated by the virtual processor of the secure container in the historical scheduling periods is obtained.
The virtual processor is provided with a target number of virtual processor cores, wherein the target number is obtained by expanding the virtual processor cores on the basis of a first number, and the first number is determined according to the resource request amount and the resource limitation amount of the security container. The determination of the first number and the target number may be referred to in the foregoing embodiment, and will not be described herein.
202. The task is performed using the physical processor by having the target number of virtual processor cores consume the first duration.
In this embodiment, for any scheduling period X, the target number of virtual processor cores may be invoked to execute the tasks of scheduling period X, and the total time T_X consumed by the target number of virtual processor cores to complete those tasks is counted. If this total time is smaller than the limited use duration, the time not consumed by the target number of virtual processor cores in scheduling period X is added to the previously accumulated unconsumed time, yielding the unconsumed accumulated duration corresponding to scheduling period X.
Here, the limited use duration refers to the upper limit of time for which the secure container can use the physical processor, which equals the first number multiplied by the scheduling period, namely:
cpu.quota = nr_vcpu * cpu.period    (4)
where cpu.quota refers to the upper limit of processor time the secure container can use, i.e., the limited use duration.
The time t_X not consumed by the target number of virtual processor cores in scheduling period X equals the limited use duration minus the total time T_X consumed to complete the tasks of scheduling period X, i.e., t_X = cpu.quota - T_X. Accordingly, the unconsumed accumulated duration S_X corresponding to scheduling period X can be expressed as S_X = Σ_{i=1}^{X} t_i, where i denotes the i-th scheduling period and t_i represents the time the target number of virtual processor cores did not consume in the i-th scheduling period.
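The per-period accumulation described above can be sketched as follows (an illustrative function, crediting only under-quota periods):

```python
def accumulate_unconsumed(consumed_per_period, cpu_quota):
    """Sum the unconsumed time t_X = cpu.quota - T_X over scheduling
    periods in which the container stayed under its quota."""
    accumulated = 0
    for T_X in consumed_per_period:
        if T_X < cpu_quota:
            accumulated += cpu_quota - T_X  # credit the unconsumed time
    return accumulated

# With a quota of 400,000 us per period: two under-quota periods
# (100,000 and 250,000 us consumed) and one fully-used period leave a
# burst balance of 300,000 + 150,000 = 450,000 us.
balance = accumulate_unconsumed([100_000, 250_000, 400_000], 400_000)
```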
Accordingly, if the total time consumed by the target number of virtual processor cores to complete the tasks of scheduling period X is greater than or equal to the limited use duration, it is determined that the load of the virtual processor of the secure container satisfies the set high load condition, and the processor time scheduling method provided by the embodiments of the present application is used in the next scheduling period (X+1).
For the previous scheduling period (k-1) of the current scheduling period k: if the total time consumed by the target number of virtual processor cores to complete the tasks in the previous scheduling period (k-1) reaches the limited use duration corresponding to the secure container, it is determined that the load of the virtual processor of the secure container satisfies the set high load condition. Accordingly, in this embodiment, the set high load condition is that the total time consumed by the target number of virtual processor cores to complete the tasks in the previous scheduling period (k-1) reaches the limited use duration corresponding to the secure container.
In other embodiments, the processor utilization of the secure container may also be monitored; processor utilization reflects the processor load level. If the processor utilization of the secure container is greater than or equal to a set utilization threshold, it is determined that the load of the virtual processor of the secure container satisfies the set high load condition. Accordingly, in this embodiment, the set high load condition is that the processor utilization of the secure container is greater than or equal to the set utilization threshold.
Further, the processor time scheduling method provided by the embodiments of the present application may be used in the current scheduling period k. Specifically, the unconsumed duration accumulated by the virtual processor of the secure container over the historical scheduling periods may be obtained. In the embodiments of the present application, a scheduling period before the current scheduling period is referred to as a historical scheduling period. If the current scheduling period k is the first scheduling period after the load of the virtual processor of the secure container is detected to satisfy the set high load condition, the unconsumed duration accumulated by the virtual processor of the secure container over the historical scheduling periods is equal to the unconsumed accumulated duration corresponding to the (k-1)-th scheduling period.
If the current scheduling period k is any later scheduling period after the load of the virtual processor of the secure container is detected to satisfy the set high load condition, the unconsumed duration accumulated by the virtual processor over the historical scheduling periods equals the unconsumed accumulated duration corresponding to the scheduling period in which the high load condition was detected, minus the duration consumed beyond the limited use duration (e.g., cpu.quota) of the secure container in the scheduling periods after the high load condition was detected. The limited use duration (e.g., cpu.quota) refers to the upper limit of time for which the secure container can use the physical processor.
For example, assume the scheduling period in which the load of the virtual processor of the secure container is detected to satisfy the set high load condition is the (k-j)-th scheduling period, and denote the unconsumed accumulated duration corresponding to the (k-j)-th scheduling period as S_{k-j}. The unconsumed duration accumulated by the virtual processor of the secure container over the historical scheduling periods is then equal to S_{k-j} - Σ_{i=k-j+1}^{k-1} e_i, where e_i denotes the length of time by which the i-th scheduling period's consumption exceeded the limited use duration (e.g., cpu.quota) corresponding to the secure container.
Further, the target number of virtual processor cores may execute tasks using the physical processor by consuming the unconsumed duration accumulated by the virtual processor over the historical scheduling periods.
In this embodiment, by expanding the logical cores (i.e., virtual processor cores) allocated to the conventional secure container, the limitation that the limited use duration (Quota) and the virtual processor cores must correspond one to one in a conventional secure container is broken. When the secure container is under high load, the expanded virtual processor cores (i.e., the expanded logical cores) consume the unconsumed duration accumulated over the historical scheduling periods, thereby realizing elastic processor bandwidth control, which is beneficial to meeting burst load requirements and improving the quality of service of the secure container.
Specifically, for the current scheduling period k, the duration for which the virtual processor may use the physical processor beyond the limit in the current scheduling period k may be obtained from the unconsumed duration accumulated by the virtual processor over the historical scheduling periods. In the embodiment of the application, for convenience of description and distinction, the unconsumed duration accumulated by the virtual processor of the secure container over the historical scheduling periods is defined as a first duration, and the duration for which the virtual processor may use the physical processor beyond the limit in the current scheduling period k is defined as a second duration T_k2.
In some embodiments, the second duration T_k2 may be determined according to the expansion multiple and the limited use duration corresponding to the secure container, where the expansion multiple is equal to the target number divided by the first number. Specifically, the product of the expansion multiple and the limited use duration (Quota) can be calculated to obtain the total duration for which the virtual processor of the secure container may use the physical processor in the current scheduling period k. Further, the difference between this total duration and the limited use duration can be calculated to obtain the duration for which the virtual processor may use the physical processor beyond the limit in the current scheduling period k, i.e., the second duration T_k2. Because the number of virtual processor cores of the secure container is increased by the expansion multiple, the total duration for which the virtual processor of the secure container may use the physical processor in the current scheduling period k is also increased by the expansion multiple compared with the limited use duration (Quota), so that the expanded virtual processors have enough physical processor time to execute tasks, improving overall performance.
Further, the second duration T_k2 may be obtained from the unconsumed first duration accumulated by the virtual processor of the secure container over the historical scheduling periods. Then, the second duration T_k2 may be added to the limited use duration (e.g., cpu.quota) corresponding to the secure container to obtain the target duration, i.e., target duration = cpu.quota + second duration T_k2.
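As an illustrative sketch of the two calculations above (all names and millisecond values are hypothetical, not part of the embodiment):

```python
def second_duration(quota, target_number, first_number):
    """Overrun allowance T_k2 derived from the expansion multiple."""
    expansion_multiple = target_number / first_number
    total_usable = expansion_multiple * quota   # total usable time in period k
    return total_usable - quota                 # time usable beyond the quota

def target_duration(quota, t_k2):
    """Target duration = cpu.quota + second duration T_k2."""
    return quota + t_k2

# Quota of 100 ms per period, 4 cores expanded to 8 (expansion multiple 2):
t_k2 = second_duration(quota=100.0, target_number=8, first_number=4)
print(t_k2)                          # 100.0
print(target_duration(100.0, t_k2))  # 200.0
```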
Further, the target number of virtual processor cores may be jointly allocated time of the target duration, so that the target number of virtual processor cores use the physical processor to execute the tasks of the current scheduling period k within their respectively allocated time.
Specifically, taking as a constraint that the sum of the time allocated to the target number of virtual processor cores is equal to the target duration, the duration to be allocated to each of the target number of virtual processor cores may be determined according to the resource requirement of the task executed by each virtual processor core in the current scheduling period.
Alternatively, for each virtual processor core, the processor time required for that virtual processor core to execute its task may be collected during the current scheduling period k. The resource requirement may be expressed as a percentage of processor time, an absolute value of processor time, or in other units of measure. Further, the weights of the target number of virtual processor cores may be calculated based on the processor time required by the target number of virtual processor cores to execute their tasks. The weight of any virtual processor core may be equal to the processor time required by that virtual processor core to execute its task, as a percentage of the sum of the processor time required by the target number of virtual processor cores to execute their tasks. Further, according to the weights of the target number of virtual processor cores, the duration to be allocated to each of the target number of virtual processor cores can be determined. The duration to be allocated to any virtual processor core is equal to the weight of that virtual processor core multiplied by the target duration.
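The weight-based split above can be sketched as follows; this is an illustrative sketch with hypothetical names and millisecond values, not the claimed implementation:

```python
def allocate_by_demand(target_duration, demands):
    """Split the target duration across cores in proportion to the processor
    time each core's task required (i.e., each core's weight)."""
    total = sum(demands)
    return [target_duration * d / total for d in demands]

# A 200 ms target duration split across four cores whose tasks required
# 10, 30, 40 and 20 ms of processor time:
print(allocate_by_demand(200.0, [10.0, 30.0, 40.0, 20.0]))  # [20.0, 60.0, 80.0, 40.0]
```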
Further, time of the corresponding duration to be allocated can be allocated to each virtual processor core, so that the time of the target duration is allocated to the target number of virtual processor cores. In this way, each virtual processor core can execute the tasks of the current scheduling period k within its allocated time, realizing elastic processor bandwidth control of the secure container, meeting burst load requirements, and improving the quality of service of the secure container.
After the time of the target duration is allocated to the target number of virtual processor cores, the second duration can be subtracted from the stored first duration to serve as the new unconsumed duration accumulated by the virtual processor over the historical scheduling periods, so that the next scheduling period can schedule processor time based on the new duration, until the accumulated unconsumed duration is used up. After the accumulated unconsumed duration is used up, taking as a constraint that the total time allocated to the target number of virtual processor cores is equal to the limited use duration, time of the limited use duration is allocated to the target number of virtual processor cores. In this way, each virtual processor core may execute the tasks of the corresponding scheduling period within its allocated time.
In some embodiments, the unconsumed first duration of the virtual processor of the secure container obtained in the current scheduling period k may be less than the determined second duration T_k2. In this case, the first duration may be added to the limited use duration (cpu.quota) corresponding to the secure container to obtain a duration (defined as a fourth duration) shared by the target number of virtual processor cores in the current scheduling period. Further, the target number of virtual processor cores may be jointly allocated time of the fourth duration. In this way, the target number of virtual processor cores use the physical processor to execute the tasks of the current scheduling period k within their allocated time, realizing elastic processor bandwidth control of the secure container, meeting burst load requirements, and improving the quality of service of the secure container.
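The two cases, enough savings to spend the full second duration versus the fourth-duration fallback when savings run short, can be sketched together; the function name and millisecond values are hypothetical:

```python
def period_budget(quota, first_duration, second_duration):
    """Duration shared by the cores this period, plus the balance carried over.

    If the accumulated first duration cannot cover the second duration, only
    what is left is spent (the fourth-duration case); otherwise the second
    duration is spent and the remainder is carried to the next period.
    """
    if first_duration < second_duration:
        return quota + first_duration, 0.0
    return quota + second_duration, first_duration - second_duration

print(period_budget(100.0, 30.0, 8.0))  # (108.0, 22.0)  normal case
print(period_budget(100.0, 5.0, 8.0))   # (105.0, 0.0)   fourth-duration case
```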
For the specific embodiment of jointly allocating the time of the fourth duration to the target number of virtual processor cores, reference may be made to the foregoing related content of jointly allocating the time of the target duration to the target number of virtual processor cores, which is not repeated here.
Since the elastic processor bandwidth control technique allows unused processor time to be accumulated in each scheduling period for use during high processor load, the limit of the secure container's processor quota (cpu.quota) is broken through during high load, improving the secure container's quality of service. However, this may exacerbate contention for processor resources among containers on the same host, thereby affecting the quality of service of the neighbor containers of the secure container. The neighbor containers of the secure container refer to other containers deployed on the same host as the secure container, which share the physical processor of the host with the secure container. Therefore, the processor quota (cpu.quota) of the neighbor containers can be preferentially guaranteed, and on this basis the requirement of the secure container to break through its processor quota (cpu.quota) is further satisfied. Based on this, in some embodiments of the present application, as shown in fig. 3, a dynamic scheduling weight adjustment mechanism of the secure container is introduced.
In particular, the scheduling weight of each container deployed on the host (including the secure container and the other containers deployed on the host) may be preconfigured. The preconfigured scheduling weights of the containers may be the same or different. In some embodiments, the preconfigured scheduling weight of each container is the same and equal to 1/N, where N represents the total number of containers deployed on the host, including the secure container and the other containers deployed on the host. In other embodiments, the scheduling weight of each container may be configured according to the limited use duration of each container deployed on the host. Specifically, for any container a, the ratio of the limited use duration of container a to the sum of the limited use durations of all containers may be calculated as the scheduling weight of container a; or the limited use duration of container a may be used directly as the scheduling weight of container a. The scheduling weight of a container configured as described above is the initial scheduling weight of that container.
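The quota-proportional configuration above can be sketched as follows; the function name and the quota values are illustrative assumptions:

```python
def initial_weights(quotas):
    """Scheduling weight of each container as its quota's share of the total."""
    total = sum(quotas)
    return [q / total for q in quotas]

# Three containers with limited use durations of 100, 100 and 200 ms per period:
print(initial_weights([100.0, 100.0, 200.0]))  # [0.25, 0.25, 0.5]
```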
In the case where the virtual processor load of the secure container meets the set high load condition, the scheduling weight of the secure container can be reduced to obtain the target scheduling weight of the secure container. The target scheduling weight is smaller than the initial scheduling weight of the secure container, and smaller than the scheduling weights of the other containers deployed on the host.
In some embodiments, the target scheduling weight may be preset, with the target scheduling weight less than the scheduling weights of the other containers deployed on the host. In this way, the scheduling weight of the secure container can be set to the preset target scheduling weight in the case where the virtual processor load of the secure container meets the set high load condition. The target scheduling weight is less than the initial scheduling weight of the secure container.
Alternatively, the scheduling weight of the secure container can be reduced by a set gradient in the case where the virtual processor load of the secure container meets the set high load condition, to obtain the target scheduling weight of the secure container. That is, in the case where the virtual processor load of the secure container meets the set high load condition, the set gradient is subtracted from the current scheduling weight of the secure container to obtain the target scheduling weight of the secure container. For example, assuming the set gradient is 0.05, 0.05 is subtracted from the current scheduling weight of the secure container to obtain the target scheduling weight of the secure container.
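A minimal sketch of the gradient reduction above; the clamping floor is an assumption added here so the weight cannot go negative, and the values are illustrative:

```python
def lowered_weight(current_weight, gradient=0.05, floor=0.0):
    """Subtract the set gradient from the current weight, never below a floor."""
    return max(current_weight - gradient, floor)

print(lowered_weight(0.25))  # approximately 0.2
print(lowered_weight(0.03))  # 0.0 (clamped at the assumed floor)
```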
Further, the duration (defined as a third duration) for which the secure container uses the physical processor in the current scheduling period k may be determined according to the target scheduling weight of the secure container and the scheduling weights of the other containers deployed on the host. Specifically, the ratio P of the target scheduling weight to the sum of the scheduling weights of the containers deployed on the host may be calculated, and the product of the ratio P and the scheduling Period (Period) may then be calculated to obtain the third duration for which the secure container uses the physical processor in the current scheduling period k.
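As a sketch of the ratio-times-period calculation; the assumption that the sum in the denominator includes the secure container's own (already lowered) target weight, along with the function name and the weight and period values, is illustrative:

```python
def third_duration(target_weight, all_weights, period):
    """Share of the scheduling period granted by the container's weight."""
    p = target_weight / sum(all_weights)  # ratio P
    return p * period

# Secure container weight lowered to 1.0 with two neighbours at 2.0 each
# (illustrative weights), and a 100 ms scheduling period:
print(third_duration(1.0, [1.0, 2.0, 2.0], 100.0))  # 20.0
```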
Further, if the third duration is less than or equal to the limited use duration (e.g., cpu.quota) corresponding to the secure container, it may be determined that the total duration for which the secure container may use the physical processor in the current scheduling period k is the limited use duration, and time of the limited use duration may then be allocated to the target number of virtual processor cores. In this way, the target number of virtual processor cores of the secure container may execute the tasks of the current scheduling period k within the time of the limited use duration.
Correspondingly, if the third duration is greater than the limited use duration (e.g., cpu.quota) corresponding to the secure container, the second duration for which the secure container may use the physical processor beyond the limit in the current scheduling period k may be determined according to the difference between the third duration and the limited use duration (e.g., cpu.quota) corresponding to the secure container. Therefore, on one hand, the dynamic scheduling weight adjustment of the secure container can be taken into account, the processor quotas of other containers are preferentially guaranteed, the processor resource conflicts between the secure container and the other containers are reduced, and the interference of the elastic processor bandwidth control technique with the neighbor containers is further reduced. On the other hand, the elastic processor bandwidth control technique can continue to be used in the secure container, breaking through the limit of the limited use duration (i.e., the processor quota) corresponding to the secure container and improving the quality of service of the secure container.
In some embodiments, the difference between the third duration and the limited use duration (e.g., cpu.quota) corresponding to the secure container may be used as the second duration for which the secure container may use the physical processor beyond the limit in the current scheduling period k.
In other embodiments, the product of the expansion multiple of the virtual processor cores and the limited use duration corresponding to the secure container may be calculated as the total duration for which the secure container may use the physical processor in the current scheduling period k, where the expansion multiple is equal to the target number divided by the first number. Thereafter, the difference between the total duration for which the secure container may use the physical processor in the current scheduling period k and the limited use duration corresponding to the secure container may be calculated. For convenience of description and distinction, the difference between the total duration for which the secure container may use the physical processor in the current scheduling period k and the limited use duration (Quota) corresponding to the secure container is defined as a first difference, and the difference between the third duration and the limited use duration (e.g., cpu.quota) corresponding to the secure container is defined as a second difference. Further, the minimum of the first difference and the second difference may be selected as the second duration for which the secure container may use the physical processor beyond the limit in the current scheduling period k.
By selecting the smaller of the first difference (between the total duration for which the secure container may use the physical processor in the current scheduling period k and the limited use duration (Quota)) and the second difference (between the third duration and the limited use duration (Quota)), the duration determined according to the dynamic scheduling weight of the secure container and the duration determined according to the product of the expansion multiple and the limited use duration (Quota) can be balanced against each other. Taking the minimum of the two ensures that the duration for which the secure container uses the physical processor beyond the limit remains bounded, so that resources are not excessively occupied.
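The min-of-two-headrooms rule above can be sketched directly; the function name and millisecond values are illustrative assumptions:

```python
def overrun_allowance(quota, expansion_multiple, third_duration):
    """Second duration: the smaller of the expansion headroom and the
    weight-based headroom."""
    first_diff = expansion_multiple * quota - quota   # total usable minus quota
    second_diff = third_duration - quota              # weight share minus quota
    return min(first_diff, second_diff)

# Quota 100 ms, expansion multiple 2:
print(overrun_allowance(100.0, 2, 150.0))  # 50.0  (weight share is the bound)
print(overrun_allowance(100.0, 2, 350.0))  # 100.0 (expansion is the bound)
```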
After the second duration for which the secure container may use the physical processor beyond the limit in the current scheduling period k is determined, the second duration can be obtained from the first duration and added to the limited use duration corresponding to the secure container to obtain the target duration. Further, time of the target duration can be allocated to the target number of virtual processor cores, so that the target number of virtual processor cores use the physical processor to execute the tasks of the current scheduling period k within their allocated time, realizing elastic processor bandwidth control of the secure container, allowing the secure container to break through the limit of its processor quota, and improving the quality of service of the secure container under bursts of processor load.
In practice, the secure container may in some cases require acquisition of processor information for certain operations. For example, the secure container needs to acquire the number of virtual processor cores for resource initialization. When a process within a container starts, a kernel interface or system call interface may be read for resource initialization. The kernel interface may be the /proc/cpuinfo interface or the /proc/stat interface. The system call interface may be /sys/devices/system/cpu/online, /sys/devices/system/cpu/offline, /sys/devices/system/cpu/possible, or /sys/devices/system/cpu/present, etc.
/proc/cpuinfo is part of the operating system (e.g., a Linux system) and provides a mechanism by which the kernel exposes information about the processor. By reading the /proc/cpuinfo file, detailed information about the processors installed in the system can be obtained. Thus, an application within the container may read the /proc/cpuinfo kernel interface to obtain processor information, for example to initialize a thread pool based on the number of available virtual processor cores (e.g., CPU cores). /proc/stat contains various statistics of the system, including statistics of the processor (e.g., CPU).
/sys/devices/system/cpu/online is used to set or view which virtual processor cores (e.g., CPU cores) are online in the current system. /sys/devices/system/cpu/offline is used to set or view which virtual processor cores (e.g., CPU cores) in the current system are offline. /sys/devices/system/cpu/possible shows the range of virtual processor cores (e.g., CPU cores) that may be used in the system. /sys/devices/system/cpu/present shows the range of virtual processor cores (e.g., CPU cores) that are actually present in the system.
Because the number of virtual processor cores of the secure container is expanded in the embodiment of the present application, when providing processor information to the secure container, the (target number - first number) expanded virtual processor cores need to be hidden. This is mainly so that the number of virtual processor cores perceived by an application in the user state within the secure container is kept consistent with the processor quota (cpu.quota) of the secure container, thereby avoiding confusion and inconsistency in resource management. A user-state application initializes and allocates resources based on the number of visible virtual processor cores. If the number of virtual processor cores perceived by the user-state application does not agree with the processor quota (cpu.quota), resource management confusion may result. By hiding the extra expanded virtual processor cores, it can be ensured that the virtual processor cores visible from the container view are in a one-to-one relationship with the secure container's processor quota (cpu.quota).
Based on this, in some embodiments of the application, a resource view of a first number of virtual processor cores may be provided to a secure container in response to an access operation of the secure container to processor information. The access operation of the secure container to the processor information may be implemented as a read operation to the aforementioned kernel interface or system call interface.
As shown in fig. 3, for convenience of description and distinction, in fig. 3, vCPU represents a virtual processor core; a vCPU visible from the view of the secure container is denoted gvCPU, and an actual vCPU of the secure container is denoted hvCPU. The number of virtual processor cores (vCPU number) determined by the resource request amount and the resource limit amount of the container creation request in fig. 3 is 4, that is, the first number is equal to 4. The expansion multiple is 2, i.e., the number of virtual processor cores of the actually created secure container (the target number) is 8, namely hvCPU0-7 in fig. 3. Since the expanded virtual processor cores are used to handle bursts of processor load within the secure container, they may be referred to as burst-type virtual processor cores, such as the burst-type vCPU in fig. 3. Here, the "burst-type vCPU creation mechanism" in fig. 3 refers to the secure container creation process provided by the foregoing container creation method. In fig. 3, the number of virtual processor cores visible from the application view within the secure container is 4, i.e., gvCPU0-3 in fig. 3.
To implement the hiding mechanism for the expanded virtual processor cores, in some embodiments of the present application, as shown in fig. 3, the number of virtual processor cores actually within the secure container (i.e., the target number) is divided by the expansion multiple to obtain the number of virtual processor cores visible from the perspective of the secure container (i.e., the first number). For convenience of description, the expansion multiple is denoted as f, the number of virtual processor cores actually in the secure container (i.e., the target number) is denoted as s, and the number of virtual processor cores visible from the secure container perspective (i.e., the first number) is denoted as s'; then s' = s/f. In fig. 3, s' = 4, s = 8, and f = 2.
Specifically, as shown in fig. 4, the kernel of the operating system of the virtual machine (Guest OS kernel) may use a bitmap to mark whether a virtual processor core is online. Each bit in the bitmap corresponds to a virtual processor core, with 1 representing that the corresponding virtual processor core is online and 0 representing that the corresponding virtual processor core is offline; alternatively, 0 may represent online and 1 offline, and so on. Hiding the expanded virtual processor cores, as shown in fig. 4, may be implemented by retaining bits [0, s') in the bitmap and clearing bits [s', s), thereby making the number of virtual processor cores visible from the secure container view equal to the first number.
Based on this, a first bitmap corresponding to the target number of virtual processor cores may be determined in response to an access operation of the secure container on the processor information. The first bitmap is shown in fig. 4 (a). In fig. 4, a bit value of 1 indicates that the bit corresponds to a virtual processor core, and a bit value of 0 indicates that it does not. Further, the first bitmap may be modified into a second bitmap according to the expansion multiple, where the number of target bits in the second bitmap is equal to the first number, the target bits being the bits representing virtual processor cores. The second bitmap is shown in fig. 4 (b), where a bit of 1 in fig. 4 (b) represents a target bit. Further, the second bitmap may be provided to the secure container, so that the secure container is provided with a resource view of the first number of virtual processor cores. The secure container perceives the first number of virtual processor cores and can perform resource management accordingly, so that the extra expanded virtual processor cores are hidden, the one-to-one relationship between the virtual processor cores visible from the container view and the processor quota (cpu.quota) of the secure container is ensured, and confusion and inconsistency in resource management are avoided.
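The bitmap trimming described above (retain bits [0, s'), clear bits [s', s)) can be sketched as follows; representing the bitmap as a Python list and the function name are illustrative choices:

```python
def hide_expanded_cores(first_bitmap, f):
    """Produce the second bitmap: keep bits [0, s') and clear bits [s', s),
    where s = len(bitmap) is the target number and s' = s / f."""
    s = len(first_bitmap)
    s_prime = s // f
    return [bit if i < s_prime else 0 for i, bit in enumerate(first_bitmap)]

# 8 actual vCPUs (target number), expansion multiple f = 2: the container
# sees only the first 4 cores online.
print(hide_expanded_cores([1, 1, 1, 1, 1, 1, 1, 1], 2))  # [1, 1, 1, 1, 0, 0, 0, 0]
```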
In order to ensure isolation between different applications, different applications can be bound to different virtual processor cores, thereby avoiding resource contention. When an application binds to a virtual processor core visible from the secure container perspective (i.e., a gvCPU), the application actually runs on f (i.e., the expansion multiple) hvCPUs, thereby fully utilizing processor resources.
The idea of the burst-type vCPU mapping mechanism is to map f hvCPUs to one gvCPU. When an application or service is bound to one gvCPU, the application or service actually runs on f hvCPUs, thereby fully utilizing CPU resources, and this process is transparent to the application or service. The mapping relationship is calculated according to the following formula:
gvCPUi = hvCPUi + hvCPU(i+s') + ... + hvCPU(i+(f-1)·s'), where i ∈ [0, s')    (5).
In equation (5), i represents the i-th gvCPU, i.e., the i-th virtual processor core visible from the container view. Assuming that the total number of hvCPUs (i.e., the target number) s is 8 and the expansion multiple is 2, then gvCPU0 = hvCPU0 + hvCPU4.
In particular, a bitmap may be used to maintain the core-binding information of an application. A bit of 1 indicates that the application can run on the virtual processor core corresponding to that bit, and a bit of 0 indicates that it cannot. When the application sets the core-binding information, the burst-type vCPU mapping mechanism can scatter and bind the application to f (i.e., the expansion multiple) hvCPUs according to the relation in formula (5); when the application reads the core-binding information, the burst-type vCPU mapping mechanism can aggregate the f hvCPUs into one gvCPU according to the relation in formula (5).
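The scatter and gather directions of the mapping described above can be sketched as follows; representing affinity masks as Python lists and the function names are illustrative choices, not the claimed implementation:

```python
def scatter(gv_mask, f):
    """Expand a gvCPU affinity mask to the hvCPU mask: gvCPU i maps to
    hvCPUs i, i + s', ..., i + (f-1)*s', where s' = len(gv_mask)."""
    s_prime = len(gv_mask)
    return [gv_mask[i % s_prime] for i in range(s_prime * f)]

def gather(hv_mask, f):
    """Aggregate an hvCPU mask back to the gvCPU view (OR of the f copies)."""
    s_prime = len(hv_mask) // f
    return [int(any(hv_mask[i + j * s_prime] for j in range(f)))
            for i in range(s_prime)]

# Binding to gvCPU0 with s' = 4, f = 2 lands on hvCPU0 and hvCPU4:
hv = scatter([1, 0, 0, 0], 2)
print(hv)             # [1, 0, 0, 0, 1, 0, 0, 0]
print(gather(hv, 2))  # [1, 0, 0, 0]
```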
Based on the burst-type vCPU mapping mechanism, the target virtual processor core to be bound and the process to be bound in a processor binding operation can be obtained in response to the processor binding operation of the secure container. The secure container can read a kernel interface or a system call interface to perform the processor binding operation. The kernel interface may be /sys/fs/cgroup/cpuset/cpuset.cpus or /sys/fs/cgroup/cpuset/cpuset.effective_cpus, etc. The system call interface may be sched_setaffinity or sched_getaffinity, etc.
/sys/fs/cgroup/cpuset/cpuset.cpus is used to designate the set of processor cores (e.g., CPU cores) that may be used by processes in a resource control group (Cgroup). /sys/fs/cgroup/cpuset/cpuset.effective_cpus is used to show the set of processor cores (e.g., CPU cores) that may actually be used by processes in the current resource control group (Cgroup).
sched_setaffinity is a system call used to set the processor (e.g., CPU) affinity of a process. This means that it can specify on which processor cores (e.g., CPU cores) a process may run. sched_getaffinity is a system call for retrieving the current processor (e.g., CPU) affinity settings of a process.
Further, according to the expansion multiple f, the f (i.e., expansion multiple) bits in the first bitmap corresponding to the target bit of the target virtual processor core may be determined according to the foregoing formula (5). As shown in fig. 5, the f bits corresponding to the target bit of gvCPU0 are the bits corresponding to hvCPU0 and hvCPU4 in the first bitmap. Further, the process to be bound can be bound to the determined f virtual processor cores, so that the process to be bound can run on f (i.e., the expansion multiple) virtual processor cores, fully utilizing processor resources and improving the utilization rate of processor resources.
The foregoing embodiments provide a hiding and mapping mechanism for the expanded virtual processor cores. The mechanism hides the expanded virtual processor cores that support the elastic processor bandwidth control technique, and reasonably maps the virtual processor cores visible from the container view to the expanded virtual processor cores, so that the expansion is transparent to applications in the secure container and applications in the secure container incur no modification cost.
It should be noted that the secure container provided by the embodiment of the present application may run various services, such as a database service, an online shopping service, a video service, or a cloud computing service, but is not limited thereto. The inventor applies the secure container provided by the embodiment of the application to the database, creates the secure container corresponding to the database by using the container creation method provided by the embodiment of the application, performs resource scheduling on the secure container corresponding to the database by using the processor time scheduling method provided by the embodiment of the application, and performs performance evaluation on the database by using the transaction processing performance committee benchmark test H (Transaction Processing Performance Council Benchmark H, TPC-H). Among them, TPC-H is a widely used benchmark test for evaluating the performance of database systems. TPC-H benchmarking encompasses a variety of complex query patterns, intended to simulate data analysis and reporting tasks in the real world.
The main indexes of TPC-H include: (1) the Duration index, which refers to the total time required to complete all queries in the TPC-H benchmark; the lower the duration, the faster the system completes all queries and the higher the performance; and (2) the Throughput index, which refers to the workload the system can complete per unit time, usually the number of queries completed per unit time; the higher this index, the more queries the system can process per unit time and the higher the performance.
The test result shows that under the condition that the configuration of the database is unchanged, the safety container corresponding to the database is created by using the container creation method provided by the embodiment of the application, and the resource scheduling is performed on the safety container corresponding to the database by using the processor time scheduling method provided by the embodiment of the application, so that the Duration (Duration) index of TPC-H of the database is reduced by 32%, and the Throughput (Throughput) index is improved by 45%.
It should be noted that the steps of the method provided in the above embodiments may all be executed by the same device, or the method may be executed by different devices. For example, the execution subject of both steps 201 and 202 may be device A; alternatively, the execution subject of step 201 may be device A and the execution subject of step 202 may be device B; and so on.
In addition, some of the flows described in the above embodiments and drawings include a plurality of operations that appear in a specific order, but it should be clearly understood that these operations may be performed out of the order in which they appear herein or performed in parallel. Sequence numbers of the operations, such as 201 and 202, are merely used to distinguish the various operations and do not by themselves represent any order of execution. In addition, the flows may include more or fewer operations, and the operations may be performed sequentially or in parallel.
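The point that numbered operations need not run sequentially can be sketched as follows. The operation names are hypothetical stand-ins for steps such as 201 and 202; when neither operation depends on the other's output, they may be dispatched in parallel and completed in either order.

```python
# Illustration only: two independent operations (hypothetical stand-ins
# for numbered steps such as 201 and 202) executed in parallel; the
# numbering implies no order of execution.
from concurrent.futures import ThreadPoolExecutor

def operation_201():
    return "result-201"

def operation_202():
    return "result-202"

with ThreadPoolExecutor(max_workers=2) as pool:
    f1 = pool.submit(operation_201)
    f2 = pool.submit(operation_202)
    # Collect results as a set: completion order does not matter.
    results = {f1.result(), f2.result()}

print(sorted(results))  # ['result-201', 'result-202']
```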
Accordingly, embodiments of the present application also provide a computer-readable storage medium storing computer instructions that, when executed by one or more processors, cause the one or more processors to perform the steps in the container creation method and/or the processor time scheduling method described above.
Embodiments of the present application also provide a computer program product comprising a computer program which, when executed by one or more processors, causes the one or more processors to perform the steps of the container creation method and/or the processor time scheduling method described above. The specific implementation form of the computer program product is not limited in the embodiments of the present application. In some embodiments, the computer program product may be implemented as an Application (APP), an applet, a computer-side client, a program module, a plug-in, an installation package, a Software Development Kit (SDK), an image file of an optical disc (e.g., an ISO file), Software as a Service (SaaS), etc., but is not limited thereto.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in Fig. 6, the electronic device includes a memory 60a and a processor 60b, wherein the memory 60a is used for storing a computer program.
The processor 60b is coupled to the memory 60a for executing a computer program for performing the steps of the container creation method and/or the processor time scheduling method provided by the foregoing embodiments. For the specific implementation of each step, reference may be made to the related description of the foregoing embodiments, which is not repeated herein.
In some alternative embodiments, as shown in Fig. 6, the electronic device may further include optional components such as a communication component 60c, a power supply component 60d, a display component 60e, and an audio component 60f. Only some of the components are schematically shown in Fig. 6, which means neither that the electronic device must contain all the components shown in Fig. 6, nor that the electronic device can only contain the components shown in Fig. 6.
In addition, the components within the dashed box in Fig. 6 are optional rather than mandatory components, depending on the product form of the electronic device. The electronic device of this embodiment may be implemented as a terminal device such as a desktop computer, a notebook computer, a mobile phone, or an Internet of Things device, or may be implemented as a server device such as a traditional server, a cloud server, or a server cluster.
In an embodiment of the present application, the memory is used to store a computer program and may be configured to store various other data to support operations on the device on which it resides. The processor may execute the computer program stored in the memory to implement the corresponding control logic. The memory may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as Static Random-Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disc.
In an embodiment of the present application, the processor may be any hardware processing device that can execute the above-described method logic. Alternatively, the processor may be a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a Microcontroller Unit (MCU), a Field-Programmable Gate Array (FPGA), a Programmable Array Logic (PAL) device, a Generic Array Logic (GAL) device, a Complex Programmable Logic Device (CPLD), an Advanced RISC Machines (ARM) processor, or a System on Chip (SoC), etc., but is not limited thereto.
In an embodiment of the application, the communication component is configured to facilitate wired or wireless communication between the device in which it is located and other devices. The device in which the communication component is located may access a wireless network based on a communication standard, such as Wireless Fidelity (Wi-Fi), 2G, 3G, 4G, 5G, or a combination thereof. In one exemplary embodiment, the communication component receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component may also be implemented based on Near Field Communication (NFC) technology, Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra-Wideband (UWB) technology, Bluetooth (BT) technology, or other technologies.
In an embodiment of the present application, the display component may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the display component includes a touch panel, the display component may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. A touch sensor may sense not only the boundary of a touch or swipe action, but also the duration and pressure associated with the touch or swipe operation.
In an embodiment of the application, the power supply component is configured to provide power to the various components of the device in which it is located. The power supply component may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the device in which it is located.
In embodiments of the application, the audio component may be configured to output and/or input audio signals. For example, the audio component includes a Microphone (MIC) configured to receive external audio signals when the device in which the audio component is located is in an operational mode, such as a call mode, a recording mode, or a voice recognition mode. The received audio signal may be further stored in the memory or transmitted via the communication component. In some embodiments, the audio component further comprises a speaker for outputting audio signals. For example, for a device with voice interaction functionality, voice interaction with the user may be accomplished through the audio component.
It should be noted that the descriptions of "first" and "second" herein are used to distinguish different messages, devices, modules, etc., and neither represent an order nor require that the "first" and "second" objects be of different types.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, magnetic disk storage, CD-ROM (Compact Disc Read-Only Memory), optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs, etc.), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory, Random-Access Memory (RAM), and/or non-volatile memory in a computer-readable medium, such as Read-Only Memory (ROM) or flash RAM. Memory is an example of a computer-readable medium.
The storage medium of the computer is a readable storage medium, which may also be referred to as a readable medium. Readable storage media, including both permanent and non-permanent, removable and non-removable media, may be implemented by any method or technology for information storage. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, Phase-Change Memory (PRAM), Static Random-Access Memory (SRAM), Dynamic Random-Access Memory (DRAM), other types of Random-Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Disc (DVD) or other optical storage, magnetic cassette, magnetic disk storage or other magnetic storage device, or any other non-transmission medium that can be used to store information accessible by the computing device. Computer-readable media, as defined herein, do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the application are to be included in the scope of the claims of the present application.