Realizing Digital Beauty

                                                   “Make-up can only make you look pretty on the outside but it doesn’t help if you’re ugly on the inside. Unless you eat the make-up.”

Audrey Hepburn

Makeup Science

People use makeup to enhance their personal image, and the desired personal image differs from person to person. Even for the same person, the desired image can change with the circumstances (e.g., the situation, mood, age, etc.). By analyzing this psychology and reality, together with the dynamic meanings of personal images, the inventors established a system and designed its algorithms. As a result, this invention can provide optimal real-time makeup support through mutual mappings between personal image classifications and personal image rules.

This invention improves on traditional methods, which can only show the image of a standard model and do not reflect the user's actual facial features. The makeup guide content of this invention reflects the individual's facial features, so highly accurate makeup support becomes possible. Moreover, in one embodiment, the makeup guide content can vary with the user's present situation and mood at the moment the makeup supporting application is executed.

Figure 1 depicts an example of the conceptual system configuration. A makeup supporting application (100) is installed on a user's device (10), and the makeup supporting method is implemented as the device (10) communicates with a service server (20) while executing the makeup supporting application (100).

Figure 1: System Configuration

The service server (20) comprises hardware/software components that configure the services (e.g., service management, data processing and communications, database construction, security and authentication, face recognition solutions, etc.) across multiple servers. The service server (20) constructs and manages multiple databases. A database (25) records and manages user information. A database (23) can be a content database that stores content, and a database (21) can be a standard personal image database. Databases for miscellaneous logs, billing, product information, etc. can also be configured. The service server (20) can contain a matcher (200). The matcher (200) holds a personal image classification table and extracts standard personal image data from the standard personal image database (21) according to previously stored personal image classifications and personal image rules. It then generates makeup guide content by matching this against the user's personal image data sent from the user's device (10). The service server (20) sends the generated makeup guide content to the user's device (10) over a communication network.
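As a rough illustration of this server-side flow, the following Python sketch models the path from an incoming request to generated makeup guide content. All class and method names here (ServiceServer, Matcher, handle_request, etc.) are hypothetical scaffolding for illustration, not identifiers from the invention.

```python
# Hypothetical sketch of the server-side request path described above; none
# of these class or method names come from the invention itself.
from dataclasses import dataclass, field


@dataclass
class Matcher:
    """Holds the personal image classification table (cf. matcher 200)."""
    classification_table: dict  # classification -> standard personal image data

    def generate_guide(self, classification: str, user_image_data: bytes) -> dict:
        standard = self.classification_table[classification]  # from database (21)
        # A real matcher would superpose and map images here (see Figure 4).
        return {"classification": classification, "standard": standard,
                "user_data_bytes": len(user_image_data)}


@dataclass
class ServiceServer:
    matcher: Matcher
    user_db: dict = field(default_factory=dict)     # cf. user database (25)
    content_db: dict = field(default_factory=dict)  # cf. content database (23)

    def handle_request(self, user_id: str, classification: str,
                       user_image_data: bytes) -> dict:
        guide = self.matcher.generate_guide(classification, user_image_data)
        self.content_db[user_id] = guide            # record generated content
        return guide                                # returned to the device (10)


server = ServiceServer(Matcher({"Young impression": "Standard personal image data 1"}))
print(server.handle_request("u1", "Young impression", b"<face image>"))
```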

A user device (10) can be set up by downloading and installing the software application that the service server (20) provides; when the installed application is executed, the device's processor computes the previously determined functions and implements them. Mobile devices such as smartphones and tablets, personal computers (PCs), laptop PCs, etc. are well suited as a user device (10). Although Figure 1 illustrates a portable device such as a smartphone, this is only for convenience of explanation; the device is not limited to portable devices. In another embodiment, a device based on the Internet of Things (IoT) is acceptable: a vanity table with a camera and a display panel could function as an optimal device. Furthermore, devices with configurations that do not yet exist may also be included; for example, a makeup vending machine equipped with a camera and a display panel could serve as such a device.

The makeup supporting application (100), which resides in the memory of the device (10) and contains software modules and resources, has a user interface (101) that visualizes the makeup supporting method on a screen by executing those modules. In one embodiment, a standard personal image database (103) can also be constructed on the user's device. When the makeup supporting application (100) is executed, the camera module (13) of the device (10) can be enabled by the user's selection or by the application's settings.

We now explain the major functions. Desirably, the makeup supporting method establishes the personal image classification table at the server in advance and presents the personal image classifications through the application screens.

A personal image classification is defined in advance as at least one classification corresponding to a makeup purpose. Personal image classifications can be defined in advance like those in Table 1, and a personal image rule corresponding to each personal image classification can also be defined. A personal image rule contains makeup pattern information for each facial area. For example, featured elements of areas such as the eyes, eyebrows, lips, and cheeks can be registered as pattern information. Each featured element registered as pattern information has an area classification and is assigned data such as a vector, a color, a depth of color, and the types of beauty products used for the shapes formed on the face.
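To make the structure of a personal image rule concrete, here is a minimal Python sketch of the (area classification, pattern information) data just described. The field names and example values are assumptions chosen for illustration, not definitions from the invention.

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class PatternInfo:
    """One featured element registered as pattern information (fields assumed)."""
    area: str                           # area classification, e.g. "eyebrow"
    vectors: List[Tuple[float, float]]  # shape formed on the face
    color: str                          # e.g. "#6b4f3a"
    color_depth: float                  # 0.0 (sheer) .. 1.0 (full coverage)
    product_type: str                   # e.g. "pencil", "powder"


@dataclass
class PersonalImageRule:
    """Makeup pattern information per facial area for one classification."""
    classification: str                 # e.g. "Young impression"
    patterns: List[PatternInfo]


# Example: one eyebrow pattern belonging to a "Young impression" rule.
young = PersonalImageRule(
    classification="Young impression",
    patterns=[PatternInfo(area="eyebrow",
                          vectors=[(0.32, 0.41), (0.38, 0.39), (0.45, 0.40)],
                          color="#6b4f3a", color_depth=0.6,
                          product_type="pencil")])
```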

Table 1 shows a conceptual example of a personal image classification table as another desirable embodiment.

Table 1: Conceptual example of a personal image classification table

| Personal image classification | Area classification | Pattern information | Standard personal image data |
| --- | --- | --- | --- |
| Young impression | Area 1 | Pattern 1 | Standard personal image data 1 |
| | Area 2 | Pattern 2 | |
| | Area 3 | Pattern 3 | |
| Sexy impression | Area 1 | Pattern 11 | Standard personal image data 2 |
| | Area 2 | Pattern 22 | |
| | Area 3 | Pattern 33 | |
| Intellectual impression | Area 1 | Pattern 111 | Standard personal image data 3 |
| | Area 2 | Pattern 222 | |
| | Area 3 | Pattern 333 | |
| Cool impression | Area 1 | Pattern 1111 | Standard personal image data 4 |
| | Area 2 | Pattern 2222 | |
| | Area 3 | Pattern 3333 | |
| Warm impression | Area 1 | Pattern 11111 | Standard personal image data 5 |
| | Area 2 | Pattern 22222 | |
| | Area 3 | Pattern 33333 | |
| Natural impression | Area 1 | Pattern 1A | Standard personal image data 6 |
| | Area 2 | Pattern 2B | |
| | Area 3 | Pattern 3C | |
| Charming impression | Area 1 | Pattern 11A | Standard personal image data 7 |
| | Area 2 | Pattern 22B | |
| | Area 3 | Pattern 33C | |
| Physiognomic impression | Area 1 | Pattern 1AA | Standard personal image data 8 |
| | Area 2 | Pattern 2AB | |
| | Area 3 | Pattern 3AC | |

(In each row group, the area classification, pattern information, and standard personal image data together form the personal image rule for that classification.)
As illustrated in Table 1, the pattern information corresponding to an area classification can be the same or different across personal image classification items (for convenience, Table 1 presents all pattern information as different). Depending on the personal image classification item, the width of the eyebrow, the change in eyebrow width, the orientation of the end of the eyebrow, the color of the eyebrow, the thickness of the eyebrow, etc. can be defined differently.

Standard personal image data can be generated by superposing and mapping pairs of elements drawn from a set of areas and a set of patterns. In one embodiment, the standard personal image data can be registered as text. In another desirable embodiment, the standard personal image data can be generated from an image; in that case, the standard personal image data is a pattern image. The standard personal image data is normally constructed as a database on the service server side, but the database can also be constructed on the user device.
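A minimal sketch of this pairing, assuming text-form standard personal image data; the area names and pattern labels below are placeholders invented for the example.

```python
# Illustrative sketch: standard personal image data as the pairing of an
# area set with a pattern set, per the superposition/mapping described above.
from typing import Dict

areas = ["eye", "eyebrow", "lip", "cheek"]             # set of areas
patterns = {"eye": "Pattern 1", "eyebrow": "Pattern 2",
            "lip": "Pattern 3", "cheek": "Pattern 4"}  # set of patterns


def build_standard_personal_image_data(classification: str) -> Dict[str, str]:
    """Map each area to its registered pattern for one classification."""
    return {area: f"{classification}/{patterns[area]}" for area in areas}


print(build_standard_personal_image_data("Young impression"))
# {'eye': 'Young impression/Pattern 1', 'eyebrow': 'Young impression/Pattern 2', ...}
```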

This personal image classification is only an illustration; those skilled in the art can define personal image classifications differently. For the physiognomic impression in Table 1, for example, more detailed items such as wealth, popularity, marriage, and fortune can be added according to classifications accumulated by physiognomy.

The personal image classification above consists of classifications based on preferred impressions. It can instead be reconfigured as classifications based on situations: for instance, a party, a date, an interview, a professional career, travel, or one's mood, as well as recommendations according to the weather, coordination with one's outfit of the day, etc. In another embodiment, the personal image classification can follow the styles of celebrities one wants to imitate, with a previously registered celebrity model serving as the personal image classification field. Across these various embodiments of personal image classification, the personal image rules can likewise be defined as (area classification, pattern information, standard personal image data) per personal image classification.

The personal image classification table explained in the illustrations above is defined in advance. The makeup supporting application provides a user interface in which the user can choose personal image classification items.

Figure 2 shows an example of the screen configuration for the personal image classification selection interface of the makeup supporting application.

Figure 2: Personal image classification selection interface of a user’s device.

The personal image classification screen (110) includes personal image classification items (111, 113, 115, 117). Although Figure 2 shows only numbers for brevity, text naming each personal image classification or icon images representing its features can be used. Comment fields (111a, 113a, 115a, 117a) explaining the corresponding personal image classification for each item can also be included, although these fields are not strictly required.

A user can press the start button (110a) after choosing a personal image classification item with an input means. Then, according to the selection event, makeup guide content can be generated by calling the personal image rules previously stored for the corresponding personal image classification.

In other embodiments, the personal image rules can be called by various methods: a method where a certain personal image classification item is set as a default in the application installed on the device (the user can modify this in the settings), a method where the server determines certain personal image classification items, and a method that recommends certain personal image classification items by obtaining and analyzing the user's facial image when the device's camera module is activated on execution of the makeup supporting application. A sketch of this selection logic follows.
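The sketch below orders the three methods just listed by priority; the function names and the stubbed recommend_classification() analyzer are assumptions for illustration.

```python
# Hedged sketch of the three rule-calling methods listed above; the priority
# ordering and all names here are assumptions, not from the invention.
def select_classification(user_choice=None, server_choice=None, face_image=None,
                          default="Natural impression"):
    """Pick a personal image classification: explicit user choice first,
    then server-side determination, then camera-based recommendation,
    then the application default (user-editable in the settings)."""
    if user_choice:                      # chosen on the selection screen (110)
        return user_choice
    if server_choice:                    # determined by the service server
        return server_choice
    if face_image is not None:           # recommended from the captured face
        return recommend_classification(face_image)
    return default


def recommend_classification(face_image):
    # Placeholder: a real implementation would analyze facial features here.
    return "Charming impression"


print(select_classification(face_image=b"<face image>"))  # -> recommendation
```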

Figure 3 illustrates the method for generating makeup guide content. The basic concept is to generate makeup guide content by matching the user's personal image data (199) against the standard personal image data (21) in real time via the matcher (200). The user's personal image data (199) can be obtained from the device's camera module; the standard personal image data (21) is registered in advance.

Figure 3: Generating a makeup guide content

If the standard personal image data (21) is an image file, then matching the user's face with the face in the standard personal image data requires special operation rules. A human face consists of invariant elements and variant ones. The invariant elements concern the physical shape of part or all of the face, such as its contour, size, and position, which are uniquely determined for an individual; changing the invariant elements is the business of plastic surgery, not makeup. The parts that can be varied by makeup are defined as the variant elements, and a personal image rule concerns changes to the variant elements. From this standpoint, the standard personal image data (21) can be treated as a set of {invariant elements, variant elements}, which in the desired embodiment becomes a set of {face shapes, patterns per area}.

For instance, the matcher (200) can generate makeup guide content (201) by applying the user's face to the invariant elements and the image of the standard personal image data to the variant elements. Figure 4 explains this relation conceptually.

In Figure 4, ‘B’ is the user's actual face image and ‘X’ is a pattern image previously registered for a personal image classification. The matcher (200) superposes the two images and maps them: desirably, each area (X1, X2, X3, X4) is superposed on and mapped to (B1, B2, B3, B4), for instance. The invariant elements of the face are extracted from the featured elements of (B1, B2, B3, B4) and reflected directly, while the variant elements revise the user's facial image using the featured elements extracted from (X1, X2, X3, X4). The image generated this way becomes a makeup guide image, which serves as the makeup guide content.

Figure 4: Superposing and mapping a pattern image (X) onto a user’s face (B)
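The following Python sketch expresses this invariant/variant merge for a single area pair (e.g., B1 with X1). Feature extraction is stubbed out, and every feature name is an assumption chosen for illustration; a real matcher would operate on detected facial landmarks and pixel data.

```python
# Conceptual sketch of Figure 4: keep the user's invariant elements (contour,
# size, position) and take variant elements (makeup) from the pattern image X.
INVARIANT = {"contour", "size", "position"}


def superpose_and_map(user_area_features: dict, pattern_area_features: dict) -> dict:
    """Merge features for one area pair, e.g. B1 with X1."""
    guide = {}
    for name, value in user_area_features.items():
        if name in INVARIANT:
            guide[name] = value          # reflected directly from B (invariant)
    for name, value in pattern_area_features.items():
        if name not in INVARIANT:
            guide[name] = value          # revised from X (variant)
    return guide


# B1: the user's eye area; X1: the registered pattern for the same area.
b1 = {"contour": [(0, 0), (4, 1)], "size": 1.0, "position": (120, 80),
      "color": "none"}
x1 = {"color": "#aa6688", "color_depth": 0.5, "product_type": "shadow"}
print(superpose_and_map(b1, x1))
# contour/size/position kept from B1; color, depth, product taken from X1
```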

In this way, our invention improves on the traditional method, which does not reflect the user's personal facial features but only provides a standard model's image. In the desirable embodiment, the makeup guide content reflects the person's facial features, so the most precise makeup support becomes possible. Moreover, since the user's actual face image can differ with the time and place at which the makeup supporting application is executed, optimal makeup support is possible at that instant.

Meanwhile, when applying personal image rules to generate a makeup guide image as the makeup guide content, rules of the following form can be applied.

A face can be defined as the set of areas into which it is partitioned, represented as follows:

face = {area_1, area_2, …, area_n}

Various personal image rules can exist, applied according to the personal image classification item and facial area. The rule set consisting of the individual rule elements can be defined as follows:

rule = {rule_1, rule_2, …, rule_m}

As shown in Table 1, multiple pieces of pattern information, stored and managed in the databases, are registered per personal image classification and facial area. A specific personal image rule applies to a specific area: if the rule's condition is satisfied, the rule is triggered and applied to the matching area, and the corresponding area of the makeup guide image is modified. The rule-triggering operator ‘⇠’ can be used as follows:

face_pattern_i = {area_1 ⇠ rule_1, …, area_n ⇠ rule_3}
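One way to read the ‘⇠’ operator in code: each rule carries a condition, and when the condition holds for an area, the rule fires and modifies that area of the face pattern. This is a hedged sketch with invented rule contents.

```python
# Minimal sketch of the '⇠' rule-triggering operator; all names invented.
def trigger(face: dict, rules: list) -> dict:
    """face: area name -> area data; rules: (condition, area, apply) triples."""
    face_pattern = dict(face)
    for condition, area, apply_rule in rules:
        if area in face_pattern and condition(face_pattern[area]):
            face_pattern[area] = apply_rule(face_pattern[area])  # area ⇠ rule
    return face_pattern


rules = [
    (lambda a: a.get("width", 0) < 0.3,                     # rule condition
     "eyebrow",
     lambda a: {**a, "width": 0.35, "color": "#5a4632"}),   # rule_1 effect
]
face = {"eyebrow": {"width": 0.25}, "lip": {"color": "none"}}
print(trigger(face, rules))  # eyebrow area modified; lip area untouched
```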

An image can then be generated as standard personal image data for each face pattern; represented as a pattern image, it can be written as follows:

pattern_image_i = face_pattern_i

pattern_image_j = face_pattern_j

Consider the case where makeup support continues while the user applies makeup step by step. Multiple pattern images are then generated, and the makeup guide image for each pattern image can differ. Therefore, pattern images accumulate in the sequence in which makeup is applied, and this accumulation can be represented with the operator ‘⇐’. If foundation is applied first and then lipstick, there is a pattern_image_i where foundation is applied and a pattern_image_j where lipstick is used. The generation sequence of the makeup guide image first creates guide_image_i by applying pattern_image_i, prior to pattern_image_j; pattern_image_j is then applied sequentially to generate guide_image_j. The makeup guide image can thus be regarded as an image built by sequentially applying each guide image as follows:

guide_image_i ⇐ pattern_image_i, guide_image_j ⇐ pattern_image_j
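The ‘⇐’ accumulation can be sketched as a fold over pattern images in makeup order. The foundation-then-lipstick example mirrors the text above, with dict overlays standing in for actual image composition.

```python
# Sketch of the '⇐' accumulation: pattern images applied in makeup order,
# each step yielding the next guide image. Dict overlays stand in for images.
from functools import reduce


def accumulate(guide_image: dict, pattern_image: dict) -> dict:
    """guide_image ⇐ pattern_image: overlay one step onto the running guide."""
    return {**guide_image, **pattern_image}


user_face = {"base": "bare skin"}
pattern_image_i = {"foundation": "applied"}   # from face_pattern_i
pattern_image_j = {"lipstick": "coral"}       # from face_pattern_j

# guide_image_i ⇐ pattern_image_i, then guide_image_j ⇐ pattern_image_j
guide_image_j = reduce(accumulate, [pattern_image_i, pattern_image_j], user_face)
print(guide_image_j)
# {'base': 'bare skin', 'foundation': 'applied', 'lipstick': 'coral'}
```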

Let us review Figure 3 again. The personal image classification (230), the personal image classification table (220), and the personal image rule (210) of Figure 3 were explained with Table 1. If, for instance, the personal image classification (230) is determined by the user's selection event, the makeup supporting application determines the personal image rule (210) by accessing the personal image classification table (220). Then, by extracting the standard personal image data (21) and applying it to the personal image data (199) from the device, the makeup guide content (201) can be generated.

In this way, makeup guide content can be generated by matching the standard personal image data already stored as personal image rules to the user's personal image data and applying it. In one embodiment, the makeup guide content can be an image file: featured elements are extracted from the user's personal image data collected in real time, at least one featured element is revised according to the previously defined personal image rules, and the revised facial image is displayed on the screen of the makeup supporting application. This revised facial image is a makeup guide image, which is the makeup guide content in this embodiment.

In another embodiment, the makeup guide content can be presented as text or voice. Featured elements are extracted from the user's personal image data collected in real time, and makeup guide content in the form of text or voice, corresponding to at least one featured element under the previously defined personal image rules, is output by the device: text through the application screen, and voice through a speaker.
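A hedged sketch of rendering the same rule output as text guidance follows; a voice version could feed these strings to any text-to-speech engine. The revision fields and the example values are assumptions for illustration.

```python
# Illustrative sketch: rendering rule-driven revisions as text guidance.
def guide_as_text(revisions: dict) -> list:
    """revisions: area -> revised featured elements, per personal image rules."""
    lines = []
    for area, change in revisions.items():
        lines.append(f"{area}: apply {change['product_type']} "
                     f"in {change['color']} (depth {change['color_depth']:.0%})")
    return lines


revisions = {"lip": {"product_type": "lipstick", "color": "coral",
                     "color_depth": 0.7}}
for line in guide_as_text(revisions):
    print(line)   # shown on the application screen, or spoken via a speaker
```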

Figure 5 depicts screen scenarios from other embodiments. Suppose the makeup supporting application has been executed. Figure 5(a) shows a scenario in which personal image rules are applied to the user's face as it is collected from the camera in real time; the makeup guide content is simply displayed on the application screen.

To run a scenario like Figure 5(a), the makeup supporting application supports real-time operation. Desirably, when makeup support is requested on executing the application, the device communicates with the service server, where the matcher matches the user's face to the guide image to be recommended in real time; the former is represented by the user's personal image data, and the latter by the previously stored standard personal image data. In the camera area (121) of the application screen (120) in Figure 5(a), the user's actual face is initially displayed like a mirror; after the matcher completes its real-time operations, an image of the makeup guide content (A) is displayed. This makeup guide content (A) is the facial image revised by applying the personal image rules of the chosen personal image classification.

By pressing a button (129), the makeup guide content (A) can be captured; the user can then do makeup while viewing the corresponding image in a photo gallery.

Figure 5(b) illustrates a scenario in which the application screen (120) is divided into two parts. The user's face (B), collected through the camera lens, is displayed on the large screen (122) like a mirror, while the makeup guide content (122a), generated by applying the personal image rules, is displayed on a small screen. In this scenario, the user can do makeup while seeing their own face as in a mirror and, during the process, reference the makeup guide content by clicking the guide image recommended by the application to enlarge it.

Figure 5(c) shows a scenario of doing makeup by accessing a photo gallery stored in memory, rather than by viewing the user's face in real time through the camera lens.

The large screen area (123) of the application screen (120) in the user interface of Figure 5(c) shows the user's face (C) fetched from the photo gallery. Clicking the makeup button (125) changes the face (C) displayed on the screen. That is, the application collects personal image data from the user's facial images stored as image files in the device's memory, generates makeup guide content by applying the previously stored personal image rules for the chosen personal image classification to that personal image data, and displays the corresponding image (C) on the screen. Alternatively, by clicking the camera icon (127), the user can see their own face through the camera, in which case the scenarios of Figure 5(a) or 5(b) can be followed.

Makeup guide content can be stored on the device, and the application can show the makeup guide content (A) with enlargement or modification. Since the personal image rules are registered as featured elements per facial area, and the matcher matches the user's face to the guide image per facial area, the user can enlarge the makeup guide content (A) for each facial area or modify it.

Figure 5: A makeup guide content is displayed on a user's screen

Let us look at Figure 6. The application screen (130) in Figure 6 shows how makeup guide content (A) stored in the device's memory can be viewed with enlargement or modification. The dots (130a) in the main screen (131) represent featured elements for each facial area. The user can also enlarge the image via a multi-touch function.

Figure 6: The makeup guide content has featured elements per facial area, and each facial area can be viewed enlarged or modified.

In the user interface (UI) configuration for viewing an enlargement of the makeup guide content (A), the selection buttons (135a, 135b, 135c, 135d, 135e) can be buttons identifying facial areas: for instance, 1 for the eye area, 2 for the nose, 3 for the lips, 4 for the left cheek, 5 for the right cheek, 6 for the forehead (not illustrated), 7 for the chin (not illustrated), etc. When the user touches a button, the corresponding part of the makeup guide content (A) is displayed enlarged.

In the UI configuration for viewing a modification of the makeup guide content (A), the selection buttons (135a, 135b, 135c, 135d, 135e) can instead stand for different personal image classification items. For instance, the makeup guide content (A) displayed in the main screen (131) can be modified to 1 for a young impression, 2 for a sexy impression, 3 for an intellectual impression, 4 for a cool impression, 5 for a warm impression, 6 for a natural impression (one that appears as if no makeup were applied; not illustrated), 7 for a pretty impression (not illustrated), etc.

The selection buttons can also present physiognomic personal image classifications, changing the face to a physiognomic image for wealth, health, fortune, etc. They can likewise present situation-based personal image classifications, offering suitable makeup guide content with 1 for a date, 2 for an interview, 3 for a party, 4 for a professional look, 5 for a travel look, etc.

Now, referring to Figure 7, the overall process of the method is summarized.

First, the user executes the makeup supporting application installed on the device (S100). The makeup supporting application can contain various functional modules; when the module for requesting makeup support is executed, the device and the service server can begin communicating. The processor also controls the device's built-in camera module and executes a camera application so that the lens is oriented towards the user's face; the user adjusts their facial position so that the device's camera properly photographs their face.

Figure 7: Makeup supporting methods

Next, the user's personal image data is collected in real time by the camera module (S110). In another embodiment, the S110 phase can be replaced with a request for the user's image files stored in memory.

The service server then generates makeup guide content in real time by applying the previously determined or previously stored personal image rules, according to the personal image classification chosen by a selection event on the device, to the personal image data (S120). In another desirable embodiment, the previously stored standard personal image data (a pattern image) serving as the personal image rules can be superposed with the user's facial image and mapped onto it.

In the S120 phase, because the personal image data sent to the service server differs for each user, this embodiment ultimately generates makeup guide content personalized to the user. Moreover, even for the same person, the facial status can differ with time and place, and an optimal personalized solution can be provided by taking this into account.

Finally, the generated makeup guide content is displayed on the screen of the makeup supporting application (S130).
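As a compact, hedged restatement of the S100–S130 flow in Figure 7, the following Python stub walks the whole sequence end to end; every class, method, and return value is invented for illustration, and the real system runs the matcher on the service server.

```python
# End-to-end stub of the S100-S130 flow; all components are placeholders.
class Device:
    def launch_application(self):          # S100: user starts the app
        print("makeup supporting application running")

    def capture_face(self) -> bytes:       # S110: camera collects image data
        return b"<user face image>"

    def display(self, guide: str):         # S130: guide shown on the screen
        print("guide:", guide)


class Server:
    def generate_guide(self, classification: str, image: bytes) -> str:
        # S120: apply stored personal image rules to the user's image data
        return f"{classification} guide for {len(image)}-byte face image"


device, server = Device(), Server()
device.launch_application()                                    # S100
guide = server.generate_guide("Young impression",
                              device.capture_face())           # S110 + S120
device.display(guide)                                          # S130
```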

Moreover, a device executing the makeup supporting method has a built-in camera module and an installed makeup supporting application that applies personal image rules according to personal image classifications. The device contains at least one processor communicating with the camera module and at least one memory communicating with the processor. Through instructions stored in the memory as computer-readable media, the processor performs an operation that collects the user's personal image data in real time via the device's camera module, an operation that generates makeup guide content by applying the previously determined or previously stored personal image rules corresponding to the personal image classification chosen by a selection event to the personal image data, and an operation that displays the makeup guide content on the screen of the makeup supporting application.

Moreover, the makeup supporting method aims to provide an optimally personalized solution that fits the user's purpose. Therefore, the makeup guide content can be modified when the user enters information such as skin type, skin tone, skin ageing status, and skin problems, or when such information is recognized automatically.
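A small sketch of such a modification step, with wholly invented adjustment logic; it only shows where skin attributes would hook into the guide content.

```python
# Illustrative only: adjusting a guide with user-entered skin attributes.
def personalize(guide: dict, skin_type: str, skin_tone: str) -> dict:
    adjusted = dict(guide)
    if skin_type == "dry":               # invented adjustment for the example
        adjusted["foundation"] = "hydrating " + adjusted.get("foundation", "base")
    adjusted["tone_match"] = skin_tone   # record the tone to match against
    return adjusted


print(personalize({"foundation": "matte base"},
                  skin_type="dry", skin_tone="warm beige"))
```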

The makeup guide content generated at the user's request can be collected and recorded in the databases of the service server. The collected data can then be used to update the personal image rules or to learn personal image rules for all users.