{"id":6890,"date":"2026-01-28T19:27:31","date_gmt":"2026-01-28T10:27:31","guid":{"rendered":"https:\/\/new.krafton.ai\/?page_id=6890"},"modified":"2026-02-20T17:38:17","modified_gmt":"2026-02-20T08:38:17","slug":"application","status":"publish","type":"page","link":"https:\/\/krafton.ai\/en\/application\/","title":{"rendered":"Application"},"content":{"rendered":"[vc_row type=&#8221;full_width_background&#8221; full_screen_row_position=&#8221;middle&#8221; column_margin=&#8221;default&#8221; column_direction=&#8221;default&#8221; column_direction_tablet=&#8221;default&#8221; column_direction_phone=&#8221;default&#8221; bg_color=&#8221;#000000&#8243; scene_position=&#8221;center&#8221; top_padding=&#8221;5%&#8221; constrain_group_1=&#8221;yes&#8221; bottom_padding=&#8221;5%&#8221; constrain_group_2=&#8221;yes&#8221; constrain_group_8=&#8221;yes&#8221; text_color=&#8221;dark&#8221; text_align=&#8221;left&#8221; row_border_radius=&#8221;none&#8221; row_border_radius_applies=&#8221;bg&#8221; row_position_desktop=&#8221;default&#8221; row_position_tablet=&#8221;inherit&#8221; row_position_phone=&#8221;inherit&#8221; overflow=&#8221;visible&#8221; overlay_strength=&#8221;0.3&#8243; gradient_direction=&#8221;left_to_right&#8221; shape_divider_position=&#8221;bottom&#8221; bg_image_animation=&#8221;none&#8221; gradient_type=&#8221;default&#8221; shape_type=&#8221;&#8221;][vc_column column_padding=&#8221;padding-2-percent&#8221; column_padding_tablet=&#8221;inherit&#8221; column_padding_phone=&#8221;inherit&#8221; column_padding_position=&#8221;all&#8221; column_element_direction_desktop=&#8221;default&#8221; column_element_spacing=&#8221;default&#8221; desktop_text_alignment=&#8221;default&#8221; tablet_text_alignment=&#8221;default&#8221; phone_text_alignment=&#8221;default&#8221; background_color_opacity=&#8221;1&#8243; background_hover_color_opacity=&#8221;1&#8243; column_backdrop_filter=&#8221;none&#8221; font_color=&#8221;#FFFFFF&#8221; 
column_shadow=&#8221;none&#8221; column_border_radius=&#8221;none&#8221; column_link_target=&#8221;_self&#8221; column_position=&#8221;default&#8221; gradient_direction=&#8221;left_to_right&#8221; overlay_strength=&#8221;0.3&#8243; width=&#8221;1\/1&#8243; tablet_width_inherit=&#8221;default&#8221; animation_type=&#8221;default&#8221; bg_image_animation=&#8221;none&#8221; border_type=&#8221;simple&#8221; column_border_width=&#8221;none&#8221; column_border_style=&#8221;solid&#8221; column_padding_type=&#8221;default&#8221; content_layout=&#8221;default&#8221; gradient_type=&#8221;default&#8221;][nectar_responsive_text inherited_font_style=&#8221;default&#8221; text_color=&#8221;#FFFFFF&#8221; font_size_desktop=&#8221;48&#8243; font_line_height=&#8221;1.4&#8243; text_direction=&#8221;default&#8221; font_size_phone=&#8221;30&#8243;]<span style=\"font-family: pretendard-medium;\">Application<\/span>[\/nectar_responsive_text][divider line_type=&#8221;Full Width Line&#8221; line_thickness=&#8221;1&#8243; divider_color=&#8221;extra-color-1&#8243; divider_opacity=&#8221;56&#8243; custom_height=&#8221;-30&#8243;][divider line_type=&#8221;No Line&#8221; custom_height=&#8221;50&#8243;][nectar_responsive_text inherited_font_style=&#8221;default&#8221; font_size_desktop=&#8221;15&#8243; font_line_height=&#8221;1.4&#8243; text_direction=&#8221;default&#8221;]We present examples of applying AI ethics to the research and services of KRAFTON AI.<br \/>\n[\/nectar_responsive_text][divider line_type=&#8221;No Line&#8221; custom_height=&#8221;50&#8243;][toggles style=&#8221;minimal&#8221; accordion=&#8221;true&#8221; accordion_starting_functionality=&#8221;default&#8221;][toggle color=&#8221;Extra-Color-3&#8243; heading_tag=&#8221;default&#8221; heading_tag_functionality=&#8221;default&#8221; title=&#8221;Case 1: Generating stylized facial images without gender or racial bias&#8221;][vc_row_inner column_margin=&#8221;default&#8221; column_direction=&#8221;default&#8221; 
column_direction_tablet=&#8221;default&#8221; column_direction_phone=&#8221;default&#8221; top_padding=&#8221;2%&#8221; left_padding_desktop=&#8221;5%&#8221; constrain_group_2=&#8221;yes&#8221; right_padding_desktop=&#8221;5%&#8221; text_align=&#8221;left&#8221; row_position=&#8221;default&#8221; row_position_tablet=&#8221;inherit&#8221; row_position_phone=&#8221;inherit&#8221; overflow=&#8221;visible&#8221; pointer_events=&#8221;all&#8221;][vc_column_inner column_padding=&#8221;padding-3-percent&#8221; column_padding_tablet=&#8221;inherit&#8221; column_padding_phone=&#8221;inherit&#8221; column_padding_position=&#8221;all&#8221; column_element_direction_desktop=&#8221;default&#8221; column_element_spacing=&#8221;default&#8221; desktop_text_alignment=&#8221;default&#8221; tablet_text_alignment=&#8221;default&#8221; phone_text_alignment=&#8221;default&#8221; background_color=&#8221;#1C1C1C&#8221; background_color_opacity=&#8221;1&#8243; background_hover_color_opacity=&#8221;1&#8243; column_backdrop_filter=&#8221;none&#8221; column_shadow=&#8221;none&#8221; column_border_radius=&#8221;10px&#8221; column_link_target=&#8221;_self&#8221; overflow=&#8221;visible&#8221; gradient_direction=&#8221;left_to_right&#8221; overlay_strength=&#8221;0.3&#8243; width=&#8221;1\/1&#8243; tablet_width_inherit=&#8221;default&#8221; animation_type=&#8221;default&#8221; bg_image_animation=&#8221;none&#8221; border_type=&#8221;simple&#8221; column_border_width=&#8221;none&#8221; column_border_style=&#8221;solid&#8221; column_padding_type=&#8221;default&#8221; content_layout=&#8221;default&#8221; gradient_type=&#8221;default&#8221;][nectar_responsive_text inherited_font_style=&#8221;default&#8221; font_size_desktop=&#8221;15&#8243; font_line_height=&#8221;1.4&#8243; text_direction=&#8221;default&#8221;]<strong>Background: Bias in generative image models<\/strong>\u00a0Training generative visual models on biased data or datasets lacking diversity can make the models struggle to accurately 
represent the full spectrum of human diversity, particularly race and gender. The imbalance in training data is a well-known issue from prior research (Reference: Maluleke, Vongani H., et al. \u201cStudying Bias in GANs through the Lens of Race.\u201d European Conference on Computer Vision (2022)) that often leads to models disproportionately generating outputs that reflect the majority group while omitting or inadequately representing minority groups. Most facial generation models rely on the commonly used training dataset FFHQ, which exhibits extreme bias: 69.2% of the dataset comprises Caucasian individuals, while only 4.2% represents Black individuals. This bias is further exacerbated during inference when techniques to enhance image quality, such as truncation, are applied.[\/nectar_responsive_text][image_with_animation image_url=&#8221;6169&#8243; image_size=&#8221;full&#8221; max_width=&#8221;100%&#8221; max_width_mobile=&#8221;default&#8221; animation_type=&#8221;entrance&#8221; animation=&#8221;None&#8221; animation_movement_type=&#8221;transform_y&#8221; hover_animation=&#8221;none&#8221; alignment=&#8221;center&#8221; border_radius=&#8221;none&#8221; box_shadow=&#8221;none&#8221; image_loading=&#8221;default&#8221;][divider line_type=&#8221;No Line&#8221; custom_height=&#8221;40&#8243;][nectar_responsive_text inherited_font_style=&#8221;default&#8221; font_size_desktop=&#8221;15&#8243; font_line_height=&#8221;1.4&#8243; text_direction=&#8221;default&#8221;]\n<div class=\"txt-box\">This study was conducted to eliminate bias in 2D image stylization models intended for an avatar generation API. The results were reviewed for application in the project which allows users to freely express their identity in the metaverse by enabling them to create stylized images corresponding to their desired gender and race. This nature of the project made it necessary to examine bias issues from a broader perspective compared to existing generative models. 
Thus, the study aimed to identify and address the following issues:<\/div>\n<div class=\"txt-box\">\n<ul>\n<li>1. Image quality: Degraded aesthetic quality in the style transfer results for minority groups<\/li>\n<li>2. Identity preservation: The output of style transfer for minority groups resembling the input photo less closely<\/li>\n<li>3.\u00a0Degree of style transfer: Style transfer between minority and majority groups converging toward the characteristics of the majority group<\/li>\n<\/ul>\n<\/div>\n<div class=\"txt-box\"><strong>Problem Definition<\/strong>\u00a0The Avatar DL Team researched a fair model for generating stylized facial images without bias to address the abovementioned issues. (Definition of a fair model: A model that maintains the unique identity of the input image in the output image regardless of gender\/race combinations and produces an output image where the quality of the result (stylized image) shows no statistically significant differences from the input image.)<\/div>\n<div class=\"txt-box\"><strong>Application<\/strong>\u00a0The primary cause of bias in facial image generation models is the bias in the aforementioned FFHQ dataset. This dataset consists of facial data randomly collected online, presenting legal and ethical issues when used to train models for commercial products. To address these concerns, the team utilized generative models free from licensing issues to create a diverse range of synthetic facial images. 
These were then used to build a large-scale training dataset with reduced gender and racial bias.<\/div>\n[\/nectar_responsive_text][image_with_animation image_url=&#8221;6170&#8243; image_size=&#8221;full&#8221; max_width=&#8221;100%&#8221; max_width_mobile=&#8221;default&#8221; animation_type=&#8221;entrance&#8221; animation=&#8221;None&#8221; animation_movement_type=&#8221;transform_y&#8221; hover_animation=&#8221;none&#8221; alignment=&#8221;center&#8221; border_radius=&#8221;none&#8221; box_shadow=&#8221;none&#8221; image_loading=&#8221;default&#8221;][nectar_responsive_text inherited_font_style=&#8221;default&#8221; font_size_desktop=&#8221;15&#8243; font_line_height=&#8221;1.4&#8243; text_direction=&#8221;default&#8221;]Based on the defined problem, the following criteria were set as benchmarks for evaluating the model. It was observed that using our proprietary dataset for training the model not only reduced the differences in metrics across input\/output demographics but also improved overall identity preservation and the quality metrics of the final stylized images. Additionally, to minimize evaluator bias in these metrics, the results from multiple evaluators were standardized through normalization. 
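The evaluator-score normalization mentioned here can be sketched as per-rater z-score standardization. This is an illustrative assumption; the exact normalization procedure used in the study is not specified, and the rater names and scores below are hypothetical.

```python
import numpy as np

def normalize_scores(scores_by_evaluator):
    """Standardize each evaluator's ratings to zero mean and unit variance,
    so systematically harsh or lenient raters do not skew the aggregate."""
    normalized = {}
    for evaluator, scores in scores_by_evaluator.items():
        s = np.asarray(scores, dtype=float)
        std = s.std()
        # Guard against a constant rater (zero variance).
        normalized[evaluator] = (s - s.mean()) / std if std > 0 else s - s.mean()
    return normalized

# Hypothetical ratings of the same 4 stylized images by 2 evaluators:
# rater_b is harsher overall, but ranks the images identically.
raw = {"rater_a": [3, 4, 5, 4], "rater_b": [1, 2, 3, 2]}
z = normalize_scores(raw)
# After standardization both raters' scores fall on the same scale.
```

After this step, the two raters above produce identical normalized score vectors, so averaging across evaluators no longer rewards or penalizes images merely for being judged by a lenient or harsh rater.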
The study presents a novel approach to addressing the issue of biased datasets, aiming to create fairer AI models that allow people from diverse backgrounds to freely express the full spectrum of race and gender.[\/nectar_responsive_text][image_with_animation image_url=&#8221;6171&#8243; image_size=&#8221;full&#8221; max_width=&#8221;100%&#8221; max_width_mobile=&#8221;default&#8221; animation_type=&#8221;entrance&#8221; animation=&#8221;None&#8221; animation_movement_type=&#8221;transform_y&#8221; hover_animation=&#8221;none&#8221; alignment=&#8221;center&#8221; border_radius=&#8221;none&#8221; box_shadow=&#8221;none&#8221; image_loading=&#8221;default&#8221;][\/vc_column_inner][\/vc_row_inner][\/toggle][toggle color=&#8221;Extra-Color-3&#8243; heading_tag=&#8221;default&#8221; heading_tag_functionality=&#8221;change_html_tag&#8221; title=&#8221;Case 2: Toxic Filtering&#8221;][vc_row_inner column_margin=&#8221;default&#8221; column_direction=&#8221;default&#8221; column_direction_tablet=&#8221;default&#8221; column_direction_phone=&#8221;default&#8221; top_padding=&#8221;3%&#8221; left_padding_desktop=&#8221;5%&#8221; constrain_group_2=&#8221;yes&#8221; right_padding_desktop=&#8221;5%&#8221; text_align=&#8221;left&#8221; row_position=&#8221;default&#8221; row_position_tablet=&#8221;inherit&#8221; row_position_phone=&#8221;inherit&#8221; overflow=&#8221;visible&#8221; pointer_events=&#8221;all&#8221;][vc_column_inner column_padding=&#8221;padding-5-percent&#8221; column_padding_tablet=&#8221;inherit&#8221; column_padding_phone=&#8221;inherit&#8221; column_padding_position=&#8221;all&#8221; column_element_direction_desktop=&#8221;default&#8221; column_element_spacing=&#8221;default&#8221; desktop_text_alignment=&#8221;default&#8221; tablet_text_alignment=&#8221;default&#8221; phone_text_alignment=&#8221;default&#8221; background_color=&#8221;#1C1C1C&#8221; background_color_opacity=&#8221;1&#8243; background_hover_color_opacity=&#8221;1&#8243; 
column_backdrop_filter=&#8221;none&#8221; column_shadow=&#8221;none&#8221; column_border_radius=&#8221;10px&#8221; column_link_target=&#8221;_self&#8221; overflow=&#8221;visible&#8221; gradient_direction=&#8221;left_to_right&#8221; overlay_strength=&#8221;0.3&#8243; width=&#8221;1\/1&#8243; tablet_width_inherit=&#8221;default&#8221; animation_type=&#8221;default&#8221; bg_image_animation=&#8221;none&#8221; border_type=&#8221;simple&#8221; column_border_width=&#8221;none&#8221; column_border_style=&#8221;solid&#8221; column_padding_type=&#8221;default&#8221; content_layout=&#8221;default&#8221; gradient_type=&#8221;default&#8221;][nectar_responsive_text inherited_font_style=&#8221;default&#8221; font_size_desktop=&#8221;15&#8243; font_line_height=&#8221;1.4&#8243; text_direction=&#8221;default&#8221;]\n<div class=\"txt-box\">One of the most significant issues that can arise when operating game chats or other chatbot services is potential exposure to inappropriate expressions generated by users or by the chatbots themselves. These inappropriate expressions can range from simple profanity to hate speech and discrimination related to politics, religion, and more, with meanings and forms that are diverse and constantly changing. The technology used to automatically identify and filter out such inappropriate sentences during conversations is known as \u201ctoxic filtering.\u201d<\/div>\n<div><\/div>\n<div class=\"txt-box\">Toxic filtering can preemptively address potential legal and ethical issues arising from inappropriate expressions. KRAFTON AI has experience in developing and utilizing a deep learning-based toxic filtering model. Below is a summary of the model development process and how the model was applied in data processing:<\/div>\n<div><\/div>\n<div class=\"txt-box\"><b>Training Data Construction<\/b>\u00a0The data was assembled using publicly available hate speech datasets and additional similar datasets constructed internally. 
Each sentence was tagged according to predefined criteria for hate speech classification as shown below, and this tagged data was then used as training data. Guidelines were developed on how to tag each sentence based on these criteria, and actual data labeling was conducted accordingly.<\/div>\n[\/nectar_responsive_text][nectar_responsive_text inherited_font_style=&#8221;default&#8221; font_size_desktop=&#8221;15&#8243; font_line_height=&#8221;1.4&#8243; text_direction=&#8221;default&#8221;]<strong>Classification Criteria<\/strong>[\/nectar_responsive_text][image_with_animation image_url=&#8221;7613&#8243; image_size=&#8221;full&#8221; max_width=&#8221;100%&#8221; max_width_mobile=&#8221;default&#8221; animation_type=&#8221;entrance&#8221; animation=&#8221;None&#8221; animation_movement_type=&#8221;transform_y&#8221; hover_animation=&#8221;none&#8221; alignment=&#8221;center&#8221; border_radius=&#8221;none&#8221; box_shadow=&#8221;none&#8221; image_loading=&#8221;default&#8221;][divider line_type=&#8221;No Line&#8221; custom_height=&#8221;40&#8243;][nectar_responsive_text inherited_font_style=&#8221;default&#8221; font_size_desktop=&#8221;15&#8243; font_line_height=&#8221;1.4&#8243; text_direction=&#8221;default&#8221;]<strong>Toxic Filtering Model Training<\/strong><br \/>\nThe toxic filtering model was trained using a pre-trained language model (PLM). Initially, separate models were developed for a variety of PLMs to evaluate their performance, and then the best-performing language model was selected for further development. Tagging continued based on the criteria established above to enrich the dataset with as much data as possible. After training, precise evaluation was crucial. For this purpose, separate evaluation sets for each case were constructed to assess the model\u2019s performance. The effectiveness of the model was examined for each type of profanity, identifying which types the model handled well and which it did not. 
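The per-type evaluation described here can be sketched as follows. The categories, evaluation sentences, token list, and the `predict` stand-in for the trained classifier are all hypothetical; the point is only the mechanic of scoring each toxicity category separately.

```python
from collections import defaultdict

def per_category_accuracy(samples, predict):
    """Compute accuracy separately for each toxicity category, so weak
    spots can be targeted with additional data collection."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for text, label, category in samples:
        total[category] += 1
        if predict(text) == label:
            correct[category] += 1
    return {c: correct[c] / total[c] for c in total}

# Hypothetical toy evaluation set: (sentence, is_toxic, category).
eval_set = [
    ("you are an idiot", 1, "profanity"),
    ("have a nice day", 0, "profanity"),
    ("people like them ruin everything", 1, "hate_speech"),
    ("the weather is great", 0, "hate_speech"),
]

# Stand-in for the trained classifier: flags a few known tokens.
toxic_tokens = {"idiot", "ruin"}
predict = lambda text: int(any(t in text for t in toxic_tokens))

scores = per_category_accuracy(eval_set, predict)
# -> {"profanity": 1.0, "hate_speech": 1.0} on this toy set
```

In practice the prediction function would be the fine-tuned PLM classifier, and each category's accuracy would be computed on its dedicated evaluation set.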
The types for which performance was relatively low were targeted for additional data collection to enhance the model\u2019s accuracy and robustness in handling a broader spectrum of offensive content.<\/p>\n<p><strong>Real-world Application<\/strong><br \/>\nThe developed toxic filtering model was applied to conversation data so the data could be processed for use in building a chatbot. When language models are trained directly on data containing inappropriate expressions, they may unintentionally generate such expressions in actual conversations. Therefore, it was necessary to remove these inappropriate expressions from the training data first. The described model was employed to this end. Large-scale data had to be processed quickly, so we implemented distributed processing and inference optimization.[\/nectar_responsive_text][\/vc_column_inner][\/vc_row_inner][\/toggle][toggle color=&#8221;Extra-Color-3&#8243; heading_tag=&#8221;default&#8221; heading_tag_functionality=&#8221;change_html_tag&#8221; title=&#8221;Case 3: PII Filtering&#8221;][vc_row_inner column_margin=&#8221;default&#8221; column_direction=&#8221;default&#8221; column_direction_tablet=&#8221;default&#8221; column_direction_phone=&#8221;default&#8221; top_padding=&#8221;3%&#8221; left_padding_desktop=&#8221;5%&#8221; constrain_group_2=&#8221;yes&#8221; right_padding_desktop=&#8221;5%&#8221; text_align=&#8221;left&#8221; row_position=&#8221;default&#8221; row_position_tablet=&#8221;inherit&#8221; row_position_phone=&#8221;inherit&#8221; overflow=&#8221;visible&#8221; pointer_events=&#8221;all&#8221;][vc_column_inner column_padding=&#8221;padding-3-percent&#8221; column_padding_tablet=&#8221;inherit&#8221; column_padding_phone=&#8221;inherit&#8221; column_padding_position=&#8221;all&#8221; column_element_direction_desktop=&#8221;default&#8221; column_element_spacing=&#8221;default&#8221; desktop_text_alignment=&#8221;default&#8221; tablet_text_alignment=&#8221;default&#8221; 
phone_text_alignment=&#8221;default&#8221; background_color=&#8221;#1C1C1C&#8221; background_color_opacity=&#8221;1&#8243; background_hover_color_opacity=&#8221;1&#8243; column_backdrop_filter=&#8221;none&#8221; column_shadow=&#8221;none&#8221; column_border_radius=&#8221;10px&#8221; column_link_target=&#8221;_self&#8221; overflow=&#8221;visible&#8221; gradient_direction=&#8221;left_to_right&#8221; overlay_strength=&#8221;0.3&#8243; width=&#8221;1\/1&#8243; tablet_width_inherit=&#8221;default&#8221; animation_type=&#8221;default&#8221; bg_image_animation=&#8221;none&#8221; border_type=&#8221;simple&#8221; column_border_width=&#8221;none&#8221; column_border_style=&#8221;solid&#8221; column_padding_type=&#8221;default&#8221; content_layout=&#8221;default&#8221; gradient_type=&#8221;default&#8221;][nectar_responsive_text inherited_font_style=&#8221;default&#8221; font_size_desktop=&#8221;15&#8243; font_line_height=&#8221;1.4&#8243; text_direction=&#8221;default&#8221;]PII (Personally Identifiable Information) refers to information that can directly or indirectly identify an individual. KRAFTON AI is actively engaged in PII filtering across various projects to secure and utilize data without privacy risks. 
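As a toy illustration of pattern-based anonymization of the kind described in the Process section below: the two regexes here are hypothetical stand-ins for the 20+ internally defined patterns, and the production pipeline uses a data loss prevention API rather than hand-written rules.

```python
import re

# Hypothetical subset of detection patterns; the real pipeline defines
# 20+ patterns via a data loss prevention (DLP) API.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{2,3}[- ]\d{3,4}[- ]\d{4}\b"),
}

def anonymize(text):
    """Replace each detected PII span with a predefined placeholder token."""
    for token, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{token}>", text)
    return text

sample = "Contact me at jane.doe@example.com or 010-1234-5678."
result = anonymize(sample)
# -> "Contact me at <EMAIL> or <PHONE>."
```

Replacing spans with fixed tokens (rather than deleting them) keeps sentence structure intact, which matters when the anonymized text is later used as training data.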
PII filtering is a crucial task aimed at minimizing the risk of personal data breaches and maintaining smooth development and high service quality.[\/nectar_responsive_text][nectar_responsive_text inherited_font_style=&#8221;default&#8221; font_size_desktop=&#8221;15&#8243; font_line_height=&#8221;1.4&#8243; text_direction=&#8221;default&#8221;]<strong>Process<\/strong>[\/nectar_responsive_text][image_with_animation image_url=&#8221;7614&#8243; image_size=&#8221;full&#8221; max_width=&#8221;100%&#8221; max_width_mobile=&#8221;default&#8221; animation_type=&#8221;entrance&#8221; animation=&#8221;None&#8221; animation_movement_type=&#8221;transform_y&#8221; hover_animation=&#8221;none&#8221; alignment=&#8221;center&#8221; border_radius=&#8221;none&#8221; box_shadow=&#8221;none&#8221; image_loading=&#8221;default&#8221;][divider line_type=&#8221;No Line&#8221; custom_height=&#8221;40&#8243;][nectar_responsive_text inherited_font_style=&#8221;default&#8221; font_size_desktop=&#8221;15&#8243; font_line_height=&#8221;1.4&#8243; text_direction=&#8221;default&#8221;]1. Risk analysis: This step involves a review of copyright issues, source verification, inclusion of personal information, and ethics.<br \/>\n2. Anonymization: When this step is applied, an automation tool filters the data. A data loss prevention (DLP) API is used to detect sensitive information, based on 20+ defined patterns that can help identify an individual, and to convert it to internally predefined tokens.<br \/>\n3. Re-verification: The filtered data is cross-checked to verify whether it is fit for deployment.<br \/>\n4. Continuous monitoring: Continuous monitoring is employed to enhance the visibility of risks.<br \/>\n5. Post-management: The data is stored in an access-controlled database with minimal personnel involved.<\/p>\n<p><strong>Additional Measures:<\/strong><br \/>\n1. 
When acquiring data externally, we analyze risks using the same criteria (such as verifying the source of open-source data, checking for copyright issues, assessing whether personal information is included, and reviewing ethical concerns). Additionally, we determine whether the data subject\u2019s consent is required or if it is unnecessary.<br \/>\n2. During data collection, we ensure that unnecessary information is not collected and provide guidance to anonymize personal information during the data storage process.<br \/>\n3. For sentences generated by generative models, such as large language models, we perform PII filtering or regenerate outputs to ensure that there are no issues.<br \/>\n4. Data containing personal information is stored in a separate storage accessible only by the data privacy manager, with access restrictions in place. All access to the restricted storage is logged.<\/p>\n<p>In addition to the measures described above, KRAFTON AI has continuously reviewed and improved its current data processing systems based on the expertise of its internal privacy team and regularly ensures compliance with relevant regulations and industry standards.[\/nectar_responsive_text][\/vc_column_inner][\/vc_row_inner][\/toggle][\/toggles][divider line_type=&#8221;No Line&#8221; custom_height=&#8221;100&#8243;][\/vc_column][\/vc_row]\n","protected":false},"excerpt":{"rendered":"<p>[vc_row type=&#8221;full_width_background&#8221; full_screen_row_position=&#8221;middle&#8221; column_margin=&#8221;default&#8221; column_direction=&#8221;default&#8221; column_direction_tablet=&#8221;default&#8221; column_direction_phone=&#8221;default&#8221; bg_color=&#8221;#000000&#8243; scene_position=&#8221;center&#8221; top_padding=&#8221;5%&#8221; constrain_group_1=&#8221;yes&#8221; bottom_padding=&#8221;5%&#8221; constrain_group_2=&#8221;yes&#8221; constrain_group_8=&#8221;yes&#8221; text_color=&#8221;dark&#8221; text_align=&#8221;left&#8221; row_border_radius=&#8221;none&#8221; 
row_border_radius_applies=&#8221;bg&#8221; row_position_desktop=&#8221;default&#8221; row_position_tablet=&#8221;inherit&#8221; row_position_phone=&#8221;inherit&#8221; overflow=&#8221;visible&#8221; overlay_strength=&#8221;0.3&#8243; gradient_direction=&#8221;left_to_right&#8221; shape_divider_position=&#8221;bottom&#8221; bg_image_animation=&#8221;none&#8221; gradient_type=&#8221;default&#8221; shape_type=&#8221;&#8221;][vc_column column_padding=&#8221;padding-2-percent&#8221; column_padding_tablet=&#8221;inherit&#8221;&#8230;<\/p>\n","protected":false},"author":1,"featured_media":0,"parent":0,"menu_order":2,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-6890","page","type-page","status-publish"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.3 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Application - Krafton AI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/krafton.ai\/en\/application\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Application - Krafton AI\" \/>\n<meta property=\"og:description\" content=\"[vc_row type=&#8221;full_width_background&#8221; full_screen_row_position=&#8221;middle&#8221; column_margin=&#8221;default&#8221; column_direction=&#8221;default&#8221; column_direction_tablet=&#8221;default&#8221; column_direction_phone=&#8221;default&#8221; bg_color=&#8221;#000000&#8243; scene_position=&#8221;center&#8221; top_padding=&#8221;5%&#8221; constrain_group_1=&#8221;yes&#8221; bottom_padding=&#8221;5%&#8221; constrain_group_2=&#8221;yes&#8221; constrain_group_8=&#8221;yes&#8221; text_color=&#8221;dark&#8221; text_align=&#8221;left&#8221; row_border_radius=&#8221;none&#8221; row_border_radius_applies=&#8221;bg&#8221; 
row_position_desktop=&#8221;default&#8221; row_position_tablet=&#8221;inherit&#8221; row_position_phone=&#8221;inherit&#8221; overflow=&#8221;visible&#8221; overlay_strength=&#8221;0.3&#8243; gradient_direction=&#8221;left_to_right&#8221; shape_divider_position=&#8221;bottom&#8221; bg_image_animation=&#8221;none&#8221; gradient_type=&#8221;default&#8221; shape_type=&#8221;&#8221;][vc_column column_padding=&#8221;padding-2-percent&#8221; column_padding_tablet=&#8221;inherit&#8221;...\" \/>\n<meta property=\"og:url\" content=\"https:\/\/krafton.ai\/en\/application\/\" \/>\n<meta property=\"og:site_name\" content=\"Krafton AI\" \/>\n<meta property=\"article:modified_time\" content=\"2026-02-20T08:38:17+00:00\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data1\" content=\"12 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/krafton.ai\\\/en\\\/application\\\/\",\"url\":\"https:\\\/\\\/krafton.ai\\\/en\\\/application\\\/\",\"name\":\"Application - Krafton 
AI\",\"isPartOf\":{\"@id\":\"http:\\\/\\\/172.31.17.166\\\/#website\"},\"datePublished\":\"2026-01-28T10:27:31+00:00\",\"dateModified\":\"2026-02-20T08:38:17+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/krafton.ai\\\/en\\\/application\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/krafton.ai\\\/en\\\/application\\\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/krafton.ai\\\/en\\\/application\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"\ud648\",\"item\":\"https:\\\/\\\/www.krafton.ai\\\/en\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Application\"}]},{\"@type\":\"WebSite\",\"@id\":\"http:\\\/\\\/172.31.17.166\\\/#website\",\"url\":\"http:\\\/\\\/172.31.17.166\\\/\",\"name\":\"Krafton AI\",\"description\":\"\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"http:\\\/\\\/172.31.17.166\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"Application - Krafton AI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/krafton.ai\/en\/application\/","og_locale":"en_US","og_type":"article","og_title":"Application - Krafton AI","og_description":"[vc_row type=&#8221;full_width_background&#8221; full_screen_row_position=&#8221;middle&#8221; column_margin=&#8221;default&#8221; column_direction=&#8221;default&#8221; column_direction_tablet=&#8221;default&#8221; column_direction_phone=&#8221;default&#8221; bg_color=&#8221;#000000&#8243; scene_position=&#8221;center&#8221; top_padding=&#8221;5%&#8221; constrain_group_1=&#8221;yes&#8221; bottom_padding=&#8221;5%&#8221; constrain_group_2=&#8221;yes&#8221; constrain_group_8=&#8221;yes&#8221; text_color=&#8221;dark&#8221; text_align=&#8221;left&#8221; row_border_radius=&#8221;none&#8221; row_border_radius_applies=&#8221;bg&#8221; row_position_desktop=&#8221;default&#8221; row_position_tablet=&#8221;inherit&#8221; row_position_phone=&#8221;inherit&#8221; overflow=&#8221;visible&#8221; overlay_strength=&#8221;0.3&#8243; gradient_direction=&#8221;left_to_right&#8221; shape_divider_position=&#8221;bottom&#8221; bg_image_animation=&#8221;none&#8221; gradient_type=&#8221;default&#8221; shape_type=&#8221;&#8221;][vc_column column_padding=&#8221;padding-2-percent&#8221; column_padding_tablet=&#8221;inherit&#8221;...","og_url":"https:\/\/krafton.ai\/en\/application\/","og_site_name":"Krafton AI","article_modified_time":"2026-02-20T08:38:17+00:00","twitter_card":"summary_large_image","twitter_misc":{"Est. 
reading time":"12 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/krafton.ai\/en\/application\/","url":"https:\/\/krafton.ai\/en\/application\/","name":"Application - Krafton AI","isPartOf":{"@id":"http:\/\/172.31.17.166\/#website"},"datePublished":"2026-01-28T10:27:31+00:00","dateModified":"2026-02-20T08:38:17+00:00","breadcrumb":{"@id":"https:\/\/krafton.ai\/en\/application\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/krafton.ai\/en\/application\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/krafton.ai\/en\/application\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"\ud648","item":"https:\/\/www.krafton.ai\/en\/"},{"@type":"ListItem","position":2,"name":"Application"}]},{"@type":"WebSite","@id":"http:\/\/172.31.17.166\/#website","url":"http:\/\/172.31.17.166\/","name":"Krafton AI","description":"","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"http:\/\/172.31.17.166\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"}]}},"_links":{"self":[{"href":"https:\/\/krafton.ai\/en\/wp-json\/wp\/v2\/pages\/6890","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/krafton.ai\/en\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/krafton.ai\/en\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/krafton.ai\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/krafton.ai\/en\/wp-json\/wp\/v2\/comments?post=6890"}],"version-history":[{"count":5,"href":"https:\/\/krafton.ai\/en\/wp-json\/wp\/v2\/pages\/6890\/revisions"}],"predecessor-version":[{"id":7615,"href":"https:\/\/krafton.ai\/en\/wp-json\/wp\/v2\/pages\/6890\/revisions\/7615"}],"wp:attachment":[{"href":"https:\/\/krafton.ai\/en\/wp-json\/wp\/v2\/media?parent=6890"}],"curi
es":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}