Using generative AI guided by existing datasets and emotion taxonomies, we generated 847 images and their corresponding descriptions across 12 discrete emotions, then iteratively refined them with local cultural experts. We validated the library through six studies (total n = 2,470; 58 countries). Participants rated five types of images: (1) images from existing affective databases, (2) AI-generated images without cultural adjustments, (3) AI-generated images adjusted to specific cultural contexts, (4) AI-generated images adjusted by sex (male, female), and (5) AI-generated images adjusted by age group (childhood, adulthood, older age). The AI-generated images were as effective at eliciting affective responses as images from existing affective databases. Culturally adjusted images were slightly more effective than unadjusted counterparts in targeting intended emotions. Sex- and age-adjusted variants produced responses comparable to their base images, demonstrating controllability without loss of affective impact.
The paper describing the methodology and motivation in detail: https://osf.io/v8dkm/files/8t34y
The dataset is stored on Open Science Framework Repository: https://doi.org/10.17605/OSF.IO/V8DKM
LAI‑GAI Research Team (AMU + international collaborators)
Preparation of this article was supported by research grants from the National Science Centre, Poland (UMO-2020/39/B/HS6/00685; 2023/49/B/HS5/02139) and the Excellence Initiative—Research University (ID-UB) program at Adam Mickiewicz University, Poznań (140/04/POB5/0001, 151/12/POB5/0005, 174/12/POB5/0001, 177/02/UAM/0012, 181/13/SNS/0003, 198/12/POB5/0007). The funders had no role in study design, data collection, analysis, publishing decisions, or manuscript preparation.
LAI-GAI is the unified name for this project—methods, images, annotations, and app—released here as a single, versioned resource.
Category | Data
--- | ---
Number of Instances | 847 images
Target Emotions | 12 (amusement, awe, anger, attachment love, craving, disgust, excitement, fear, joy, neutral, nurturant love, sadness)
Cultural Contexts | 6 (African, Arabic, Asian, Indian, South/Central American, European/North American)
Sex Variants | Matched male–female pairs for all applicable emotion categories (excluding Craving and Awe, which do not depict humans)
Age Variants | Matched triplets (minors, middle-aged adults, older adults) for all emotion categories except Nurturant Love, where depicting minors is inherent to the category; for Nurturant Love we generated matched pairs (middle-aged adults, older adults) instead of triplets
Validation Participants | 2,470 across 58 countries (6 studies)
Human Annotations | 12 discrete emotions (listed above) + six dimensional affect measures (positive, negative, arousal, calmness, approach, avoidance)
Each data point is an AI-generated, photorealistic image with metadata (prompt, target emotion). Each image was rated by humans on 18 Likert-type scales (1–7): 12 discrete emotions (amusement, awe, anger, attachment love, craving, disgust, excitement, fear, joy, neutral, nurturant love, sadness) and 6 dimensional measures (positive, negative, arousal, calmness, approach, avoidance). For full details on the validation procedure and methodology, see our paper: [https://osf.io/v8dkm/files/8t34y].
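Per-image aggregates (as in the `image_emotion_means_*.csv` files) can be reproduced from trial-level ratings. A minimal pure-Python sketch with toy data; the field names follow the record layout documented below, but the values are illustrative only:

```python
from statistics import mean, stdev

# Trial-level ratings: one dict per (participant, image) pair, using the
# 1-7 Likert fields described above. Toy data for illustration.
trials = [
    {"Image_name": "amusement_new_10.jpg", "Amusement": 6, "Joy": 5},
    {"Image_name": "amusement_new_10.jpg", "Amusement": 5, "Joy": 6},
    {"Image_name": "amusement_new_10.jpg", "Amusement": 7, "Joy": 4},
]

def aggregate(trials, scales):
    """Per-image mean/SD for each rating scale, plus rater count n."""
    by_image = {}
    for t in trials:
        by_image.setdefault(t["Image_name"], []).append(t)
    out = {}
    for name, rows in by_image.items():
        rec = {"n": len(rows)}
        for s in scales:
            vals = [r[s] for r in rows]
            rec[f"{s.lower()}_mean"] = mean(vals)
            rec[f"{s.lower()}_std"] = stdev(vals) if len(vals) > 1 else 0.0
        out[name] = rec
    return out

agg = aggregate(trials, ["Amusement", "Joy"])
print(agg["amusement_new_10.jpg"]["amusement_mean"])  # 6.0
```

In the released files the same aggregation runs over all 18 scales; treat the project notebooks as the authoritative implementation.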
Explorer app: We provide (or plan to provide) an online browser to filter and preview images by emotion, culture, sex/age variants, and success index. Access: [https://www.affectdatabases.amu.edu.pl/].
Note: The current/updated link is always listed on the OSF project page: [https://doi.org/10.17605/OSF.IO/V8DKM].
Creation Pipeline:
LAI-GAI size: 7.21 GB
Image name convention:
| Code | Meaning |
|---|---|
| euna | Europe / North America |
| af | Africa |
| as | Asia (pan-Asian) |
| ind | India |
| arab | Arabic-speaking regions |
| sa | Latin America |
| Code | Meaning |
|---|---|
| sex1 | Female (depicted sex) |
| sex2 | Male (depicted sex) |
Note. We use sex-adjusted visual depictions (male/female). These codes do not capture gender identity.
| Code | Meaning |
|---|---|
| age1 | Minors (childhood) |
| age2 | Middle-aged adults |
| age3 | Older adults |
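The codes in the three tables above can be parsed out of filenames programmatically. A sketch, assuming codes appear as underscore-separated tokens; the example filename is hypothetical:

```python
import re

# Lookup tables taken from the naming-convention tables above.
CULTURE = {"euna": "Europe / North America", "af": "Africa",
           "as": "Asia (pan-Asian)", "ind": "India",
           "arab": "Arabic-speaking regions", "sa": "Latin America"}
SEX = {"sex1": "Female", "sex2": "Male"}
AGE = {"age1": "Minors (childhood)", "age2": "Middle-aged adults",
       "age3": "Older adults"}

def parse_name(filename):
    """Extract any culture/sex/age codes present in an image filename."""
    stem = filename.rsplit(".", 1)[0].lower()
    tokens = re.split(r"[_\-]", stem)
    info = {}
    for t in tokens:
        if t in CULTURE:
            info["culture"] = CULTURE[t]
        elif t in SEX:
            info["sex"] = SEX[t]
        elif t in AGE:
            info["age"] = AGE[t]
    return info

print(parse_name("fear_euna_sex2_age3_07.jpg"))
```

Not every filename carries every code (e.g., Craving and Awe have no sex variants), so missing keys simply stay absent from the result.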
Examples:
Affective science, Emotion elicitation, Experimental psychology, Cross-cultural psychology, HCI, Behavioral science, Psychophysiology, Methods/measurement
Additional Notes: Some stimuli are intentionally aversive (e.g., disgust) to ensure strong affect; use content warnings and follow IRB/ethics guidance.
Additional Notes: Images depict synthetic persons; still, abide by local content rules and avoid combining with real-person data to infer identity.
The dataset addresses the need for modern, controllable, and transparent affective stimuli. It tests whether AI-generated images can match or exceed legacy databases in eliciting target emotions, quantifies the benefits of cultural matching, and provides sex/age variants with validation across large, international samples. Core questions include: (1) How well do AI-generated stimuli elicit specific emotions? (2) Do culturally matched images improve targeted affect? (3) How large is a practically meaningful difference between two affect-elicitation stimuli?
📦 Library of AI-Generated Affective Images (LAI-GAI) [v8dkm]
└─ 🗄️ storage: osfstorage/
├─ Authors_Roles_preregistration.docx
├─ Ms_AI_Affective_Images_20062025.pdf
├─ Ms_AI_Affective_Images_30112025.docx
├─ Ms_AI_Affective_Images_Supp20062025.docx
├─ Ms_AI_Affective_Images_Supp_30112025.docx
├─ Random_order_preregistration.Rmd
└─ Results Mock_preregistration.docx
├─ 🔗 component: k8xvh
│ 📦 Images [k8xvh]
│ └─ 🗄️ storage: osfstorage/
│ ├─ 🔗 component: hcv6q
│ │ 📦 All_Generated_Images [hcv6q]
│ │ └─ 🗄️ storage: osfstorage/
│ │ └─ all_Images_generated.zip
│ ├─ 🔗 component: qgney
│ │ 📦 Images_Study2 [qgney]
│ │ └─ 🗄️ storage: osfstorage/
│ │ ├─ S2_scaled.zip
│ │ └─ S2_single.zip
│ ├─ 🔗 component: 6yc73
│ │ 📦 Images_Study3 [6yc73]
│ │ └─ 🗄️ storage: osfstorage/
│ │ ├─ S3_4s.zip
│ │ ├─ S3_Pairs.zip
│ │ ├─ S3_Scaled.zip
│ │ └─ S3_Single.zip
│ ├─ 🔗 component: 4xbta
│ │ 📦 Generated_Images_Not_Used [4xbta]
│ │ └─ 🗄️ storage: osfstorage/
│ │ └─ generated_but_not_used_in_studies.zip
│ ├─ 🔗 component: z6epq
│ │ 📦 Images_Study4 [z6epq]
│ │ └─ 🗄️ storage: osfstorage/
│ │ ├─ S4_scaled.zip
│ │ └─ S4_single.zip
│ ├─ 🔗 component: 9m8ky
│ │ 📦 Images_Study5 [9m8ky]
│ │ └─ 🗄️ storage: osfstorage/
│ │ ├─ S5_scaled.zip
│ │ └─ S5_single.zip
│ └─ 🔗 component: yj986
│ 📦 Images_Study6 [yj986]
│ └─ 🗄️ storage: osfstorage/
│ ├─ S6_scaled.zip
│ └─ S6_single.zip
└─ 🔗 component: 8p572
📦 Data, Analysis Code & Outputs [8p572]
└─ 🗄️ storage: osfstorage/
├─ .RData
├─ .Rhistory
├─ 1README.pdf
├─ AIPS_preprocv3.ipynb
├─ AIPS_preprocv3_all.ipynb
├─ AIPS_preprocv3_s2.ipynb
├─ AIPS_preprocv3_s3.ipynb
├─ AIPS_preprocv3_s4.ipynb
├─ AIPS_preprocv3_s5.ipynb
├─ AIPS_preprocv3_s6.ipynb
├─ comparison_correlations.csv
├─ comparison_diff_stats.csv
├─ comparison_diff_stats_all.csv
├─ comparison_diff_stats_culture.csv
├─ comparison_response_distribution_summary.csv
├─ comparison_summary.csv
├─ image_emotion_means_S1.csv
├─ image_emotion_means_S123.csv
├─ image_emotion_means_S1233b45.csv
├─ image_emotion_means_S1233b45_2.csv
├─ image_emotion_means_S1233b45_emotion_corr_sorted.csv
├─ image_emotion_means_S123456.csv
├─ image_emotion_means_S123456_desc.csv
├─ image_emotion_means_S123456_emotion_corr_sorted.csv
├─ image_emotion_means_S123_desc.csv
├─ image_emotion_means_S123_emotion_corr_sorted.csv
├─ image_emotion_means_S2.csv
├─ image_emotion_means_S3.csv
├─ image_emotion_means_S3b.csv
├─ image_emotion_means_S4.csv
├─ image_emotion_means_S5.csv
├─ image_emotion_means_S6.csv
├─ image_emotion_means_updated.csv
├─ MMA_model_power.Rmd
├─ MMA_model_S1.Rmd
├─ MMA_model_S3.Rmd
├─ MMA_model_S34.Rmd
├─ MMA_model_S5.Rmd
├─ MMA_model_S6.Rmd
├─ osf_tree.ipynb
├─ prompt_dict.csv
├─ radar_by_category_with_image.zip
├─ S1233b45_data_out.csv
├─ S1233b4_data_out.csv
├─ S1233b_data_out.csv
├─ S12345_data_out.csv
├─ S1234_data_out.csv
├─ S123_data_out.csv
├─ S12_data_out.csv
├─ S13_data_with_cult_match_recoded.csv
├─ S1_1_620.txt
├─ S1_AI_position.csv
├─ S1_comparison_distribution.csv
├─ S1_comparison_means_adjusted.csv
├─ S1_data.csv
├─ S1_data_mlm.csv
├─ S1_data_mlm_2.csv
├─ S1_data_out.csv
├─ S1_data_out_recoded.csv
├─ S1_data_with_diff.csv
├─ S1_pilot.txt
├─ S1_pilot_data.csv
├─ S2_1_280.txt
├─ S2_281_300.txt
├─ S2_data.csv
├─ S2_data_1.csv
├─ S2_data_2.csv
├─ S2_data_out.csv
├─ S2_pilot0.txt
├─ S2_pilot01.txt
├─ S2_pilot1.txt
├─ S2_pilot_data.csv
├─ S2_pilot_datax.csv
├─ S34_data_mlm.csv
├─ S34_data_out.csv
├─ S3_AF_data.csv
├─ S3_AF_pilot.txt
├─ S3_AS_data.csv
├─ S3_AS_pilot.txt
├─ S3_comparison_distribution.csv
├─ S3_comparison_means_adjusted.csv
├─ S3_data.csv
├─ S3_data_mlm.csv
├─ S3_data_out.csv
├─ S3_data_out_recoded.csv
├─ S3_data_with_indiv_diff.csv
├─ S3_EUNA_data.csv
├─ S3_EUNA_pilot.txt
├─ S3_SA_data.csv
├─ S3_SA_pilot.txt
├─ S4_data.csv
├─ S4_data1.csv
├─ S4_data2.csv
├─ S4_data3.csv
├─ S4_data4.csv
├─ S4_data_out.csv
├─ S5_data.csv
├─ S5_data1.csv
├─ S5_data3.csv
├─ S5_data4.csv
├─ S5_data_mlm.csv
├─ S5_data_out.csv
├─ S6_data.csv
├─ S6_data1.csv
├─ S6_data2.csv
├─ S6_data_mlm.csv
├─ S6_data_out.csv
├─ study4_pilot.txt
├─ study4_pilot2.txt
├─ study5_pilot.txt
├─ study5_pilot2.txt
├─ study5_pilot3.txt
├─ study6_pilot.txt
├─ study6_pilot2.txt
├─ SupplementaryData_30112025.xlsx
├─ Table 2.csv
├─ Table 2_1.csv
├─ Table 3.xlsx
├─ Table_2_age.csv
├─ Table_2_culture_matched.csv
├─ Table_2_gender.csv
├─ Table_age_groups.csv
├─ Table_overall_stats.csv
├─ Table_sex1_vs_sex2.csv
├─ target_emotions.csv
├─ target_emotions0.csv
├─ target_emotions2.csv
├─ targeted_comparison_stats.csv
├─ targeted_comparison_stats_culture.csv
├─ targeted_comparison_stats_S1.csv
├─ targeted_comparison_stats_S3.csv
├─ targeted_comparison_stats_S33b.csv
├─ targeted_comparison_stats_S34.csv
├─ targeted_comparison_stats_S3b.csv
├─ targeted_comparison_stats_S4.csv
├─ targeted_emotion_with_max.csv
└─ targeted_summary_by_emotion.csv
{
"participantID": "P003",
"set_number": 7,
"consent": "YES",
"page_number": 19,
"Amusement": 2,
"Awe": 5,
"Anger": 1,
"Attachment_love": 4,
"Craving": 1,
"Disgust": 1,
"Excitement": 2,
"Fear": 1,
"Joy": 6,
"Neutral": 3,
"Nurturant_love": 5,
"Sadness": 1,
"Positive": 7,
"Negative": 1,
"Aroused": 1,
"Calm": 5,
"Approach": 4,
"Avoid": 1,
"Image_name": "posmidsoc_25.jpg",
"Target_Name": null,
"Comp_Target": null,
"Comp_Pos": null,
"Comp_Neg": null,
"Comp_Aro": null,
"Comp_Calm": null,
"CompImage_name": null,
"q1": 2,
"q2": 4,
"q3": 2,
"q4": 2,
"q5": 1,
"age": 39,
"gender": "Female",
"country": "United Kingdom",
"device": "laptop",
"useData": "Yes",
"prolific_id": "57e508cec3e5930001447391",
"Duration": 2856552.019,
"date_of_completion": "2025-03-17T19:26:21.449000",
"target_emo_ind_name": "Nurturant love",
"target_emo_ind_score": 5,
"target_emo_comp_name": null,
"is_AI": 0,
"target_val": 7,
"target_aro": 1,
"target_mot": 4,
"is_positive": 1,
"is_arousing": 1,
"is_approach": 1,
"rating_cat": 0,
"fail_att_check": 0,
"fail_att_check_2": 0,
"is_careless": 0,
"is_careless_2": 0,
"culture": "Europe and North America",
"image_culture": "Other",
"culture_matched": 0
}
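The quality-control flags in this record can be applied as a simple exclusion filter. A minimal sketch; this is an illustrative reading of the flags, not necessarily the exact exclusion rule used in the published analyses:

```python
def keep_trial(rec):
    """Keep a trial only if the participant consented to data use,
    passed both attention checks, and was not flagged as careless."""
    return (rec.get("useData") == "Yes"
            and rec.get("fail_att_check", 0) == 0
            and rec.get("fail_att_check_2", 0) == 0
            and rec.get("is_careless", 0) == 0
            and rec.get("is_careless_2", 0) == 0)

trial = {"useData": "Yes", "fail_att_check": 0, "fail_att_check_2": 0,
         "is_careless": 0, "is_careless_2": 0}
print(keep_trial(trial))  # True
```

See the `AIPS_preprocv3*.ipynb` notebooks for the actual preprocessing applied per study.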
Identifiers & session
- participantID — anonymized respondent code (no PII).
- set_number — stimulus set/block assigned to the participant (for counterbalancing).
- consent — explicit consent flag (YES/NO).
- page_number — page or trial index within the survey/task.
- date_of_completion — ISO 8601 timestamp (UTC) for the submission.
- Duration — time to complete, in milliseconds.
- device — participant's device type (e.g., laptop, mobile, desktop).
- useData — data-use permission flag (e.g., "Yes" to include in analyses/data release).
- prolific_id — platform pseudonymous ID.

Stimulus info
- Image_name — stimulus filename shown on the trial.
- Target_Name — intended discrete emotion category for the target image (may be blank if not applicable on that trial).
- is_AI — provenance of the specific image shown (1 = AI-generated, 0 = non-AI/legacy, if present in comparisons).
- culture — participant's cultural group label (derived from country or self-report; Studies 3 & 4 only).
- image_culture — cultural context label for the image (e.g., "Other", "Europe and North America"; Studies 3 & 4 only).
- culture_matched — whether participant culture matches image culture (1 = matched, 0 = unmatched).

Ratings (per trial; 1–7 Likert)
- Amusement, Awe, Anger, Attachment_love, Craving, Disgust, Excitement, Fear, Joy, Neutral, Nurturant_love, Sadness
- Positive, Negative, Aroused, Calm, Approach, Avoid

Target anchors & polarity flags (design/validation)
- target_val, target_aro, target_mot — copies of the target dimensional scale ratings.
- is_positive, is_arousing, is_approach — binary flags for the intended polarity of the stimulus on each dimension (1 = yes, 0 = no).

Comparison task (only on comparison trials)
- Comp_Target — intended category of the comparison image.
- Comp_Pos, Comp_Neg, Comp_Aro, Comp_Calm — target dimensional anchors for the comparison image.
- CompImage_name — filename of the comparison image presented alongside the target.
- Target_Name / target_emo_ind_name / target_emo_comp_name — name of the targeted emotion for a given image (fields may be blank if that task was not shown on the trial).
- target_emo_ind_score — strength/score for the chosen target category on that trial (often 1–7).

Quality control & meta
- q1–q5 — attitudes toward emotional AI (1–7 Likert). Items: "I believe emotional AI can effectively understand and respond to human emotions."; "I feel comfortable interacting with systems that use emotional AI to interpret my feelings."; "Emotional AI has the potential to improve human well-being by providing emotional support."; "I trust emotional AI to accurately identify and react to my emotional state."; "I am excited about the possibilities of integrating emotional AI into my daily life."
- fail_att_check, fail_att_check_2 — attention check outcomes (1 = failed, 0 = passed).
- is_careless, is_careless_2 — careless responding indicators (algorithmic or self-report; 1 = flagged).
- rating_cat — rating category: 0 = single image, 1 = image comparison, 2 = baseline measure, 3 = attention check (project-specific coding).

Demographics
- age — participant age in years.
- gender — self-reported gender (string).
- country — country name.
{
"image_name": "amusement_new_10.jpg",
"amusement_mean": 5.214285714,
"awe_mean": 3.657142857,
"anger_mean": 1.217391304,
"attachment_love_mean": 4.142857143,
"craving_mean": 2.428571429,
"disgust_mean": 1.2,
"excitement_mean": 3.871428571,
"fear_mean": 1.285714286,
"joy_mean": 5.042253521,
"neutral_mean": 2.971428571,
"nurturant_love_mean": 4.171428571,
"sadness_mean": 1.142857143,
"positive_mean": 5.85915493,
"negative_mean": 1.314285714,
"aroused_mean": 2.714285714,
"calm_mean": 4.2,
"approach_mean": 5.214285714,
"avoid_mean": 1.514285714,
"target_emo_ind_score_mean": 5.214285714,
"amusement_std": 1.776602009,
"awe_std": 2.166180866,
"anger_std": 0.638692719,
"attachment_love_std": 2.094061405,
"craving_std": 2.06117617,
"disgust_std": 0.67243878,
"excitement_std": 1.962853798,
"fear_std": 0.853685228,
"joy_std": 1.710777766,
"neutral_std": 1.970591235,
"nurturant_love_std": 2.035706113,
"sadness_std": 0.490066966,
"positive_std": 1.245515093,
"negative_std": 0.808341604,
"aroused_std": 1.866119411,
"calm_std": 1.938174847,
"approach_std": 1.824891159,
"avoid_std": 1.138812719,
"target_emo_ind_score_std": 1.776602009,
"n": 71,
"target_emotion": "Amusement",
"is_positive": true,
"is_arousing": true,
"is_approach": true,
"target_val": 5.85915493,
"target_aro": 2.714285714,
"target_mot": 5.214285714,
"is_efficient": false,
"is_max": true,
"is_ai": true,
"used_in_study": 2,
"culture": "",
"prompt_gpt": "A photograph of a family pet doing something funny, like a dog wearing sunglasses and a hat. The background shows a cozy living room with a couch and family photos on the wall. Bright, warm lighting, playful details, hd quality, natural look"
}
- *_mean, *_std — per-image mean/SD on 18 scales (12 discrete emotions + 6 dimensions).
- n — number of human raters contributing to the per-image aggregates.
- target_emotion — intended discrete category for the image.
- is_positive, is_arousing, is_approach — binary flags for targeted valence/arousal/motivation.
- target_val, target_aro, target_mot — copies of the target dimensional scale ratings.
- is_efficient, is_max — selection flags (e.g., meets inclusion thresholds / top performer).
- is_ai — image provenance flag (AI-generated).
- used_in_study — study index/code if used in validation.
- culture — cultural context label.
- prompt_gpt — human-readable prompt/description used for generation.

Additional Notes: These are stimulus labels (what the image depicts), not participant attributes and not PII.
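Aggregate records like this support simple stimulus selection, e.g., picking the strongest images per target emotion. A sketch over toy records; the ranking criterion (mean rating on the intended scale) is one reasonable choice, not necessarily the dataset's success index:

```python
def top_images(aggregates, emotion, k=3):
    """Rank aggregate records for one target emotion by mean rating on the
    intended scale (target_emo_ind_score_mean); return the top k filenames."""
    rows = [r for r in aggregates if r["target_emotion"] == emotion]
    rows.sort(key=lambda r: r["target_emo_ind_score_mean"], reverse=True)
    return [r["image_name"] for r in rows[:k]]

# Toy aggregate records (filenames hypothetical).
aggs = [
    {"image_name": "amu_a.jpg", "target_emotion": "Amusement",
     "target_emo_ind_score_mean": 5.2},
    {"image_name": "amu_b.jpg", "target_emotion": "Amusement",
     "target_emo_ind_score_mean": 6.1},
    {"image_name": "fear_a.jpg", "target_emotion": "Fear",
     "target_emo_ind_score_mean": 5.9},
]
print(top_images(aggs, "Amusement", k=1))  # ['amu_b.jpg']
```

For principled selection, also consult the `is_efficient`/`is_max` flags and the success-index computation in the project notebooks.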
| Field Name (latent) | Description |
|---|---|
| Attractiveness / body type | Perceived attractiveness, weight, or body shape may be inferred from depictions. |
| Socio-economic status | Implied via housing, clothing, objects (e.g., luxury goods). |
| Religion or tradition | Implied via clothing, symbols, or settings. |
| Disability status | Presence/absence of assistive devices or visible impairments (rare). |
| Language/nationality | Implied by signage, scenery, or stereotypical cues. |
Additional Notes: These attributes are not annotated and should not be treated as ground truth.
Source(s)
Human Attribute Method (for stimulus labels)
Human Attribute: Culture
changes.txt.
Human Attribute: Age (depictions of minors)
Human Attribute: Sex presentation
Method(s) Used
Methodology Detail(s)
Collection Type — Artificial Generation
Collection Type — Crowdsourced Ratings
Collection Type — Prior Datasets (concept seeds)
Description of Methods employed - Prompt drafting → image generation → human curation/edits → expert cultural review → re-edits (if needed) → rating studies → QC/exclusions → per-image aggregates.
Tools or libraries - Image platforms (e.g., Midjourney/Freepik/others as noted); survey platform; statistical scripts/notebooks for aggregation.
Additional Notes - Some images underwent artifact corrections (hands/faces/composition) while preserving the target affect.
Model drift & moving targets. Generator capabilities, interfaces, and policies change rapidly; results validated today may shift as models evolve. This can reduce the longevity of benchmarks and may require periodic re-validation or a pivot toward validating pipelines rather than static sets.
Content filters constrain negative stimuli. Platform safety filters limited the creation of disturbing content (e.g., injuries, violence, controversial symbols), making it harder to elicit strong sadness/anger/negative affect compared to legacy datasets—though newer, less restrictive models improved feasibility for moderate negatives (e.g., disgust), which still warrant further validation.
Generation artifacts & realism gaps. Common issues included text errors in images, anatomical inconsistencies (e.g., hands), low-resolution outputs, and difficulty rendering natural-looking faces in larger groups; additional prompt iterations and edits were often needed.
Attractiveness bias. Models tended to render uniformly attractive faces, making it challenging to generate less-attractive faces without introducing other artifacts—an ecological validity concern for some studies.
Anger is hard with still images. Anger stimuli often co-activated sadness, fear, or disgust; static images may be suboptimal for reliably isolating anger compared to dynamic media.
Cultural coverage is necessarily partial. Six broad cultural clusters (e.g., “Asian”) cannot capture within-region diversity; some cues risk stereotyping or over-emphasizing traditional elements. Continuous, local expert review was essential; within-culture disagreements were documented.
Language scope. Validation studies were conducted in English, which can attenuate responses in non-native readers; future multi-language replications are encouraged.
Mitigations recommended.
Document prompts and model/provider versions (an “immortal” textual layer), keep a light change log (e.g., changes.txt), pre-register selection/analysis where feasible, include cultural expert review rounds, and re-validate subsets when swapping generators or policies change.
Transformation(s) Applied
To avoid repetition and keep everything reproducible, all field mappings and transformations are documented in code. Please see the OSF “Analysis Code & Outputs” component.
Where to find field definitions:
Human-readable descriptions of labels and scales are provided in the paper and its supplementary materials. This Data Card summarizes them; the canonical/authoritative definitions and mappings should be taken from the manuscript and supplement.
All transformations (anomaly detection, cleaning, typing, joins, aggregations, and success-index computation) are fully documented and implemented in the project’s Python notebooks and scripts; treat those notebooks as the single source of truth.
Nurturant love), culture codes (euna, af, arab, as, ind, sa), and boolean fields.
Method(s) Used: Lookup tables; regex normalizers; validation against controlled vocab.
image_name. Ensured one row per image in aggregate tables.
changes.txt at the project root records any post-release replacements/removals (date, OSF version, reason, replacement).
To avoid repetition: details of scales, procedures, and QC are explained in the paper + supplement. [https://osf.io/v8dkm/overview]
image_id ↔ metadata joins; one aggregate row per image.
To avoid repetition: details of validation procedures are in the paper + supplement. [https://osf.io/v8dkm/overview]
| Sampling Type | Value |
|---|---|
| Upstream Source | Complete LAI-GAI image library (847 images) |
| Total data sampled | Not applicable (full release) |
| Sampling rate | Not applicable |
| Notes | Any sampling occurs downstream (by users) for specific studies or train/test splits. |
If you create subsets for experiments or ML splits, we recommend:
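For example, a split stratified by target emotion keeps every emotion represented in both halves. A pure-Python sketch under assumptions (list of hypothetical (filename, emotion) pairs; fixed seed for reproducibility):

```python
import random

def stratified_split(images, test_frac=0.2, seed=42):
    """Split (image_name, target_emotion) pairs into train/test lists,
    stratified so each emotion contributes roughly test_frac of its images."""
    rng = random.Random(seed)
    by_emotion = {}
    for name, emo in images:
        by_emotion.setdefault(emo, []).append(name)
    train, test = [], []
    for emo, names in sorted(by_emotion.items()):
        names = names[:]          # copy before shuffling
        rng.shuffle(names)
        k = max(1, round(len(names) * test_frac))
        test += names[:k]
        train += names[k:]
    return train, test

imgs = ([(f"joy_{i}.jpg", "Joy") for i in range(10)]
        + [(f"fear_{i}.jpg", "Fear") for i in range(10)])
train, test = stratified_split(imgs)
print(len(train), len(test))  # 16 4
```

Publishing the exact split manifests (filename lists) alongside results makes downstream comparisons reproducible.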
Safety Level
Known Safe Dataset(s) or Data Type(s)
Best Practices
image_name).
Known Unsafe Dataset(s) or Data Type(s)
Limitation(s) and Recommendation(s)
Safety Level
Acceptable Sampling Method(s)
Best Practice(s)
Risk(s) and Mitigation(s)
Limitation(s) and Recommendation(s)
Any known relationships (e.g., between sex_variant, age_variant, and culture, or co-varying visual cues) are documented in the manuscript. Please refer to the paper for details. [https://osf.io/v8dkm/files/8t34y]
Additional Notes:
| Scale | Split-half r |
|-------------------|:------------:|
| Amusement | 0.9377 |
| Anger | 0.9685 |
| Attachment love | 0.9284 |
| Awe | 0.8859 |
| Craving | 0.9412 |
| Disgust | 0.9775 |
| Excitement | 0.9388 |
| Fear | 0.9696 |
| Joy | 0.9733 |
| Neutral | 0.8649 |
| Nurturant love | 0.9235 |
| Sadness | 0.9769 |
| Negative (valence)| 0.9859 |
| Positive (valence)| 0.9829 |
| Aroused | 0.6922 |
| Calm | 0.9593 |
| Approach | 0.9494 |
| Avoid | 0.9672 |
Notes: Values are split-half human reliabilities computed from our validation ratings; the table shows the per-scale ceilings against which model–human correlations should be interpreted.
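Split-half reliability of this kind is typically computed by halving the raters for each image, correlating the per-image half-means across images, and applying the Spearman-Brown correction. A sketch under that assumption (the exact split procedure used for the table may differ; see the paper):

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation between two equal-length lists."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x)
           * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def split_half(ratings_by_image):
    """ratings_by_image: {image: [rating, ...]}. Split each image's raters
    into odd/even halves, correlate half-means across images, then apply
    the Spearman-Brown correction: r_sb = 2r / (1 + r)."""
    h1, h2 = [], []
    for img in sorted(ratings_by_image):
        vals = ratings_by_image[img]
        h1.append(mean(vals[0::2]))
        h2.append(mean(vals[1::2]))
    r = pearson(h1, h2)
    return 2 * r / (1 + r)

rel = split_half({"a": [1, 1, 1, 1], "b": [7, 7, 7, 7], "c": [4, 4, 4, 4]})
print(rel)  # 1.0 (identical halves)
```

In practice the halves are usually random rater splits averaged over many repetitions rather than a single odd/even split.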
Dataset Use(s)
Notable Feature(s)
Usage Guideline(s)
No official benchmarks have been released yet.
To aid comparability, users can report the following (suggested):
Model Name: (fill in)
Typical model families: Random Forest regression; CNN finetuning (e.g., ResNet/EfficientNet); transformer/hybrid (e.g., ViT or CLIP embeddings + shallow regressor).
Hyperparameter tuning details: (fill in; e.g., search strategy, search space, trials, selection metric, seed)
Evaluation Results (suggested metrics)
Caption: Report metrics overall. Include OSF DOI + version and the exact image list.
Additional Notes: Please share code/splits to enable replication.
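For the suggested metrics, per-scale Pearson r and RMSE between model predictions and human per-image means take only a few lines. A sketch with toy values (not results from the dataset):

```python
from statistics import mean

def rmse(pred, truth):
    """Root-mean-square error between predictions and human means."""
    return mean((p - t) ** 2 for p, t in zip(pred, truth)) ** 0.5

def pearson(x, y):
    """Pearson correlation between two equal-length lists."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x)
           * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Toy per-image human means vs. model predictions on one scale (1-7).
human = [5.2, 1.1, 3.9, 6.0]
model = [4.8, 1.5, 4.2, 5.7]
print(round(pearson(model, human), 3), round(rmse(model, human), 3))
```

Reported correlations should be interpreted against the per-scale split-half ceilings tabulated above.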
No official baselines yet; suggested protocol below.
Method used (suggested):
Process (minimum to report):
Results:
No expectations set yet. Recommended baseline to establish:
Known Caveats (general):
Additional Notes: When you publish results, please include: OSF DOI + version, exact split manifests, and the metric script so others can reproduce your scores.
Access Type
Documentation Link(s)
Dataset Website URL
Prerequisite(s)
Policy Link(s)
Direct download URL / Other repository URL
Not applicable — open access.
Notes: If any file is restricted in the future (e.g., raw participant-level data), the OSF component will state access conditions and contact details. File integrity is provided via OSF versioning.
Indefinite (persistent DOIs on OSF).
Complies with OSF hosting and versioning guidelines; follows institutional ethics approvals and publication policies.
No scheduled deletion. Items remain available via OSF DOIs.
Takedown is on demand (see Deletion Event Summary).
If a takedown/correction is needed:
changes.txt (e.g., [2025-11-13] v1.0.2 — Removed LAI-GAI_fake_image.jpg — cultural concern).
changes.txt (date, OSF version, item removed/replaced, brief reason).
Because content is synthetic and non-identifying, full "right-to-be-forgotten" workflows generally do not apply; cultural or ethical concerns are handled via deprecation/replacement.
changes.txt at the project root (date, OSF version, added/removed/replaced items, brief reason).
Open, versioned, DOI-backed dataset on OSF; no PII; deprecation-first approach for corrections; transparent changes for reproducibility.
If you later host mirrors (e.g., Zenodo/GitHub), state mirroring cadence and point-of-truth as the OSF DOIs.
First Version
Note(s) and Caveat(s)
Cadence
Last and Next Update(s)
Changes on Update(s)
changes.txt.
Additional Notes
Guidelines & Steps: Cite both the dataset (OSF components) and the associated manuscript.
BibTeX (paper: Using AI to Generate Affective Images: Methodology and Initial Library):
@article{Behnke_LAIGAI_2025,
title = {Using AI to Generate Affective Images: Methodology and Initial Library (LAI-GAI)},
author = {Behnke, Maciej and Kłoskowski, Maciej and Klichowski, Michał and {others}},
year = {2025},
note = {Manuscript / preprint},
url = {<add preprint/DOI when available>}
}
BibTeX (dataset — Images component):
@misc{LAIGAI_images_2025,
title = {Library of AI-Generated Affective Images (LAI-GAI) — Images Component},
author = {Behnke, Maciej and collaborators},
year = {2025},
doi = {10.17605/OSF.IO/K8XVH},
url = {https://doi.org/10.17605/OSF.IO/K8XVH},
note = {Version X.Y}
}
Amusement — defying expectations, often eliciting laughter.
Anger — antagonism toward someone or something perceived as deliberately harmful or unfair.
Attachment Love — desire for closeness, interdependence, and intimacy.
Awe — response to something vast and beyond ordinary frames of reference; involves wonder/admiration and a sense of smallness.
Craving — strong desire, associated here with the sensory pleasure of consuming food.
Disgust — strong aversion/repulsion toward something perceived as offensive, contaminating, or unpleasant.
Excitement — high-intensity response to novelty, challenge, or excellence, often with some degree of risk.
Fear — response to a perceived threat or danger.
Joy — feeling brought about by good fortune and well-being.
Neutral — absence of strong positive or negative emotion; calm and balanced, low arousal.
Nurturant Love — caregiving and protection, often toward offspring or vulnerable individuals.
Sadness — feelings of disappointment, grief, or hopelessness.
Positive (Valence) — feelings of pleasure, satisfaction, or well-being.
Negative (Valence) — feelings of discomfort, displeasure, or distress.
Calm (Arousal) — tranquility and composure; low emotional activation.
Aroused (Arousal) — heightened activation (e.g., increased energy or alertness).
Motivated to Approach (Motivation) — drive to pursue a goal/object/situation (curiosity, attraction, reward).
Motivated to Avoid (Motivation) — drive to move away from an undesired/threatening object/situation (fear, discomfort, aversion).
Note: These definitions were provided to annotators during the study and form the basis for image ratings.