Create your own GenAI Image Generator App like MidJourney or DALLE-2


A simple React app demonstrating generative AI text-to-image capability using third-party APIs

In the fast-paced world of web development, staying ahead often involves incorporating cutting-edge technologies into our projects. One such innovation that has been gaining traction is the integration of Artificial Intelligence (AI) into web applications. In this article, we’ll explore how I leveraged third-party APIs built by Segmind to seamlessly integrate AI-generated images into my React app, pushing the boundaries of creativity and user engagement.

What’s Generative AI?

Generative AI refers to a class of artificial intelligence systems designed to generate new content, such as images, text, or even music, often in a way that mimics human creativity. These systems, usually based on neural networks, learn patterns from data and generate novel outputs without explicit programming. Generative AI has applications in various fields, including art, content creation, and data synthesis, contributing to innovative solutions and creative outputs.

Image Generation in AI:

Image generation in AI involves using artificial intelligence models to create new, realistic images. This process often leverages generative models, which are trained on large datasets to learn patterns and generate novel content.

Stable Diffusion is a deep learning, text-to-image model released in 2022, based on diffusion techniques. It uses deep neural networks trained on large image datasets: starting from random noise, the model iteratively denoises until a high-quality image matching the text prompt emerges.

DALL·E, developed by OpenAI, is a variant of the GPT (Generative Pre-trained Transformer) architecture designed for image generation. It can generate images from textual descriptions and has gained attention for its ability to create unique and imaginative visuals.

Vision for our App:

When conceptualising my React app, I envisioned an immersive user experience that went beyond traditional static content. I wanted to incorporate dynamic, AI-generated images that would not only captivate users but also add a touch of uniqueness to each interaction. To achieve this, I turned to third-party APIs specialising in AI image generation.

App Snapshot

Top Features of the App:

  1. Generate an image from a prompt with various parameters.
  2. A Surprise Me option to get a prompt idea instantly, if you’re falling short ;).
  3. A few previously generated images to pick from.
  4. A recent history of generated images that can be picked again.
  5. Download the generated images.
  6. Responsive web app.

Technologies used:

  1. React with the latest hooks, used to call the SegMind text2img API.
  2. Firebase for deployment.
  3. localStorage for storing the recent history.
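As a sketch of how the recent history could be kept in localStorage (the helper names, the storage key, and the 10-item cap are my assumptions, not the repo's exact code; the storage object is injectable so the same logic works with `window.localStorage` in the browser or a stub in tests):

```javascript
// Hypothetical helpers for the recent-history feature: newest first, capped at 10 entries.
const HISTORY_KEY = "recentImages";
const MAX_HISTORY = 10;

// Read the saved history; an empty array if nothing is stored yet.
function loadHistory(storage) {
  const raw = storage.getItem(HISTORY_KEY);
  return raw ? JSON.parse(raw) : [];
}

// Prepend the new entry, trim to the cap, and persist.
function saveToHistory(storage, entry) {
  const history = [entry, ...loadHistory(storage)].slice(0, MAX_HISTORY);
  storage.setItem(HISTORY_KEY, JSON.stringify(history));
  return history;
}
```

In the app this would be called with `window.localStorage` after each successful generation, and `loadHistory` would feed the RecentResults component.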

Choosing the Right API:

The first step was selecting a suitable API that aligned with my vision. After careful research, I settled on the SegMind text2img API, a versatile platform renowned for its powerful image-generation capabilities. (Create your account and get an API token from their website.) It offered a range of features, including style transfer, deep dreaming, and more, making it the perfect choice for injecting creativity into my app.

Use my referral link to sign up and get extra credits for API [–4e87-af7a-8e4f3a774689].
You can explore their API Postman collection. We are going to use API sdxl1.0-txt2img in our app.
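Before wiring it into React, it helps to see the shape of the request the sdxl1.0-txt2img endpoint expects. The sketch below builds that payload as a plain object; the endpoint URL and the `SEGMIND_API_KEY` environment variable name are my assumptions based on SegMind's docs, and the defaults mirror the app's form values:

```javascript
// Assumed SegMind SDXL 1.0 text-to-image endpoint.
const SEGMIND_URL = "https://api.segmind.com/v1/sdxl1.0-txt2img";

// Build an axios-style request config for a text-to-image call.
// Field names mirror the app's form state; defaults are illustrative.
function buildTxt2ImgRequest(
  prompt,
  { seed = 17123564234, scheduler = "DDIM", steps = "20" } = {}
) {
  return {
    method: "POST",
    url: SEGMIND_URL,
    headers: {
      "x-api-key": process.env.SEGMIND_API_KEY, // assumed env var name
      "Content-Type": "application/json",
    },
    data: {
      prompt,
      seed,
      scheduler,
      num_inference_steps: steps,
      negative_prompt: "NONE",
      samples: "1",
      guidance_scale: "7.5",
    },
  };
}
```

Separating payload construction from the network call like this also makes the request shape easy to unit-test without hitting the API.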

With the API chosen, I seamlessly integrated it into the React app. Leveraging React’s component-based architecture, I created a dedicated component responsible for handling API requests and rendering the AI-generated images. Thanks to the simplicity and flexibility of React, this process was smooth and well-organized.

By leveraging third-party APIs, developers can effortlessly integrate AI capabilities into their projects, pushing the boundaries of what’s possible in web development.

React Code

You can find all the code behind the app on my GitHub, with the proper file structure, required components, static content, and other config files.


import React, { useState, useEffect } from "react";
import ImageBox from "../components/ImageBox";
import NavBar from "../components/NavBar";
import { fetchImages } from "../services/model-api";
import { getRandom, loaderMessages, promptIdeas } from "../utilities/utils";
import ChooseResults from "../components/ChooseResults";
import RecentResults from "../components/RecentResults";

const Home = () => {
  const [showLoader, setShowLoader] = useState(false);
  const [imageResult, setImageResult] = useState(null);
  const [promptQuery, setPromptQuery] = useState("");
  const [radioValue, setRadioValue] = useState("20");
  const [dropDownValue, setDropDownValue] = useState("DDIM");
  const [seedValue, setSeedValue] = useState(17123564234);
  const [loaderMessage, setLoaderMessage] = useState(loaderMessages[0]);

  useEffect(() => {
    // Rotate the loader message every 3 seconds
    const loaderInterval = setInterval(() => {
      setLoaderMessage(getRandom(loaderMessages));
    }, 3000);
    // Clear the interval on unmount to avoid a memory leak
    return () => clearInterval(loaderInterval);
  }, [loaderMessage]);

  const handleSearch = (event) => {
    setPromptQuery(event.target.value);
  };

  const handleChange = (event) => {
    if (event.target.name === "radio") {
      setRadioValue(event.target.value);
    } else if (event.target.name === "dropdown") {
      setDropDownValue(event.target.value);
    } else {
      setSeedValue(event.target.value);
    }
  };

  const handleGenerate = (e) => {
    e.preventDefault();
    fetchData();
  };

  const fetchData = async () => {
    try {
      setShowLoader(true);
      const imageBlob = await fetchImages(
        promptQuery,
        seedValue,
        dropDownValue,
        radioValue
      );
      const fileReaderInstance = new FileReader();
      // This event will fire when the image Blob is fully loaded and ready to be displayed
      fileReaderInstance.onload = () => {
        const base64data = fileReaderInstance.result;
        setImageResult(base64data);
        setShowLoader(false);
      };
      // readAsDataURL() reads the image Blob and converts it into a data URL
      fileReaderInstance.readAsDataURL(imageBlob);
    } catch (error) {
      // Handle error
      console.error("Error fetching images from API:", error);
      setShowLoader(false);
    }
  };

  const handleSurpriseMe = (e) => {
    e.preventDefault();
    const surprisePrompt = getRandom(promptIdeas);
    setPromptQuery(surprisePrompt);
  };

  const handleAvailOptions = (option) => {
    setPromptQuery(option);
  };

  return (
    <div>
      <NavBar />
      <div className="surpriseBox">
        <label>Bring your imaginations into reality!</label>
        <input
          type="text"
          value={promptQuery}
          onChange={handleSearch}
          placeholder="A plush toy robot sitting against a yellow wall"
        />
        <button onClick={handleSurpriseMe}>Surprise Me</button>
      </div>
      <div className="formBox">
        <div className="formValue">
          <label>Scheduler &nbsp;</label>
          <select name="dropdown" value={dropDownValue} onChange={handleChange}>
            <option value="DDIM">DDIM</option>
            <option value="Euler">Euler</option>
            <option value="LMS">LMS</option>
            <option value="Heun">Heun</option>
            <option value="DDPM">DDPM</option>
          </select>
        </div>
        <div className="formValue">
          <label>Steps &nbsp;</label>
          <input
            type="radio"
            name="radio"
            value="20"
            checked={radioValue === "20"}
            onChange={handleChange}
          />
          <span>20</span>
          <input
            type="radio"
            name="radio"
            value="40"
            checked={radioValue === "40"}
            onChange={handleChange}
          />
          <span>40</span>
        </div>
        <div className="formValue">
          <label>Seed &nbsp;</label>
          <input name="seed" value={seedValue} onChange={handleChange} />
        </div>
        <button onClick={handleGenerate}>Generate the Image</button>
      </div>

      {showLoader ? (
        <div style={{ margin: 40 }}>Blazing fast results... ⚡️⚡️⚡️</div>
      ) : (
        <ImageBox promptQuery={promptQuery} imageResult={imageResult} />
      )}
      <ChooseResults onSelect={handleAvailOptions} />
      <RecentResults onSelect={handleAvailOptions} />
      <div className="slideShowMessage">{loaderMessage}</div>
      <div className="footer">Powered by SegMind</div>
    </div>
  );
};

export default Home;
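The component imports a few small helpers from `utilities/utils`. A minimal sketch of what they might look like follows; the exact loader messages and prompt ideas in the repo differ, so these strings are placeholders:

```javascript
// utilities/utils.js - small helpers used by the Home component.
// The message and prompt strings here are illustrative placeholders.
const loaderMessages = [
  "Warming up the diffusion model...",
  "Painting your pixels...",
  "Almost there...",
];

const promptIdeas = [
  "A plush toy robot sitting against a yellow wall",
  "An astronaut riding a horse in photorealistic style",
  "A watercolor painting of a fox in a snowy forest",
];

// Pick a random element from an array.
function getRandom(list) {
  return list[Math.floor(Math.random() * list.length)];
}
```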


import axios from "axios";
import { secret } from "../secret";

const { apiKey } = secret;

export const fetchImages = async (
  promptCall,
  seedValue,
  dropDownValue,
  radioValue
) => {
  const options = {
    method: "POST",
    // SegMind SDXL 1.0 text-to-image endpoint
    url: "https://api.segmind.com/v1/sdxl1.0-txt2img",
    headers: {
      "x-api-key": `${apiKey}`,
      "Content-Type": "application/json",
    },
    responseType: "arraybuffer",
    data: {
      prompt: promptCall,
      seed: seedValue,
      scheduler: dropDownValue,
      num_inference_steps: radioValue,
      negative_prompt: "NONE",
      samples: "1",
      guidance_scale: "7.5",
      strength: "1",
      shape: 512,
    },
  };

  try {
    const response = await axios.request(options);
    // Convert the raw ArrayBuffer response into an image Blob with a MIME type
    const imageBlob = new Blob([response.data], { type: "image/jpeg" });
    return imageBlob;
  } catch (error) {
    console.error("Error while fetching Gen AI model API", error);
    throw error;
  }
};
Create your SegMind account, get an API token, and replace the API key used by the `secret` module above.
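For reference, the `secret` module imported by the service could be as simple as the following (the shape is my assumption based on the `const { apiKey } = secret;` destructuring; keep this file out of version control so the key never lands on GitHub):

```javascript
// secret.js - add this file to .gitignore so the key is never committed.
// The property name matches the destructuring in services/model-api.js.
export const secret = {
  apiKey: "YOUR_SEGMIND_API_KEY",
};
```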

It is ready to use for all of you, you can fork the repo, use your SegMind or any other third-party APIs, and make it yours!

Run in local:

Go to your project directory in the VSCode terminal/Console, you can run:

npm install
npm start

Runs the app in the development mode. Open http://localhost:3000 to view it in your browser. The page will reload when you make changes. You may also see any lint errors in the console.

Deploy on Firebase:

We can follow this quick article for deploying our React image generation app on Firebase Hosting, Google's hosting service with a free tier for developers.

Run in Production:

Try our app here

In case the API token expires, replace it with your own in the code, redeploy on Firebase, and run the app again.

Share it with your friends and family, and show your swanky product to your colleagues.

Further Scope for Developers:

I would encourage new developers to fork the GitHub repository and add a few of the features below:

  1. Create an image slideshow from the recent history of generated images.
  2. Reverse the order of the recent-history images.
  3. Build REST APIs to post recent images to your own server and database (Render, Vercel, MongoDB Atlas free tiers), then fetch them from those APIs and show them in the app.
  4. Add i18n localization to the project using react-i18next.
  5. Write unit test cases using @testing-library/react.

As technology continues to evolve, the fusion of AI and web development offers exciting opportunities for developers to create truly unique and captivating user experiences. Embrace the power of AI in your React projects, and watch as your applications come to life with dynamic, intelligent content.

That’s all folks for this article!

Hope it helps you create your own MidJourney- or DALL·E-2-like applications, and that it is easy and fun!😃

Write your suggestions and feedback in the comment section below.

If you really learned something new with this article or it really made your dev work faster than before, like it, save it and share it with your colleagues.

Also, I have recently started creating tech content on my YouTube channel, just have a look at it TechMonkKapil, and subscribe if you like the content!🤝

Also, we are building a tech community on Telegram(Tech Monk Army) and Discord(Tech Monk Army). Join if you are looking to interact with like-minded folks.

I have been writing tech blogs for quite some time now, mostly published through my Medium account, and this is my first tech article/tutorial here. Hope you guys will shower love on it!🤩

Let’s be connected on LinkedIn and Twitter for more such engaging Tech Articles and Tutorials.🤝
