
“Be an Elephant” has incredible cinematographic elements and great storytelling. Tell us about your background and how you have worked in the media landscape so far.
My name is Gurkan Atakan, and I live in Turkey. After graduating from the Digital Film Academy in 2007, I started making short films and won best experimental film awards in international competitions. After that, I shot several commercials and music videos, and I organized and led workshops on short filmmaking. After moving from Istanbul to a small village on the Mediterranean, I withdrew somewhat from cinema and became more of a movie watcher.
Over the past three years, I have been working intensively on creating visuals with artificial intelligence. I can’t say that I actively produced films until the Runway Gen:48 competition. I had made a few small attempts before, but I didn’t take it very seriously; I was more of an observer of the development of artificial intelligence.
I saw the Runway Gen:48 competition as a challenge for myself. We had 48 hours to make the movie, and that included the script-writing phase; the elements that had to appear in the film were published at the start of the 48-hour countdown. I could have used these elements to write a script with chatbots like ChatGPT or Gemini, but I wanted it to be original. I studied philosophy at university, and the movie reflects my way of thinking; I also have a huge elephant poster in my room. “Mentor” was one of the options among the mandatory elements. The script was written in my head, and turning it into a movie was easier than I thought. In the end, my film “Be an Elephant” won the audience vote prize.
You have learned and practiced classical filmmaking. How do you see development with AI tools? How do you think the topic of AI-generated content will develop?
In the early days of AI video creation there was nothing miraculous about it. Techniques like depth of field already existed and were in use; for perhaps 15–20 years we had been separating an object from its background and presenting a 2D image as a 3D video using depth of field. Then, as artificial intelligence developed, the work started to change: things changed a lot when the AI began to care about what the object is, not just that there is an object. At present, it no longer seems possible to match AI applications with programs such as After Effects. In image-to-video, the AI now recognizes the object and applies movement accordingly; that was unthinkable before. Text-to-video in particular is almost a miracle. Technically speaking, the next stage will not only recognize the objects in an image but will also generate the surroundings in 3D, which will allow us to move the camera freely. At that point, I think the transition from conventional movies to AI movies will accelerate.
“At the moment, it seems that software developers are more interested in this topic than movie directors. I think that will change in the near future. I don’t know if it’s destiny, but it seems that artificial intelligence will do to cinema what cinema did to theater. At the moment, producers, directors and screenwriters in Hollywood see artificial intelligence as the enemy, but they will make peace when the time comes.”
In the near future, I think applications like Netflix, Amazon, etc. will greet us on the home screen with start options like “AI movies” or “human movies.”

Do you think that not only Hollywood companies will be able to make great films in the near future, but also young people in their rooms at home if they get to grips with the topic of AI?
Sure… there will always be opportunities for those who want to do it. I think at first the dynamics of the sector will be determined by humans, and then we might see more automated things. Like the “selected for you” section on platforms like Netflix, it could even prepare movies tailored to your taste. In that case, I wonder what production companies can do differently from home movies. I imagine a system whose working principle is based on artificial intelligence modeling: large teams could develop a separate model for each film. First they would train the AI on the style of movie they want to make, and then the movies made with that model could be distinct from other movies. Otherwise, films made with artificial intelligence will be exciting at the beginning but become similar over time, and the film industry’s current problem will persist. To come back to your question: yes, anyone can make their own movie at home, including the music.
Tell us about the creation process. What was the workflow like for “Be an elephant”? What tools did you use and how did you proceed?
Since Runway organized this competition, I can say that I made 80% of the film with the Runway app. As I said at the beginning, I wrote the script myself. I created the images with Runway’s text-to-image; I think about 3 or 4 images came from Midjourney. I used Runway Gen-2 for image-to-video, but before that I upscaled the images in Magnific AI. I didn’t have to do that, but the images lost some detail in image-to-video, so to minimize this I increased the detail in the pictures with Magnific first. If I hadn’t done that, the picture would have looked a bit more pastel.
“Let me give a recommendation to those reading this interview: for a good final result, they should use a video upscaler like Topaz Video AI. … I haven’t used it myself, I don’t have the program, but from what I’ve seen it gives very good results.”
I was very sure what I was going to use for the music; it played in my head while I was writing the script. In the end, I did the voice recordings with Runway’s audio generation and put everything together in DaVinci Resolve.
That sounds great. What would be your tips for beginners at Runway? A certain type of prompt or the use of certain parameters? For example, how do you get opening scenes like the top shot sequence at the beginning?
Honestly, I describe it as if I were explaining it to someone; I have no special technique. I use cinema terms like “angle.” I don’t remember exactly what I used for the first scene, but I guess I must have combined terms like “wide camera angle,” “drone shot,” and “bird’s-eye view.” When I got the image I wanted, I remember reusing the seed number and adding terms like “elephant in the forest,” “desert,” “river”… to get different variations of the same image.
The way to make this work the way you want is practice: you have to keep trying until you get the picture you want, and compare each result with the prompt you wrote to see which sentence produced what. It will sound a little strange, but if you empathize with the artificial intelligence, you can steer it in the right direction 🙂. Comparing the pictures others have made with the prompts they used is also a good exercise.
Out of hundreds of entries, “Be an Elephant” won the audience award. What has changed since then? Are you already working on the next projects?
This award gave me confidence. After the competition I began to delve deeper into video production with artificial intelligence. I have many projects in mind: some are still in the early stages and will wait for the technology to develop; others I will realize. Right now I spend most of my time experimenting with different techniques, trying to get different results by combining many models. My work with AI will continue.