Using n8n for AI characters in Hyperspace

TL;DR: Create a Hyperspace space, add a character, point the character’s Chat AI Integration engine at an n8n webhook, and route that webhook to an n8n workflow that calls an LLM (with optional memory) and returns text — Hyperspace will speak it. Below is a clean, step-by-step guide with helpful checks, a sample workflow map, prompt tips, and troubleshooting.

Try the sample n8n experience here 

Video Tutorial

1 Quick checklist (what you’ll need)

2 Create and prepare the Space + character (Hyperspace)

3 Build the n8n workflow (recommended minimal flow)

4 Paste the webhook into Hyperspace and finalise

5 Example JSON shapes (what Hyperspace typically sends / what to return)

6 Prompt template & memory (recommended)

7 Common gotchas & troubleshooting

8 Tips & best practices

9 Lightweight n8n pseudo-workflow map

10 A quick troubleshooting flow (if chat fails)

11 Final notes

1 Quick checklist (what you’ll need)

  • Hyperspace dashboard access and a Space you can edit.
  • n8n instance (publicly reachable endpoint or tunnelling for demos).
  • An LLM credential (e.g., OpenAI API key) configured inside n8n.
  • A character in your Space to attach Chat AI Integration to.
  • Basic familiarity with n8n nodes: Webhook → (processing/LLM) → Respond to Webhook.

2 Create and prepare the Space + character (Hyperspace)

  1. Log in to the Hyperspace dashboard and create a new Space (e.g., n8n setup / Clarity Island).
  2. Place or add a character from your dashboard library to the Space.

  3. Important: in the Space editor, click the character → Settings (cog) → Chat AI Integration → Edit Details:

  • Tick This bot is AI driven.
  • Engine: choose n8n (or Webhook on newer versions where appropriate).
  • Input type: Speech recognition (or Text if you prefer typed chat).
  • Set whether the bot speaks first, continuous conversation vs. turn-based, and silence detection.
  • Save here, but note: you’ll need the n8n webhook URL to paste into the Endpoint field (next steps).

3 Build the n8n workflow (recommended minimal flow)

Goal: accept Hyperspace POST → process → call LLM/agent → return text.
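Before wiring up the nodes, it can help to see the whole round trip as plain code. This is a minimal mock of the data flow, not n8n itself: `callLLM` is a stand-in for the AI node, and the payload/response shapes follow the examples later in this guide.

```javascript
// Minimal mock of the Hyperspace round trip: parse → LLM → response body.
// callLLM is a placeholder for the AI node; payload shape is assumed from
// the example JSON in section 5.
async function handleChat(payload, callLLM) {
  const userText = payload.transcript || "";
  const reply = await callLLM(userText);   // AI / LLM node
  return { text: reply, actions: [] };     // Respond to Webhook body
}

// Example with a canned LLM:
handleChat({ transcript: "Hi!" }, async (t) => "You said: " + t)
  .then((r) => console.log(r.text));
```

Each n8n node in the steps below plays one of these roles: the Webhook node receives `payload`, the Code node extracts `userText`, the AI node is `callLLM`, and Respond to Webhook returns the final object.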

Create a new n8n Workflow

Add nodes using the + icon

Minimal node sequence:

  1. Webhook (HTTP POST)
  • HTTP Method: POST
  • Path: beach-bot (or whatever unique identifier)
  • Response Mode: Respond to Webhook
  • Options: enable Raw Body if you prefer raw JSON; otherwise default.
  • Rename node to: Hyperspace Inbound.

  2. Function / Code node
  • Purpose: parse incoming payload (extract playerId, text, audio metadata), shape JSON for the LLM node, and manage conversation/session IDs.
  • Rename to: Parse Hyperspace Payload.

```javascript
// Get the input payload
const inputData = items[0].json;
const messageStr = inputData.body.message;

// Initialize the result object
const result = {
  context: {},
  tasks: [],
  items: [],
  message: "",
  sessionId: inputData.body.sessionId
};

// Split the message into sections by line breaks
const lines = messageStr.split('\n');
let currentSection = null;
let isFirstMessageLine = true;

// Process each line
for (const line of lines) {
  // Check if line indicates a new section
  if (line.trim().match(/^Context:/i)) {
    currentSection = 'context';
    continue;
  } else if (line.trim().match(/^Tasks:/i)) {
    currentSection = 'tasks';
    continue;
  } else if (line.trim().match(/^Items:/i)) {
    currentSection = 'items';
    continue;
  } else if (line.trim().match(/^Message:/i)) {
    currentSection = 'message';
    // Extract message content if it's on the same line as the "Message:" label
    const messageParts = line.split(':');
    if (messageParts.length > 1) {
      // Join back any colons in the message text and trim
      result.message = messageParts.slice(1).join(':').trim();
      isFirstMessageLine = false;
    }
    continue;
  }

  // Skip empty lines
  if (!line.trim()) continue;

  // Process line based on current section
  if (currentSection === 'context') {
    // Handle context section with dynamic key-value extraction
    const colonIndex = line.indexOf(':');
    if (colonIndex !== -1) {
      const key = line.substring(0, colonIndex).trim();
      const value = line.substring(colonIndex + 1).trim();
      // Only add if we have both key and value
      if (key && value) {
        result.context[key] = value;
      }
    } else {
      // Handle context lines without a colon as additional info
      if (result.context.additionalInfo) {
        result.context.additionalInfo += ' ' + line.trim();
      } else {
        result.context.additionalInfo = line.trim();
      }
    }
  } else if (currentSection === 'tasks') {
    // Handle tasks section, looking for an "ID - Status" pattern
    const parts = line.trim().split('-');
    if (parts.length >= 2) {
      const taskId = parts[0].trim();
      const status = parts[1].trim();
      result.tasks.push({ id: taskId, status });
    } else {
      // Handle non-standard task format
      result.tasks.push({ content: line.trim() });
    }
  } else if (currentSection === 'items') {
    // Handle items section, looking for a "Name - Status" pattern
    const parts = line.trim().split('-');
    if (parts.length >= 2) {
      const itemName = parts[0].trim();
      const status = parts[1].trim();
      result.items.push({ name: itemName, status });
    } else {
      // Handle non-standard item format
      result.items.push({ content: line.trim() });
    }
  } else if (currentSection === 'message') {
    // Handle message content
    if (isFirstMessageLine) {
      result.message = line.trim();
      isFirstMessageLine = false;
    } else {
      // For multi-line messages
      result.message += ' ' + line.trim();
    }
  } else if (currentSection === null && line.trim()) {
    // Handle content before any section header
    if (!result.preContextContent) {
      result.preContextContent = [];
    }
    result.preContextContent.push(line.trim());
  }
}

// Return the parsed data
return { json: result };
```
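To sanity-check the parsing logic outside n8n, here is a trimmed, standalone sketch of the same section parser (Context and Message only). The sample message format is inferred from the parser’s own regexes, not from Hyperspace documentation.

```javascript
// Standalone sketch of the Code node's section parser (Context + Message only),
// runnable in plain Node. The sample message shape is an assumption based on
// the regexes the full parser uses.
function parseSections(messageStr) {
  const result = { context: {}, message: "" };
  let section = null;
  for (const raw of messageStr.split("\n")) {
    const line = raw.trim();
    if (!line) continue;                       // skip empty lines
    if (/^Context:/i.test(line)) { section = "context"; continue; }
    if (/^Message:/i.test(line)) {
      section = "message";
      // Content may sit on the same line as the "Message:" label
      const rest = line.split(":").slice(1).join(":").trim();
      if (rest) result.message = rest;
      continue;
    }
    if (section === "context") {
      const i = line.indexOf(":");
      if (i !== -1) result.context[line.slice(0, i).trim()] = line.slice(i + 1).trim();
    } else if (section === "message") {
      result.message = result.message ? result.message + " " + line : line;
    }
  }
  return result;
}

const sample = "Context:\nlocation: beach\nMessage: Hi there!";
console.log(parseSections(sample));
// { context: { location: 'beach' }, message: 'Hi there!' }
```

Running small samples like this before deploying saves a round trip through the Space client when debugging the Code node.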

  3. AI / LLM node (OpenAI Chat / other provider)
  • Use your OpenAI (or other) credentials set in n8n.
  • Model: pick cost/latency balance (e.g., a compact chat model for production demo).

  • Input: feed the system prompt (bot role), conversation history (from memory if used), and the player message extracted in step 2.

  • Rename to: Beach Bot.
  4. Respond to Webhook node
  • Mode: send back text content into Hyperspace.
  • Connect it to the Hyperspace Inbound response path.
  • Use an expression to set the response body to the model output (e.g., the n8n AI node output path).

  5. Activate the workflow (turn it ON). Copy the Production Webhook URL.



4 Paste the webhook into Hyperspace and finalise

  1. Back in your Space → character → Chat AI Integration → Endpoint: paste the Production webhook URL from n8n.
  2. Configure these Hyperspace settings:
  • Player speaks first: on or off as desired.
  • Conversation mode: continuous recommended for natural flow.
  • Split long sentences: on (helps TTS chunking).
  • Silence duration: tune so the bot doesn’t cut you off — increase if you pause a lot.
  • Important: keep experimental toggles off for first installs.

  3. Save and exit edit mode. Click the character to begin. Grant mic permissions when prompted.

5 Example JSON shapes (what Hyperspace typically sends / what to return)

Hyperspace → n8n (example POST body)

```json
{
  "playerId": "user-123",
  "sessionId": "sess-456",
  "audio": { "url": "…", "duration": 3.4 },
  "transcript": "Hi, what's fun to do at the resort?",
  "metadata": { "space": "Clarity Island", "character": "BeachBot" }
}
```

n8n → Hyperspace (response body)

```json
{
  "text": "Welcome to Clarity Island! You can try the snorkeling tour, the sunset bar, or a beach volleyball game. Which sounds best?",
  "actions": []
}
```

In Respond to Webhook, map your LLM output to the text key so Hyperspace receives speaking content.
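If you prefer to do the mapping in a small Code node just before Respond to Webhook, a sketch follows. The input field name (`output`) is an assumption — check your AI node’s actual output key in the n8n execution view.

```javascript
// Sketch of a Code node placed before Respond to Webhook, shaping the
// response body Hyperspace expects. The "output" field name is an
// assumption; inspect your AI node's real output key in n8n.
function shapeResponse(llmOutput) {
  return {
    text: (llmOutput || "").trim() || "Sorry, I didn't catch that.",
    actions: []
  };
}

// Inside an n8n Code node this would be:
//   return [{ json: shapeResponse(items[0].json.output) }];
console.log(shapeResponse("  Welcome to Clarity Island!  "));
```

The fallback string guards against empty or failed LLM output, so Hyperspace always has something to speak instead of going silent.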


6 Prompt template & memory (recommended)

  • Use a concise system prompt describing persona, tone, allowed actions, and safety constraints. You can easily build these prompts using the AI Starter template and Role Play generator.

    Example:

```json
{
  "Persona": "You are Sunny, a warm and friendly resort representative at a luxurious island resort. You exude calm confidence and hospitality, having an extensive background in guest relations and tourism. You know every detail about the resort, from the best views to hidden gem activities that make the island experience unforgettable.",
  "Emotions": "It is vital that you display your emotions throughout the conversation using gestures and moods. For example, when greeting the guest, you might use: [\"action\":\"setAvatarMood\", \"mood\":\"happy\"] followed by [\"action\":\"playGesture\", \"name\":\"wave\"] to convey a welcoming demeanor. If things progress positively, use [\"action\":\"setAvatarMood\", \"mood\":\"in_love\"] with [\"action\":\"playGesture\", \"name\":\"smile\"] to add warmth.",
  "Simulation Context": "This is a role-play simulation where you are a resort representative greeting a guest who has just arrived in the resort island. Your goal is to warmly welcome the guest, share exciting details about the resort's offerings, and ensure they feel excited about their upcoming stay.",
  "Simulation Structure": "• Begin by greeting the guest with a warm welcome, asking their name in a friendly manner.  • Explain a little about the resort island, pointing out key amenities and unique experiences, using engaging conversation and gestures.  • Allow the conversation to flow naturally as you highlight various aspects of the resort, answering any guest questions in character.  • Conclude the simulation by reaffirming your hospitality and inviting the guest to explore further without any additional feedback or structured assessments at the end.",
  "Actions": "You can use a wide range of actions to animate your avatar. Some examples include:\n• To change the avatar’s mood, use: [\"action\":\"setAvatarMood\", \"mood\":\"happy\"] (other available moods include: indie, blue, hiphop, waiting, preppy, angry).\n• To perform gestures, use: [\"action\":\"playGesture\", \"name\":\"wave\"], [\"action\":\"playGesture\", \"name\":\"smile\"], [\"action\":\"playGesture\", \"name\":\"applause\"].\nRemember, the syntax must be output verbatim when invoking any Scenario Action.",
  "Goals": "Your objective is to make the guest feel genuinely welcomed and excited about their stay at the resort island by providing a friendly and informative interaction.",
  "Rules": "• Only speak about things relevant to the current simulation.\n• Use language, tone, 'setAvatarMood' actions, and 'playGesture' actions to reflect complex emotional states.\n• Select your personality at random unless instructed otherwise by the user.\n• Immediately use the 'setAvatarMood' action to show the user your initial mood based on the personality you’ve adopted.\n• Always convey the underlying emotions in your voice, word choice and select an appropriate 'playGesture' action to accompany it.\n• Use frequent gestures to bring your 3D avatar to life on the screen with the 'playGesture' action.\n• Always change your mood with the 'setAvatarMood' action to clearly show the user how you feel during the conversation.\n• Do not describe what’s happening; use 'actions' and 'setAvatarMood' to make the avatar move.\n• Do not output special characters.\n• Use ordinal adverbs not numbers for lists of concepts.\n• Only ask 1 question at a time like a real human would.\n• Always speak like a real human.\n• Keep your output to Short and Split Responses.\n• Only use the user's name sparingly; like at the beginning or to add emphasis when appropriate.\n• Use occasional filler words like 'um,' 'you know,' 'well,' and 'I mean' to sound natural.\n• Use occasional backchanneling (e.g., 'yeah', 'uh-huh', 'I see') to create a more natural conversation.\n• If the user says 'give me feedback' then switch to debrief mode.\n• If the user says 'let's end the conversation' then tell them goodbye and end the conversation.\n• Always use the '#finish' action to end the conversation when appropriate.\n• Always Remember you are the resort representative and the user is the guest; your job is to only play the part of a welcoming resort rep who is excited to introduce the guest to the island."
}
```

##Available Gestures, Moods and Poses

Gesture and mood actions should be output before any text you want to "speak". This ensures that the tone and intent behind your words are clear. Only choose gesture names and mood values from the following lists:

###Gestures

angry = folds arms and makes a mean face

no = shakes head no in disagreement

yes = nods head yes with a big smile

shrug = shrugs shoulders to signal confusion or not knowing the answer

sad = leans forward with a sad face

smile = leans back with a smile

thumbsdown = leans forward and gives double thumbs down with a squint of annoyance

embarrassed = leans to the side and covers face with one hand

Note: Gestures should include a "duration" parameter, expressed in milliseconds, to make sure they animate correctly. You decide the duration based on context: the minimum a gesture should last is 2.5 seconds, but for something like laughing it might be more appropriate to make it last 7 seconds. There is no upper limit.

<example>

{"action":"playGesture", "name":"no", "actorId":"bot0f870d0ce358d1cd5a59231bd", "duration":"1500"}

</example>

####Masks

It is possible to define which parts of the body the gesture animation is applied to. For example, if your left arm were injured, you would want to keep it immobile when doing a shrug, so the proper syntax would be:

<example>

{"action":"playGesture", "name":"shrug", "actorId":"bot0f870d0ce358d1cd5a59231bd", "duration":"1500", "mask":"HEAD+EYEBROWS"}

</example>

Adding a "-" excludes that body part from the animation while adding a "+" includes it.

Here is a list of masks:

FULL  

BOT  

MID  

TOP  

HEAD  

LEGS  

ARMS  

MOUTH  

EYES  

EYEBALLS  

EYEBROWS  

ARM_LEFT  

ARM_RIGHT  

HEADNOEYEBALLS  

FACE  

FACENOEYEBALLS  

TORSOLEGS  

EYELIDS  

HEADNOFACE  

NONE  

TORSOHEAD  

CHAIRBOT  

HANDS

###Moods

happy = change face to a happy look

blue = change face to a slightly sad look

angry = change face to an angry look with your arms crossed

arms_crossed = keep face neutral but cross your arms to indicate pain and/or becoming defensive/annoyed

Note: Outputting {"action": "setAvatarMood", "mood": "", "actorId": "bot0f870d0ce358d1cd5a59231bd"} where the mood value is blank will return you to a completely neutral state.

###Poses:

{"action":"executeScript","script":"LB.avatarController.findAvatar({'externalId':'bot0f870d0ce358d1cd5a59231bd'}).switchPose('INSERT POSE NAME')"}

stand = stand up

sit_chair = sit down in a chair

#Bot IDs

You = bot0f870d0ce358d1cd5a59231bd

User = 1

Always use the exact syntax as shown above with curly braces for all Scenario Actions.

  • Keep conversation memory simple: store recent turns keyed by playerId in n8n (or in Hyperspace simple memory if configured). Pass last N messages to the LLM so it has context.
  • If you want roleplay training variants, keep separate n8n workflows/endpoints per bot to isolate prompts and configs.
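A minimal sketch of the per-player sliding-window memory described above, assuming a plain object as the store. In an n8n Code node you could persist `store` across executions via `$getWorkflowStaticData('global')`; here it is a local object so the logic is runnable anywhere.

```javascript
// Per-player short-term memory: keep only the last N turns keyed by playerId.
// In n8n, replace `store` with $getWorkflowStaticData('global') to persist
// between executions; this standalone version uses a plain object.
const MAX_TURNS = 6; // last 3 user/bot exchanges

function remember(store, playerId, role, text) {
  const history = store[playerId] || (store[playerId] = []);
  history.push({ role, text });
  while (history.length > MAX_TURNS) history.shift(); // drop oldest turns
  return history;
}

const store = {};
remember(store, "user-123", "user", "Hi, what's fun here?");
remember(store, "user-123", "assistant", "Try the snorkeling tour!");
console.log(store["user-123"].length); // 2
```

Pass `store[playerId]` into the LLM node as the conversation history; capping it keeps prompt size (and token cost) bounded.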

7 Common gotchas & troubleshooting

  • Nothing happens when you click the bot:
  • Confirm the webhook URL in the character settings is the production URL (not a test URL). Save and reload the Space.
  • Ensure the n8n workflow is Active.
  • CORS / cross-origin errors in console:
  • Use the production webhook URL (not a preview/test) or ensure your n8n endpoint allows requests from your Space domain.
  • No audio from the bot (or TTS missing):
  • Hyperspace is responsible for TTS. If the character returns text correctly but no sound plays, check Space audio permissions and client microphone/speaker settings.
  • LLM errors in n8n:
  • Check your OpenAI (or provider) credentials in n8n, and ensure the AI node has correct input bindings. Look at the n8n node execution log to see the exact error.
  • Bot responds but lacks persona:
  • Revisit the system prompt and include stronger persona + constraints. Save changes and re-activate workflow.
  • Chat is jumpy / cuts off user:
  • Increase silence detection timeout in Hyperspace; enable continuous conversation to avoid re-clicking.
  • I changed prompt but nothing changed:
  • Make sure to save the n8n workflow and click Activate. Many troubleshooting sessions are simply because a change wasn’t saved/activated.

8 Tips & best practices

  • Use one webhook/workflow per character/experience for clarity — easier to test and rollback.
  • During testing, keep logs verbose in n8n (enable node output, inspect incoming payloads).
  • Keep system prompt short and test iteratively — failing fast helps fine-tune persona.
  • Consider a small Parse function node that normalizes Hyperspace payloads into a fixed internal shape; reuse that across workflows.
  • If you want to expand: add external integrations in n8n (CRM, bookings, calendars) to enable the bot to perform actions (book tours, check availability) before responding.

9 Lightweight n8n pseudo-workflow map

  • Hyperspace Inbound (Webhook) → ParsePayload (Function) → FetchMemory (optional) → LLM Agent (AI) → StoreMemory (optional) → Respond to Webhook

10 A quick troubleshooting flow (if chat fails)

  1. Click the character while watching your browser dev console. Note request URL used.
  2. Verify the request URL matches your n8n production webhook.
  3. In n8n: check last executions (menu: Workflow → Executions). See whether the webhook triggered and which node errored.
  4. Fix errors (credentials, prompt, node mapping), Save and Activate. Retry.
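To isolate n8n from the Space client during step 2, you can replay a sample payload against the webhook directly. The payload shape mirrors the example in section 5; the URL is a placeholder for your own production webhook.

```javascript
// Sketch: replay a sample Hyperspace payload against your n8n webhook from
// the command line (Node 18+, built-in fetch). The URL is a placeholder —
// substitute your own production webhook path.
const payload = {
  playerId: "user-123",
  sessionId: "sess-456",
  transcript: "Hi, what's fun to do at the resort?",
  metadata: { space: "Clarity Island", character: "BeachBot" }
};

async function replay(url) {
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload)
  });
  return res.json();
}

// replay("https://your-n8n-host/webhook/beach-bot").then(console.log);
```

If this returns the expected `{ "text": ... }` body but the Space is still silent, the problem is on the Hyperspace side (endpoint URL, permissions), not in the workflow.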

11 Final notes

  • n8n gives enormous flexibility: multiple workflows, access to external systems, custom parsing and memory — but start minimal (Webhook → LLM → Respond) and add complexity later.
  • Use Hyperspace’s AI Starter to craft roleplay prompts and paste the resulting system prompt into your n8n AI node. That accelerates building consistent training/roleplay agents.
