# Child Spawning

A task can spawn child tasks at runtime, including other durable tasks or entire DAG workflows. Children run independently on any available worker, and the parent can wait for their results.

Both durable tasks and DAG tasks support child spawning with the same core API. The key difference is that durable tasks free the parent's worker slot while waiting (via [eviction](/v1/task-eviction)), while DAG tasks hold their slot for the duration of execution.

#### Durable Tasks

## Spawning from Durable Tasks

A durable task can spawn child tasks at runtime. This is one of the core reasons to choose durable tasks over DAGs: the shape of work is decided as the task runs, not declared upfront.

> **Info:** Waiting for child results puts the parent task into an
> [evictable state](/v1/task-eviction): the worker slot is freed, and the
> parent is re-queued when results are available.

Because the parent is evicted while children execute:

- **No slot waste** — the parent doesn't hold a worker slot while N children run across your fleet.
- **No deadlocks** — an evicted parent can't starve its own children of worker slots.
- **Dynamic N** — you decide how many children to spawn based on runtime data (input size, API responses, agent reasoning).

### Spawning child tasks

Use the context object to spawn a child task from within a durable task. The child runs independently on any available worker.

#### Python

```python
from examples.fanout.worker import ChildInput, child_wf

# 👀 example: run this inside a parent task to spawn a child
child_wf.run(
    ChildInput(a="b"),
)
```

#### Typescript

```typescript
export const parentSingleChild = hatchet.task({
  name: 'parent-single-child',
  fn: async () => {
    const childRes = await child.run({ N: 1 });

    return {
      Result: childRes.Value,
    };
  },
});
```

#### Go

```go
// Inside a parent task
childResult, err := childWorkflow.Run(hCtx, ChildInput{
	Value: 1,
})
if err != nil {
	return err
}
```

#### Ruby

```ruby
FANOUT_CHILD_WF.run({ "a" => "b" })
```

### Parallel fan-out

Spawn many children at once and wait for all results. The parent is evicted during the wait, so it consumes no resources while children run.

#### Python

```python
async def run_child_workflows(n: int) -> list[dict[str, Any]]:
    return await child_wf.aio_run_many(
        [
            child_wf.create_bulk_run_item(
                input=ChildInput(a=str(i)),
            )
            for i in range(n)
        ]
    )
```

#### Typescript

```typescript
type ParentInput = {
  N: number;
};

export const parent = hatchet.task({
  name: 'parent',
  fn: async (input: ParentInput, ctx) => {
    const n = input.N;
    const promises = [];

    for (let i = 0; i < n; i++) {
      promises.push(child.run({ N: i }));
    }

    const childRes = await Promise.all(promises);
    const sum = childRes.reduce((acc, curr) => acc + curr.Value, 0);

    return {
      Result: sum,
    };
  },
});
```

#### Go

```go
// Run multiple child tasks in parallel using goroutines
var wg sync.WaitGroup
var mu sync.Mutex
results := make([]*ChildOutput, 0, n)

wg.Add(n)
for i := 0; i < n; i++ {
	go func(index int) {
		defer wg.Done()
		result, err := childWorkflow.Run(hCtx, ChildInput{Value: index})
		if err != nil {
			// in production, record the error instead of silently dropping it
			return
		}

		var childOutput ChildOutput
		err = result.Into(&childOutput)
		if err != nil {
			return
		}

		mu.Lock()
		results = append(results, &childOutput)
		mu.Unlock()
	}(i)
}
wg.Wait()
```

#### Ruby

```ruby
def run_child_workflows(n)
  FANOUT_CHILD_WF.run_many(
    n.times.map do |i|
      FANOUT_CHILD_WF.create_bulk_run_item(
        input: { "a" => i.to_s }
      )
    end
  )
end
```

### What children can be

A durable task can spawn any runnable:

| Child type | Example |
| --- | --- |
| **Regular task** | Spawn a stateless task for a quick computation or API call. |
| **Durable task** | Spawn another durable task that has its own checkpoints, sleeps, and event waits. |
| **DAG workflow** | Spawn an entire multi-task workflow and wait for its final output. |

### Error handling

#### Python

```python
try:
    child_wf.run(
        ChildInput(a="b"),
    )
except Exception as e:
    print(f"Child workflow failed: {e}")
```

#### Typescript

```typescript
export const withErrorHandling = hatchet.task({
  name: 'parent-error-handling',
  fn: async () => {
    try {
      const childRes = await child.run({ N: 1 });

      return {
        Result: childRes.Value,
      };
    } catch (error) {
      // decide how to proceed here
      return {
        Result: -1,
      };
    }
  },
});
```

#### Go

```go
result, err := childWorkflow.Run(hCtx, ChildInput{Value: 1})
if err != nil {
	// Handle error from child workflow
	fmt.Printf("Child workflow failed: %v\n", err)
	// Decide how to proceed - retry, skip, or fail the parent
}
```

#### Ruby

```ruby
begin
  FANOUT_CHILD_WF.run({ "a" => "b" })
rescue StandardError => e
  puts "Child workflow failed: #{e.message}"
end
```
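When fanning out to many children, you may also want to tolerate individual failures rather than abort on the first error. A stubbed Python sketch of that aggregation logic (the `run_child` stub is hypothetical and stands in for a real child run, e.g. `child_wf.aio_run` in the SDK above):

```python
import asyncio


# Hypothetical stand-in for a real child run; in the SDK this would be
# something like `await child_wf.aio_run(ChildInput(a=str(i)))`.
async def run_child(i: int) -> dict:
    if i == 2:
        raise RuntimeError(f"child {i} failed")
    return {"status": str(i)}


async def fan_out_tolerating_failures(n: int) -> tuple[list[dict], list[str]]:
    # return_exceptions=True keeps one failed child from cancelling the rest
    outcomes = await asyncio.gather(
        *(run_child(i) for i in range(n)), return_exceptions=True
    )
    results = [o for o in outcomes if not isinstance(o, BaseException)]
    errors = [str(o) for o in outcomes if isinstance(o, BaseException)]
    return results, errors


results, errors = asyncio.run(fan_out_tolerating_failures(4))
```

The parent can then decide whether a partial failure is acceptable or should fail the run.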

#### DAGs

## Spawning from DAG Tasks

DAG tasks can also spawn child tasks procedurally during execution. This lets you combine a fixed pipeline structure with dynamic child work inside individual tasks.

### Creating parent and child tasks

To implement child task spawning, you first need to create both parent and child task definitions.

#### Python

First, we'll declare the parent and child workflows:

```python
class ParentInput(BaseModel):
    n: int = 100


class ChildInput(BaseModel):
    a: str


parent_wf = hatchet.workflow(name="FanoutParent", input_validator=ParentInput)
child_wf = hatchet.workflow(name="FanoutChild", input_validator=ChildInput)


@parent_wf.task(execution_timeout=timedelta(minutes=5))
async def spawn(input: ParentInput, ctx: Context) -> dict[str, Any]:
    print("spawning child")

    result = await child_wf.aio_run_many(
        [
            child_wf.create_bulk_run_item(
                input=ChildInput(a=str(i)),
                additional_metadata={"hello": "earth"},
                key=f"child{i}",
            )
            for i in range(input.n)
        ],
    )

    print(f"results {result}")

    return {"results": result}
```

The parent's `spawn` task fans out the child runs in bulk and waits for their results. Next, we'll add a couple of tasks to the child workflow:

```python
@child_wf.task()
async def process(input: ChildInput, ctx: Context) -> dict[str, str]:
    print(f"child process {input.a}")
    return {"status": input.a}


@child_wf.task(parents=[process])
async def process2(input: ChildInput, ctx: Context) -> dict[str, str]:
    process_output = ctx.task_output(process)
    a = process_output["status"]

    return {"status2": a + "2"}
```

And that's it! The fanout parent will run, spawn the children, and then collect their results.

#### Typescript

```typescript
import sleep from '@hatchet-dev/typescript-sdk/util/sleep';
import { hatchet } from '../hatchet-client';

// (optional) Define the input type for the workflow
export type ChildInput = {
  Message: string;
};

export type ParentInput = {
  Message: string;
};

export const child = hatchet.workflow({
  name: 'child',
});

export const child1 = child.task({
  name: 'child1',
  fn: async (input: ChildInput, ctx) => {
    await sleep(30 * 1000);

    ctx.logger.info('hello from the child1', { hello: 'moon' });
    return {
      TransformedMessage: input.Message.toLowerCase(),
    };
  },
});

export const child2 = child.task({
  name: 'child2',
  fn: (input: ChildInput, ctx) => {
    ctx.logger.info('hello from the child2');
    return {
      TransformedMessage: input.Message.toLowerCase(),
    };
  },
});

export const child3 = child.task({
  name: 'child3',
  parents: [child1, child2],
  fn: async (input: ChildInput, ctx) => {
    ctx.logger.info('hello from the child3');
    return {
      TransformedMessage: input.Message.toLowerCase(),
    };
  },
});

export const parent = hatchet.task({
  name: 'parent',
  executionTimeout: '5m',
  fn: async (input: ParentInput, ctx) => {
    const c = await ctx.runChild(child, {
      Message: input.Message,
    });

    return {
      TransformedMessage: 'not implemented',
    };
  },
});
```

#### Go

```go
type ParentInput struct {
	Count int `json:"count"`
}

type ParentOutput struct {
	Sum int `json:"sum"`
}

func Parent(client *hatchet.Client) *hatchet.StandaloneTask {
	return client.NewStandaloneTask("parent-task",
		func(ctx hatchet.Context, input ParentInput) (ParentOutput, error) {
			log.Printf("Parent workflow spawning %d child workflows", input.Count)

			// Spawn multiple child workflows and collect results
			sum := 0
			for i := 0; i < input.Count; i++ {
				log.Printf("Spawning child workflow %d/%d", i+1, input.Count)

				// Spawn child workflow and wait for result
				childResult, err := Child(client).Run(ctx, ChildInput{
					Value: i + 1,
				})
				if err != nil {
					return ParentOutput{}, fmt.Errorf("failed to spawn child workflow %d: %w", i, err)
				}

				var childOutput ChildOutput
				err = childResult.Into(&childOutput)
				if err != nil {
					return ParentOutput{}, fmt.Errorf("failed to get child workflow result: %w", err)
				}

				sum += childOutput.Result

				log.Printf("Child workflow %d completed with result: %d", i+1, childOutput.Result)
			}

			log.Printf("All child workflows completed. Total sum: %d", sum)
			return ParentOutput{
				Sum: sum,
			}, nil
		},
	)
}

type ChildInput struct {
	Value int `json:"value"`
}

type ChildOutput struct {
	Result int `json:"result"`
}

func Child(client *hatchet.Client) *hatchet.StandaloneTask {
	return client.NewStandaloneTask("child-task",
		func(ctx hatchet.Context, input ChildInput) (ChildOutput, error) {
			return ChildOutput{
				Result: input.Value * 2,
			}, nil
		},
	)
}
```

#### Ruby

```ruby
FANOUT_PARENT_WF = HATCHET.workflow(name: "FanoutParent")
FANOUT_CHILD_WF = HATCHET.workflow(name: "FanoutChild")

FANOUT_PARENT_WF.task(:spawn, execution_timeout: 300) do |input, ctx|
  puts "spawning child"
  n = input["n"] || 100

  result = FANOUT_CHILD_WF.run_many(
    n.times.map do |i|
      FANOUT_CHILD_WF.create_bulk_run_item(
        input: { "a" => i.to_s },
        options: Hatchet::TriggerWorkflowOptions.new(
          additional_metadata: { "hello" => "earth" },
          key: "child#{i}"
        )
      )
    end
  )

  puts "results #{result}"
  { "results" => result }
end
```

Then add the child workflow's tasks:

```ruby
FANOUT_CHILD_PROCESS = FANOUT_CHILD_WF.task(:process) do |input, ctx|
  puts "child process #{input['a']}"
  { "status" => input["a"] }
end

FANOUT_CHILD_WF.task(:process2, parents: [FANOUT_CHILD_PROCESS]) do |input, ctx|
  process_output = ctx.task_output(FANOUT_CHILD_PROCESS)
  a = process_output["status"]
  { "status2" => "#{a}2" }
end
```

### Running child tasks

To spawn and run a child task from a parent task, use the appropriate method for your language:

#### Python

```python
from examples.fanout.worker import ChildInput, child_wf

# 👀 example: run this inside a parent task to spawn a child
child_wf.run(
    ChildInput(a="b"),
)
```

#### Typescript

```typescript
export const parentSingleChild = hatchet.task({
  name: 'parent-single-child',
  fn: async () => {
    const childRes = await child.run({ N: 1 });

    return {
      Result: childRes.Value,
    };
  },
});
```

#### Go

```go
// Inside a parent task
childResult, err := childWorkflow.Run(hCtx, ChildInput{
	Value: 1,
})
if err != nil {
	return err
}
```

#### Ruby

```ruby
FANOUT_CHILD_WF.run({ "a" => "b" })
```

### Parallel child task execution

Spawn multiple child tasks in parallel:

#### Python

```python
async def run_child_workflows(n: int) -> list[dict[str, Any]]:
    return await child_wf.aio_run_many(
        [
            child_wf.create_bulk_run_item(
                input=ChildInput(a=str(i)),
            )
            for i in range(n)
        ]
    )
```

#### Typescript

```typescript
type ParentInput = {
  N: number;
};

export const parent = hatchet.task({
  name: 'parent',
  fn: async (input: ParentInput, ctx) => {
    const n = input.N;
    const promises = [];

    for (let i = 0; i < n; i++) {
      promises.push(child.run({ N: i }));
    }

    const childRes = await Promise.all(promises);
    const sum = childRes.reduce((acc, curr) => acc + curr.Value, 0);

    return {
      Result: sum,
    };
  },
});
```

#### Go

```go
// Run multiple child tasks in parallel using goroutines
var wg sync.WaitGroup
var mu sync.Mutex
results := make([]*ChildOutput, 0, n)

wg.Add(n)
for i := 0; i < n; i++ {
	go func(index int) {
		defer wg.Done()
		result, err := childWorkflow.Run(hCtx, ChildInput{Value: index})
		if err != nil {
			// in production, record the error instead of silently dropping it
			return
		}

		var childOutput ChildOutput
		err = result.Into(&childOutput)
		if err != nil {
			return
		}

		mu.Lock()
		results = append(results, &childOutput)
		mu.Unlock()
	}(i)
}
wg.Wait()
```

#### Ruby

```ruby
def run_child_workflows(n)
  FANOUT_CHILD_WF.run_many(
    n.times.map do |i|
      FANOUT_CHILD_WF.create_bulk_run_item(
        input: { "a" => i.to_s }
      )
    end
  )
end
```

### Error handling

#### Python

```python
try:
    child_wf.run(
        ChildInput(a="b"),
    )
except Exception as e:
    print(f"Child workflow failed: {e}")
```

#### Typescript

```typescript
export const withErrorHandling = hatchet.task({
  name: 'parent-error-handling',
  fn: async () => {
    try {
      const childRes = await child.run({ N: 1 });

      return {
        Result: childRes.Value,
      };
    } catch (error) {
      // decide how to proceed here
      return {
        Result: -1,
      };
    }
  },
});
```

#### Go

```go
result, err := childWorkflow.Run(hCtx, ChildInput{Value: 1})
if err != nil {
	// Handle error from child workflow
	fmt.Printf("Child workflow failed: %v\n", err)
	// Decide how to proceed - retry, skip, or fail the parent
}
```

#### Ruby

```ruby
begin
  FANOUT_CHILD_WF.run({ "a" => "b" })
rescue StandardError => e
  puts "Child workflow failed: #{e.message}"
end
```

## Common Patterns

### Dynamic fan-out / fan-in

Process a list of items whose length is only known at runtime. Spawn one child per item, collect all results, then continue. Document processing and batch processing are canonical examples: when a batch of files arrives, a parent fans out to one child per document; each child parses, extracts, and validates its document in parallel across your worker fleet.


[Concurrency](/v1/concurrency) controls how many children run simultaneously. Hatchet distributes child tasks across available workers, so adding workers increases throughput without code changes. For rate-limited external services (OCR, LLM APIs), combine with [Rate Limits](/v1/rate-limits) to throttle child execution across all workers.
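The fan-out/fan-in shape reduces to a simple pattern: spawn one child per item, wait for all, aggregate. A stubbed Python sketch of that control flow (`process_document` is hypothetical and stands in for a spawned child run):

```python
import asyncio


# Hypothetical stand-in for a child task run (e.g. child_wf.aio_run in the SDK):
# each child parses one document independently.
async def process_document(doc: str) -> dict:
    return {"doc": doc, "words": len(doc.split())}


async def process_batch(docs: list[str]) -> dict:
    children = [process_document(d) for d in docs]  # N decided at runtime
    results = await asyncio.gather(*children)       # fan-in: wait for all children
    return {
        "processed": len(results),
        "total_words": sum(r["words"] for r in results),
    }


summary = asyncio.run(process_batch(["a b c", "d e", "f"]))
```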

### Agent loops

In an **agent loop**, a durable task spawns a new child run of itself with updated input until a termination condition is met. Each iteration is a separate child task, giving full observability in the dashboard. AI agents use this pattern when they reason about what to do, spawn a subtask (or a sub-workflow), inspect the result, and decide whether to continue, branch, or stop.
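The loop's control flow can be sketched in plain Python (the `run_step` stub is hypothetical and stands in for spawning a child run):

```python
# Stubbed sketch of an agent loop: the parent repeatedly spawns a child run
# with updated state until a termination condition is met.
def run_step(state: int) -> int:
    # stand-in for one child run; here it just refines the state by halving it
    return state // 2


def agent_loop(state: int, max_iters: int = 10) -> tuple[int, int]:
    iters = 0
    while state > 0 and iters < max_iters:  # termination condition + safety cap
        state = run_step(state)             # one child run per iteration
        iters += 1
    return state, iters


final_state, iterations = agent_loop(100)
```

A bounded iteration cap like `max_iters` is a common safeguard against runaway loops.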


### Recursive workflows

A durable task spawns child durable tasks, each of which may spawn its own children. This creates a tree of work driven entirely by runtime logic, useful for crawlers, recursive search, and tree-structured computations.
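The tree shape can be sketched with plain recursion (the `fetch_links` stub is hypothetical; in Hatchet, each recursive call would be a spawned child task):

```python
# Stubbed sketch of a recursive tree of work, e.g. a crawler. Each "task"
# decides at runtime whether to spawn further children.
def fetch_links(url: str, depth: int) -> list[str]:
    # stand-in for real work: each page links to two sub-pages until depth runs out
    return [f"{url}/{i}" for i in range(2)] if depth > 0 else []


def crawl(url: str, depth: int) -> list[str]:
    visited = [url]
    for link in fetch_links(url, depth):
        visited.extend(crawl(link, depth - 1))  # each child may recurse further
    return visited


pages = crawl("root", 2)
```

Passing a decrementing `depth` through the input is a simple way to bound the recursion.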

## Use cases

1. **Dynamic fan-out processing** — When the number of parallel tasks is determined at runtime.
2. **Reusable workflow components** — Create modular workflows that can be reused across different parent workflows.
3. **Resource-intensive operations** — Spread computation across multiple workers.
4. **Agent-based systems** — Allow AI agents to spawn new workflows based on their reasoning.
5. **Long-running operations** — Break down long operations into smaller, trackable units of work.
