
Vector Fields in Augmented Reality


Source code: a-sumo/specs-samples

Snapcode: scan to try on Spectacles

To model wind around an aircraft wing, gravitational pull on satellites, or magnetic forces between poles, we use vector fields.

In essence, a vector field is a direction and magnitude assigned to every point in space.

More practically, though, vector fields represent either motion itself or what drives it, such as a force.

There are three main approaches to visualize motion:

  1. We can divide space into sample points and show where each point is being pulled and how strongly. This is done with arrows.
  2. We can also show how an element with mass moves when carried by the flow, which is done via particles.
  3. Or we can trace the paths a massless particle would follow through the field, with flow lines.

Rendering vector fields in 2D is fairly straightforward.

In 3D on AR glasses, however, we need to satisfy a few constraints:

  1. The geometry must look consistent from all viewing angles. A volumetric line shouldn’t look flat from certain directions.

  2. The geometry must adapt to its spatial context, which in this case is the direction and magnitude sampled at a spatial location. Importing static or pre-animated 3D objects wouldn’t allow this kind of adaptivity, which means we must generate the geometry procedurally.

  3. It must run smoothly in real time, while looking believable, so we need a minimal rendering procedure.

The most straightforward approach to render 3D lines is to generate tubes with capped ends. In Lens Studio, we can do this via the MeshBuilder API, which we first explored in our article on color spaces.

If the normals are encoded correctly (which requires proper vertex ordering), GPU interpolation provides smooth color transitions across the tube surface and end caps, yielding a believable look with minimal geometry.

The animation below illustrates the construction process:

Animation of the tube construction (Manim source available)

Procedural Generation Code

The tube body is generated by creating rings of vertices along the length, then connecting them with triangles:

private generateSingleTube(gridX: number, gridY: number, pathLength: number, circleSegments: number): void {
    const startVertexIndex = this.meshBuilder.getVerticesCount();

    // Generate tube body vertices
    for (let i = 0; i < pathLength; i++) {
        const t = i / (pathLength - 1);

        for (let j = 0; j < circleSegments; j++) {
            const theta = (j / circleSegments) * Math.PI * 2;
            const localX = Math.cos(theta);
            const localY = Math.sin(theta);

            this.meshBuilder.appendVerticesInterleaved([
                0.0, 0.0, t,           // position: unused, unused, t
                0.0, 0.0, 1.0,         // normal: unused, unused, isTube=1
                localX, localY,        // texture0: unit circle coords
                gridX, gridY           // texture1: grid indices
            ]);
        }
    }

    // Generate indices for tube body
    for (let segment = 0; segment < pathLength - 1; segment++) {
        for (let i = 0; i < circleSegments; i++) {
            const current = startVertexIndex + segment * circleSegments + i;
            const next = startVertexIndex + segment * circleSegments + ((i + 1) % circleSegments);
            const currentNext = startVertexIndex + (segment + 1) * circleSegments + i;
            const nextNext = startVertexIndex + (segment + 1) * circleSegments + ((i + 1) % circleSegments);

            this.meshBuilder.appendIndices([
                current, next, currentNext,
                next, nextNext, currentNext
            ]);
        }
    }

    // Generate end caps for this tube
    this.generateSingleTubeCaps(gridX, gridY, startVertexIndex, pathLength, circleSegments);
}

The end caps are created by adding a center vertex and connecting it to the ring vertices in a fan pattern:

private generateSingleTubeCaps(gridX: number, gridY: number, startVertexIndex: number, pathLength: number, circleSegments: number): void {
    const tubeVertexCount = pathLength * circleSegments;

    // START CAP (at t = 0)
    const startCapIndex = this.meshBuilder.getVerticesCount();
    this.meshBuilder.appendVerticesInterleaved([
        0.0, 0.0, 0.0,         // position: unused, unused, t=0
        0.0, 0.0, 0.0,         // normal: unused, unused, isTube=0 (cap center)
        0.0, 0.0,              // texture0: center
        gridX, gridY           // texture1: grid indices
    ]);

    for (let i = 0; i < circleSegments; i++) {
        const current = startVertexIndex + i;
        const next = startVertexIndex + (i + 1) % circleSegments;
        this.meshBuilder.appendIndices([startCapIndex, next, current]);
    }

    // END CAP (at t = 1)
    const endCapIndex = this.meshBuilder.getVerticesCount();
    this.meshBuilder.appendVerticesInterleaved([
        0.0, 0.0, 1.0,         // position: unused, unused, t=1
        0.0, 0.0, 0.0,         // normal: unused, unused, isTube=0 (cap center)
        0.0, 0.0,              // texture0: center
        gridX, gridY           // texture1: grid indices
    ]);

    const lastRingStart = startVertexIndex + (pathLength - 1) * circleSegments;
    for (let i = 0; i < circleSegments; i++) {
        const current = lastRingStart + i;
        const next = lastRingStart + (i + 1) % circleSegments;
        this.meshBuilder.appendIndices([endCapIndex, current, next]);
    }
}

Deforming 3D lines adds the difficulty of preserving the tube’s volume. If we simply offset every vertex by the same displacement vector, the cross-section rings keep their original orientation instead of turning with the curve, and we end up with wrong end caps.

Naive offset: rings stay horizontal
TNB frame: rings perpendicular to the tangent

The solution is to compute a moving coordinate frame, a TNB frame, along the path. At each point, we calculate the tangent T (the local direction of the path), a normal N perpendicular to it, and the binormal B = T × N.

Each vertex is then positioned using: p = center + (cos θ · N + sin θ · B) · radius
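As a minimal sketch of this frame computation (plain TypeScript with stand-in Vec3 helpers, not the project’s actual code), we can derive N from a fixed reference vector rather than the Frenet normal, which is undefined on straight segments:

type Vec3 = { x: number; y: number; z: number };

const add = (a: Vec3, b: Vec3): Vec3 => ({ x: a.x + b.x, y: a.y + b.y, z: a.z + b.z });
const sub = (a: Vec3, b: Vec3): Vec3 => ({ x: a.x - b.x, y: a.y - b.y, z: a.z - b.z });
const scale = (a: Vec3, s: number): Vec3 => ({ x: a.x * s, y: a.y * s, z: a.z * s });
const cross = (a: Vec3, b: Vec3): Vec3 => ({
    x: a.y * b.z - a.z * b.y,
    y: a.z * b.x - a.x * b.z,
    z: a.x * b.y - a.y * b.x,
});
const normalize = (a: Vec3): Vec3 => scale(a, 1 / Math.hypot(a.x, a.y, a.z));

// Place one ring of vertices around `center`, perpendicular to the local tangent.
function ringVertices(prev: Vec3, center: Vec3, next: Vec3, radius: number, segments: number): Vec3[] {
    const T = normalize(sub(next, prev));                      // tangent: central difference
    const ref: Vec3 = Math.abs(T.y) < 0.99 ? { x: 0, y: 1, z: 0 } : { x: 1, y: 0, z: 0 };
    const N = normalize(cross(ref, T));                        // normal: perpendicular to T
    const B = cross(T, N);                                     // binormal completes the frame
    const ring: Vec3[] = [];
    for (let j = 0; j < segments; j++) {
        const theta = (j / segments) * Math.PI * 2;
        // p = center + (cos θ · N + sin θ · B) · radius
        const offset = add(scale(N, Math.cos(theta)), scale(B, Math.sin(theta)));
        ring.push(add(center, scale(offset, radius)));
    }
    return ring;
}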

Object to World Transform

A key node from the material editor that makes everything work is the Object to World converter. It transforms our computed vertex positions from object space into world space, ensuring the deformed tubes render correctly regardless of the object’s transform. Since you’re here, I might as well recommend another node: Hue/Saturation. Lens Studio renders low-value colors as transparent, so this node lets you adjust the value channel independently. It’s also useful for making colors pop by raising saturation.

Material editor showing the Transform Vector Object to World node

Animation (Manim source available)

I implemented a test example in Lens Studio and deployed it on Spectacles. Performance was satisfactory, so I continued with this approach.

Here’s the workflow for this test example in Lens Studio:

(Workflow diagram)

With tube generation and deformation working, we can now trace paths through a vector field.

Starting from sample points, we integrate by repeatedly querying the field at the current position and stepping in that direction.

Each step updates the tube’s frame, bending it along the flow.
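On the CPU, this loop would look something like the sketch below (reusing the Vec3 helpers from the earlier sketch); in the actual project, the equivalent stepping happens in the vertex shader, as shown later:

// Trace one flow line by Euler integration. `field` is any callback
// returning the field vector at a point.
function traceFlowLine(start: Vec3, field: (p: Vec3) => Vec3, stepSize: number, steps: number): Vec3[] {
    const path: Vec3[] = [start];
    let pos = start;
    for (let i = 0; i < steps; i++) {
        const dir = normalize(field(pos));      // query the field at the current position
        pos = add(pos, scale(dir, stepSize));   // step in that direction
        path.push(pos);
    }
    return path;
}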

Different field patterns produce vastly different flow lines.

3.1 Contraction

Vectors spiral inward toward a target point, creating sink-like behavior.
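For illustration only (the project’s exact formula may differ), such a sink with swirl can be built from an inward pull plus a tangential component:

// Illustrative contraction field: inward pull toward `target` plus a swirl
// around the vertical axis. Constants and axis choice are assumptions.
function contractionField(p: Vec3, target: Vec3, swirl: number): Vec3 {
    const inward = sub(target, p);
    const axis: Vec3 = { x: 0, y: 1, z: 0 };
    const tangent = cross(axis, inward);        // perpendicular to the inward direction
    return add(inward, scale(tangent, swirl));
}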


Fig 3.1a: Field visualization showing contraction flow pattern

Fig 3.1b: Demo on Spectacles

3.2 Expansion

Radial waves emanate outward from the target with 3D oscillation perpendicular to the flow.
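A hedged sketch of one way to get this behavior, with assumed constants:

// Illustrative expansion field: unit outward flow plus a sinusoidal
// oscillation perpendicular to it, phased by distance and time.
function expansionField(p: Vec3, target: Vec3, waveFreq: number, time: number): Vec3 {
    const outward = sub(p, target);
    const dist = Math.hypot(outward.x, outward.y, outward.z);
    const radial = scale(outward, 1 / Math.max(dist, 1e-4));
    // Degenerate when radial is parallel to the up vector; a real implementation would pick a fallback
    const perp = normalize(cross(radial, { x: 0, y: 1, z: 0 }));
    const wobble = Math.sin(waveFreq * dist - time);
    return add(radial, scale(perp, 0.3 * wobble));
}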


Fig 3.2a: Field visualization showing expansion flow pattern

Fig 3.2b: Demo on Spectacles

3.3 Circulation

A 3D swirling vortex that mixes rotation in multiple planes around the target.
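One way to sketch this (axes and mixing weight are assumptions) is to sum rotational flows around two different axes:

// Illustrative circulation field: rotation around Y mixed with rotation around X.
function circulationField(p: Vec3, target: Vec3): Vec3 {
    const r = sub(p, target);
    const aroundY: Vec3 = { x: r.z, y: 0, z: -r.x };   // rotation in the XZ plane
    const aroundX: Vec3 = { x: 0, y: -r.z, z: r.y };   // rotation in the YZ plane
    return add(aroundY, scale(aroundX, 0.5));          // mix the two rotations
}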


Fig 3.3a: Field visualization showing circulation flow pattern

Fig 3.3b: Demo on Spectacles

3.4 Vortex

Rotating cellular patterns with an added spin component based on angular position.
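As a heavily simplified sketch (the project’s formula is certainly more involved), cells can come from sinusoids of each coordinate, with a spin term derived from the angular position:

// Illustrative vortex field: sinusoidal cells plus a tangential spin
// component in the XZ plane, derived from the angular position.
function vortexField(p: Vec3, cellSize: number, spin: number): Vec3 {
    const k = (2 * Math.PI) / cellSize;
    const angle = Math.atan2(p.z, p.x);
    return {
        x: Math.sin(k * p.y) - spin * Math.sin(angle),
        y: Math.sin(k * p.z),
        z: Math.sin(k * p.x) + spin * Math.cos(angle),
    };
}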


Fig 3.4a: Field visualization showing vortex flow pattern

Fig 3.4b: Demo on Spectacles

3.5 Waves

Sinusoidal interference patterns where each axis oscillates based on the other two coordinates.
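A minimal sketch of such an interference field, with assumed frequency and phase handling:

// Illustrative wave field: each axis oscillates based on the other two coordinates.
function wavesField(p: Vec3, freq: number, time: number): Vec3 {
    return {
        x: Math.sin(freq * p.y + time) * Math.cos(freq * p.z),
        y: Math.sin(freq * p.z + time) * Math.cos(freq * p.x),
        z: Math.sin(freq * p.x + time) * Math.cos(freq * p.y),
    };
}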


Fig 3.5a: Field visualization showing wave interference pattern

Fig 3.5b: Demo on Spectacles

3.6 Implementation Workflow

The complete workflow connects a TypeScript component that generates procedural tube geometry with a custom shader that integrates the vector field on the GPU.

The field patterns above are mathematical abstractions. For something more concrete, we can model a magnetic dipole field, the kind you’ve probably seen with iron filings around bar magnets.

Math: Dipole Field & Integration

Each dipole creates a vector field based on its magnetic moment $\mathbf{m}$ and the displacement $\mathbf{r}$ from the dipole:

$$\mathbf{B}(\mathbf{r}) = \frac{3(\mathbf{m} \cdot \hat{\mathbf{r}})\,\hat{\mathbf{r}} - \mathbf{m}}{r^3}$$

The field magnitude falls off as the inverse cube of distance: $|\mathbf{B}| \propto \frac{1}{r^3}$.

To trace flow lines through this field, we use Euler integration: starting from a sample point $\mathbf{p}_0$, we repeatedly query the field and step in its direction:

$$\mathbf{p}_{n+1} = \mathbf{p}_n + \hat{\mathbf{B}}(\mathbf{p}_n) \cdot \Delta s$$

where $\hat{\mathbf{B}}$ is the normalized field direction and $\Delta s$ is the step size. Combining two dipoles produces the characteristic looping field lines.

GPU Implementation

The dipole formula translates directly to GPU code:

vec3 dipoleMagneticField(vec3 point, vec3 dipolePos, vec3 moment) {
    vec3 r = point - dipolePos;
    float dist = length(r);

    if (dist < 0.1) {
        return moment * FieldStrength * 2.0;  // Inside dipole
    } else {
        vec3 rHat = r / dist;                           // r̂ = r/|r|
        float dist3 = dist * dist * dist;               // r³
        float mDotR = dot(moment, rHat);                // m · r̂
        vec3 B = (3.0 * mDotR * rHat - moment) / dist3; // B = (3(m·r̂)r̂ - m) / r³
        return B * FieldStrength;
    }
}

Euler integration runs in the vertex shader, stepping each vertex along the field:

for (int i = 0; i < 64; i++) {
    if (i >= clampedStepIndex) break;
    prevPos = pos;
    pos += getMagneticField(pos) * StepSize;  // p_{n+1} = p_n + B(p_n) · Δs
}

Fig 4.1: Magnetic field visualization showing dipole field computation and integration

Fig 4.2: Demo on Spectacles with interactive magnet positioning

Magnet Physics Script

A separate script applies physical interactions between the two magnet scene objects.

Here are some of its key sections:

Pole alignment that determines whether magnets attract or repel:

// Returns alignment: negative = attracting, positive = repelling
private computeAlignment(): number {
    // Vector from magnet1 to magnet2 (getPosition assumed, mirroring setPosition used below)
    const delta = this.getPosition(this.magnet2).sub(this.getPosition(this.magnet1));
    const direction = delta.normalize();
    const m1 = this.getForwardVector(this.magnet1);  // +X points from S to N pole
    const m2 = this.getForwardVector(this.magnet2);

    const m1FacingM2 = m1.dot(direction);            // Is magnet1's north pointing toward magnet2?
    const m2FacingM1 = m2.dot(direction.uniformScale(-1)); // Is magnet2's north pointing toward magnet1?
    return m1FacingM2 * m2FacingM1;                  // Negative when opposite poles face each other
}

Force computation with inverse cube falloff (matching the dipole field formula) and reference distance normalization for intuitive parameter tuning:

const effectiveDistance = Math.max(distance, this.minDistance);
// Normalize by reference distance (≈ magnet diameter) so force = 1 at that distance
const normalizedDist = effectiveDistance / this.referenceDistance;
const distanceFactor = normalizedDist * normalizedDist * normalizedDist; // r³ falloff

// Boost attraction at close range for satisfying "snap" behavior
const isAttracting = alignment < 0;
const proximityFactor = Math.max(0, 1.0 - effectiveDistance / 2.0);
const closeRangeBoost = isAttracting ? (1.0 + 8.0 * proximityFactor * proximityFactor) : 1.0;

const forceMagnitude = this.forceStrength * alignmentFactor * closeRangeBoost / distanceFactor;
return direction.uniformScale(forceMagnitude * alignment);

Sticking behavior where attracting magnets stick together and move as one until shaken apart:

if (attracting && distance < contactDist) {
    if (!this.isStuck) {
        this.stickOffset = pos2.sub(pos1);  // Record relative position
    }
    this.isStuck = true;

    // When stuck, moving one magnet moves both
    if (manipulating1 && !manipulating2) {
        this.setPosition(this.magnet2, pos1.add(this.stickOffset));
    } else if (manipulating2 && !manipulating1) {
        this.setPosition(this.magnet1, pos2.sub(this.stickOffset));
    }
}

Shake to separate that detects rapid hand acceleration to break stuck magnets apart:

private checkShakeSeparation(currentVel: vec3, lastVel: vec3, isManipulating: boolean): boolean {
    if (!isManipulating) return false;

    const acceleration = currentVel.sub(lastVel).length / dt;  // dt: frame delta time from the update loop
    if (acceleration > this.shakeThreshold) {
        this.isStuck = false;
        // Apply impulse in opposite directions
        this.velocity1 = direction.uniformScale(-this.separationImpulse);
        this.velocity2 = direction.uniformScale(this.separationImpulse);
        return true;
    }
    return false;
}

4.1 Implementation Workflow

The magnetic field implementation uses the same tube mesh generation approach but with a physically-based dipole field formula in the shader:

4.2 Visualization Modes

This implementation supports three visualization modes that can be toggled at runtime: arrows, flow lines, and particles.

Demo showing the three visualization modes

To ensure smooth performance on Spectacles and avoid freezes from unintentionally high geometry counts, I’ve defined a vertex budget (32K vertices per mesh). All procedural geometry settings automatically adapt to fit within this budget.
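The tube layout above makes the budget arithmetic straightforward. The sketch below shows how such an adaptation could work; the numbers and strategy are assumptions, not the project’s actual code:

// Each tube costs pathLength · circleSegments body vertices plus 2 cap centers.
const VERTEX_BUDGET = 32 * 1024;  // 32K vertices per mesh

function maxTubeCount(pathLength: number, circleSegments: number): number {
    const verticesPerTube = pathLength * circleSegments + 2;
    return Math.floor(VERTEX_BUDGET / verticesPerTube);
}

// e.g. 32 length samples × 8 ring segments = 258 vertices per tube → up to 127 tubes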

I’ve also defined Level of Detail (LOD) presets, accessible via the settings panel, that control how much geometry each visualization mode generates.

Among the three visualization modes, flow lines (the Trails preset) are the most expensive due to the many length segments per tube, while particles use minimal geometry (just 2 rings plus caps per tube). Arrows add no flow-animation overhead since they are static.

To give users a visual hint of the active preset, I rendered a 2D version of each vector field with color-mapped direction and magnitude: hue encodes angle, saturation/brightness encodes strength.
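A minimal sketch of this mapping for a 2D field sample (HSV ranges and scaling are assumptions):

// Map a 2D field vector to an HSV color: hue from angle, saturation and
// value from magnitude, so weak regions fade to dark.
function fieldToColor(vx: number, vy: number, maxMagnitude: number): { h: number; s: number; v: number } {
    const hue = (Math.atan2(vy, vx) + Math.PI) / (2 * Math.PI);  // angle normalized to [0, 1]
    const strength = Math.min(Math.hypot(vx, vy) / maxMagnitude, 1);
    return { h: hue, s: strength, v: strength };
}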

Saddle and Double Vortex: examples of fields with color discontinuities at singular points

Getting smooth gradients required careful attention to continuity in the field functions, both spatially (no jumps in color) and temporally (smooth animation).

2D color-mapped visualizations of each vector field preset: Contraction, Expansion, Circulation, Vortex, Waves, and Magnetic

Adding an alpha falloff:

Animated previews with radial alpha falloff: Contraction, Expansion, Circulation, Vortex, Waves, and Magnetic
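The falloff itself can be as simple as an inverted smoothstep on the distance from the texture center (the radii here are assumptions):

// Radial alpha falloff over normalized UV coordinates: opaque near the
// center, fading to fully transparent at the border.
function radialAlpha(u: number, v: number): number {
    const dist = Math.hypot(u - 0.5, v - 0.5);
    const t = Math.min(Math.max((dist - 0.25) / (0.5 - 0.25), 0), 1);
    return 1 - t * t * (3 - 2 * t);  // 1 - smoothstep(0.25, 0.5, dist)
}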

Finally, I packed all frames into a sprite sheet:

Sprite sheets for each preset: Contraction, Expansion, Circulation, Vortex, Waves, and Magnetic

Sprite Rendering Script (Python)

The final shader samples the correct frame based on elapsed time, computing UV coordinates into the grid. It also supports smooth blending between presets.
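The frame-selection arithmetic looks roughly like this sketch (grid size and frame rate are assumed parameters; the real version lives in the GLSL shader below):

// Select the current frame of a cols × rows sprite sheet and remap the
// tile's UVs into that frame's cell.
function spriteFrameUV(u: number, v: number, timeSec: number, cols: number, rows: number, fps: number): [number, number] {
    const frame = Math.floor(timeSec * fps) % (cols * rows);
    const col = frame % cols;
    const row = Math.floor(frame / cols);
    return [(col + u) / cols, (row + v) / rows];
}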

Animated Sprite Sheet Shader (GLSL)

Material graph for the animated sprite sheet shader


Final Words

I hope this exploration of vector fields in AR encourages you to leverage them in your projects, or just make something fun with procedural geometry.

Motion is something I’m very passionate about. Having a framework to visualize it in real-time on AR glasses opens up possibilities I hadn’t considered before.

What I’m most excited about is using vector fields to manipulate data. Thanks to hand and body tracking input, I have a feeling that vector fields could become a full-fledged UI element category for XR applications.

Before getting into these considerations though, I’ll need to conclude my series on AR painting assistants, where I’ll use what I’ve learned from this project to push my experimentation with more fluid and cohesive UI elements.

If you want to try the project yourself, grab the source code or scan the Snapcode at the top to experience it on Spectacles.

