What type of Machine is the C Preprocessor?

Created on Jan. 27, 2014, 7:14 p.m.

About a year ago I wrote an implementation of Brainfuck in the C Preprocessor.

This reopened the discussion of whether the C Preprocessor is Turing complete. The main argument against Turing completeness was that my implementation did not support unbounded recursion. It was therefore quite possible to write a computation that my implementation could not simulate: namely, any computation intended to run indefinitely.

But more precisely, people complained that, because of this, my implementation did not support an infinite tape, and so was not Turing complete.

This seems intuitive. The fact that the C preprocessor cannot recurse infinitely feels like it must be tied to the infinite tape of the Turing Machine. But it was also a little confusing. I hadn't thought about it at the time, but my Brainfuck implementation did have an infinite tape. We can see this by looking at how the basic operations were defined.

#define LIST(...) (__VA_ARGS__)

Memory was stored as a tuple: essentially a whole list of tokens that look like the arguments to a function. Because this list looked like arguments to a function (or macro), we could use the variadic arguments feature of macros to define our basic tuple operations.
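
For example (an illustrative expansion of my own, not taken from the original implementation), a small four-cell memory looks like this:

LIST(0, 1, 0, 0) /* expands to the tuple: (0, 1, 0, 0) */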

But first we need a little bit of groundwork. We start by defining a function EVAL, which runs another preprocessor scan on its arguments, and evaluates them as a whole. We use this to evaluate an expression consisting of a macro name placed next to its arguments.

#define EVAL(...) __VA_ARGS__

Then we can define the CURRY function, which applies a tuple of arguments to a function. This is done by placing the macro name next to the tuple and evaluating.

#define CURRY(M, L) EVAL(M L)

Finally, these can be used to define our familiar HEAD and TAIL functions, which take the first element of a tuple and the rest of it respectively.

#define ARGS_HEAD(X, ...) X
#define ARGS_TAIL(X, ...) (__VA_ARGS__)
#define HEAD(L) CURRY(ARGS_HEAD, L)
#define TAIL(L) CURRY(ARGS_TAIL, L)
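
We can check these behave as expected by tracing a couple of expansions by hand (again, illustrative expansions of my own):

HEAD((1, 2, 3)) /* expands to: 1 */
TAIL((1, 2, 3)) /* expands to: (2, 3) */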

We can also use CURRY to create a CONS function, for appending an element to the front of a list.

#define CONS(X, L) (X, CURRY(EVAL, L))
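
For example (my own illustration), each application of CONS grows the memory by one cell, and the result can itself be consed onto again:

CONS(0, (1, 2, 3)) /* expands to: (0, 1, 2, 3) */
CONS(9, CONS(0, (1, 2, 3))) /* expands to: (9, 0, 1, 2, 3) */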

And that's all there is to it. It should be fairly obvious that at no point are any hard limits or bounds introduced. Provided we can apply CONS to a tuple indefinitely, our memory is infinite too.

But this memory is different to that of a Turing Machine. It is somewhat lazy: if we go over the edge of the tape we need to generate new cells on the fly. We imagine Turing Machine memory as an infinite tape which is simply there, not as a tape that must be generated as the head moves. Perhaps this is the reason why the preprocessor appears to lack an infinite tape.

But, while Turing Machine memory isn't generated on the fly, it isn't exactly random access either. To access the memory at some location we have to step the head towards it one cell at a time, so a memory access still takes a number of operations proportional to the distance between the head and that location. Whether stepping along the tape and reading or writing a cell takes 1 or 10 logical operations doesn't really matter: this on-the-fly generation of memory is essentially implementation overhead. When it comes down to it, I don't believe Turing Machine memory is really any different to our lazy memory.

So maybe the problem is the recursion limit. We know that the preprocessor cannot perform unbounded recursion. If we wished to access memory at location N, we might have to perform f(N) operations to get there, and if f(N) is greater than our upper bound on recursion, we can't do it. But something is odd about this too. It is not that the tape is finite; it is more that we have an infinite tape, but we can't reach all of it, because of a limit on the number of iterations we can perform in one computation.

Just like a Turing Machine, we have an infinite tape, and a fixed number of states the system can be in; it should all be there.

What we are missing is the ability to re-enter old states.

To achieve recursion in the preprocessor, each recursive function is augmented with a recursion depth variable $. This counts the number of times we have recursed, and is used to find a copy of the function we have not yet entered in the computational call graph. For a Turing Machine this would be a bit like duplicating all the states S times, and augmenting each identifier with an extra number s. Each transition from a state augmented with s is then changed to go to the corresponding state augmented with s+1.

This avoids any loops in the state graph, but it does create a troublesome situation when the limit S is exhausted. The machine has no state left to enter, and must halt for lack of transitions.
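
Here is a minimal sketch of that idea in the preprocessor itself (my own illustration, using hypothetical LOOP_ macros rather than code from the Brainfuck implementation). Instead of one looping macro we make S numbered copies, each dispatching only to the next, so the expansion graph contains no loops; when the copies run out, expansion halts:

/* Three numbered copies of a single "state". Each transition goes to the
   copy one depth deeper, so no macro is ever re-entered. */
#define LOOP_1(X) X LOOP_2(X)
#define LOOP_2(X) X LOOP_3(X)
#define LOOP_3(X) X LOOP_4(X)
#define LOOP_4(X) /* depth limit S exhausted: no transition left, so we halt */

LOOP_1(hello) /* expands to: hello hello hello */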

So the C preprocessor is a Turing Machine whose transition table cannot contain loops, not one lacking an infinite tape.

If this machine has a name I'd love to know it, along with what it is equivalent to. So if you know anything about what this might be, don't hesitate to get in contact.
