All credit for the source and/or referenced material goes to the named author(s).

Signals & Fine-Grained Reactivity

An Introduction to Fine-Grained Reactivity by Ryan Carniato

The Primitives

Summary

Signals

“The Observed”

AKA: Observables, Atoms, Subjects, or Refs.

Event emitters at heart - but the difference lies in the way that subscription is managed.

Examples

Function-based (Solid):

const [count, setCount] = createSignal(0);

// read a value
console.log(count()); // 0

// set a value
setCount(5);
console.log(count()); // 5

Proxy-based (Vue):

// Vue
const count = ref(0);
// read a value
console.log(count.value); // 0

// set a value
count.value = 5;

Compiled (Svelte):

// Svelte
let count = 0;
// read a value
console.log(count); // 0

// set a value
count = 5;

Reactions (Effects)

“The Observer”

AKA: Effects, Autoruns, Watches, Computeds.

Wrapped function expressions that run initially, observe the Signals they read, and re-run every time one of those values updates.

Example

console.log("1. Create Signal");
const [count, setCount] = createSignal(0);

console.log("2. Create Reaction");
createEffect(() => console.log("The count is", count()));

console.log("3. Set count to 5");
setCount(5);

console.log("4. Set count to 10");
setCount(10);
1. Create Signal

2. Create Reaction

The count is 0

3. Set count to 5

The count is 5

4. Set count to 10

The count is 10

Feels like magic, but it isn’t! This is the reason that [[Fine-Grained_Reactivity_and_Signals#Signals|Signals]] need getters: whenever the signal’s getter executes inside a wrapped function, the running reaction detects the read and automatically subscribes to that signal.

Derivations (Memos)

AKA: Memos, Computeds, Pure Computeds.

For when we need to represent our data in different ways and use the same [[Fine-Grained_Reactivity_and_Signals#Signals|Signals]] in multiple [[Fine-Grained_Reactivity_and_Signals#Reactions (Effects)|Effects]].

Derivations serve to save work by caching the value of an independently executed expression that is itself trackable.

As they are derived, they are guaranteed to be in sync.

At any point, we can determine their dependencies and evaluate whether they could be stale.

Using [[Fine-Grained_Reactivity_and_Signals#Reactions (Effects)|Effects]] to write to other [[Fine-Grained_Reactivity_and_Signals#Signals|Signals]] might seem equivalent, but in fact it is not, as it cannot bring that guarantee: those Effects are not an explicit dependency of the Signal (as Signals have no dependencies).

Example - Without Derivations

console.log("1. Create Signals");
const [firstName, setFirstName] = createSignal("John");
const [lastName, setLastName] = createSignal("Smith");

/**
	Note here that fullName is a function. In order for the Signals to be read underneath the Effect, we need to defer executing it until the Effect is running.

	If it were simply a value, there would be no opportunity to track it, or for the Effect to re-run.
*/
const fullName = () => {
	console.log("Creating/Updating fullName");
	return `${firstName()} ${lastName()}`;
};

console.log("2. Create Reactions");
createEffect(() => console.log("My name is", fullName()));
createEffect(() => console.log("Your name is not", fullName()));

console.log("3. Set new firstName");
setFirstName("Jacob");
1. Create Signals

2. Create Reactions

Creating/Updating fullName

My name is John Smith

Creating/Updating fullName

Your name is not John Smith

3. Set new firstName

Creating/Updating fullName

My name is Jacob Smith

Creating/Updating fullName

Your name is not Jacob Smith

Sometimes, though, the computational cost of our derived value is expensive. That’s the reason we have a 3rd basic primitive, [[Fine-Grained_Reactivity_and_Signals#Derivations (Memos)|Derivations]].

They act similarly to function memoization, storing intermediate computations as their own Signal.

Example - With Derivations

console.log("1. Create Signals");
const [firstName, setFirstName] = createSignal("John");
const [lastName, setLastName] = createSignal("Smith");

console.log("2. Create Derivation");
const fullName = createMemo(() => {
	console.log("Creating/Updating fullName");
	return `${firstName()} ${lastName()}`;
});

console.log("3. Create Reactions");
createEffect(() => console.log("My name is", fullName()));
createEffect(() => console.log("Your name is not", fullName()));

console.log("4. Set new firstName");
setFirstName("Jacob");
1. Create Signals

2. Create Derivation

Creating/Updating fullName

3. Create Reactions

My name is John Smith

Your name is not John Smith

4. Set new firstName

Creating/Updating fullName

My name is Jacob Smith

Your name is not Jacob Smith

Here, fullName calculates its value immediately on creation and doesn’t re-run its expression when read by [[Fine-Grained_Reactivity_and_Signals#Reactions (Effects)|Reactions]]. When its source [[Fine-Grained_Reactivity_and_Signals#Signals|Signal]] is updated, it does re-run but only once as that change propagates to the Reactions.

Reactive Lifecycle

Fine-grained reactivity maintains the connections between many reactive nodes. On any given change, parts of the graph re-evaluate and can create and/or remove connections.


Frameworks using compilation, such as Marko and Svelte, don’t use the runtime tracking technique discussed below; instead, they statically analyze dependencies. In doing so, they have less control over when reactive expressions re-run - so they may end up over-executing - but there is less overhead for managing subscriptions.


Example - Condition Change

Let’s consider, when a condition changes, what data we use to derive a value:

console.log("1. Create");

/* Signals */
const [firstName, setFirstName] = createSignal("John");
const [lastName, setLastName] = createSignal("Smith");
const [showFullName, setShowFullName] = createSignal(true);

/* Derivation */
const displayName = createMemo(() => {
	if (!showFullName()) return firstName();
	return `${firstName()} ${lastName()}`;
});

/* Reaction */
createEffect(() => console.log("My name is", displayName()));

console.log("2. Set showFullName: false ");
setShowFullName(false);

console.log("3. Change lastName");
setLastName("Legend");

console.log("4. Set showFullName: true");
setShowFullName(true);
1. Create

My name is John Smith

2. Set showFullName: false

My name is John

3. Change lastName

4. Set showFullName: true

My name is John Legend

The important thing here happens at step #3. When lastName changes, nothing gets logged. This is because whenever we re-run a reactive expression, we rebuild its dependencies. Simply put, at the time we change lastName, no one is listening to it.

Is this because the memoization of displayName amounts to the same value, and as such the Effects aren’t “notified”?

The value itself does change, as we confirm when we set showFullName back to true, but nothing gets notified from that change.

This is a safe interaction, as in order for lastName to become tracked again, showFullName must change, and showFullName is itself tracked.

Dependencies are the Signals that a reactive expression reads to generate its value. In turn, these Signals hold the subscriptions of many reactive expressions. When they update, they notify the subscribers that depend on them.

These subscriptions/dependencies are constructed on each execution, and released each time the reactive expression re-runs or when it is finally disposed.

Example - onCleanup Helper & the Subscription Cycle

We can see that timing by using an onCleanup helper.

console.log("1. Create");
const [firstName, setFirstName] = createSignal("John");
const [lastName, setLastName] = createSignal("Smith");
const [showFullName, setShowFullName] = createSignal(true);

const displayName = createMemo(() => {
	console.log("### executing displayName");
	onCleanup(() => console.log("### releasing displayName dependencies"));
	if (!showFullName()) return firstName();
	return `${firstName()} ${lastName()}`;
});

createEffect(() => console.log("My name is", displayName()));

console.log("2. Set showFullName: false ");
setShowFullName(false);

console.log("3. Change lastName");
setLastName("Legend");

console.log("4. Set showFullName: true");
setShowFullName(true);
1. Create

### executing displayName

My name is John Smith

2. Set showFullName: false

### releasing displayName dependencies

### executing displayName

My name is John

3. Change lastName

4. Set showFullName: true

### releasing displayName dependencies

### executing displayName

My name is John Legend

Synchronous Execution

Fine-Grained reactive systems execute their changes synchronously and immediately.

They aim to be “glitch-free”, in that it is never possible to observe an inconsistent state. This leads to predictability, since for any given change, code only runs once.


Inconsistent state can lead to unintended behavior when we can’t trust what we observe to make decisions and perform operations.


Example - Simultaneous Changes

The easiest way to demonstrate how this works is by simultaneously applying 2 changes that feed into a Derivation consumed by a Reaction.

Here, batch is a helper that wraps the update in a transaction, applying the changes only once it finishes executing the expression - used to help us demonstrate.
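A minimal, hypothetical sketch of what such a transaction helper could look like (subscription is reduced to plain callbacks here; this is not the article’s actual implementation):

```javascript
// Hypothetical sketch of a batch()-style transaction helper.
// Writes made inside the transaction still update values immediately,
// but notifications are collected and each subscriber runs only once,
// after the wrapped function finishes.
let batching = false;
const pending = new Set();

function createSignal(value) {
	const subscribers = new Set();
	const read = () => value;
	read.subscribe = (fn) => subscribers.add(fn); // manual subscribe, for the demo
	const write = (next) => {
		value = next;
		if (batching) {
			for (const fn of subscribers) pending.add(fn); // defer until flush
		} else {
			for (const fn of [...subscribers]) fn();
		}
	};
	return [read, write];
}

function batch(fn) {
	batching = true;
	try {
		fn();
	} finally {
		batching = false;
		const toRun = [...pending];
		pending.clear();
		// pending is a Set, so a subscriber of both signals runs once.
		for (const run of toRun) run();
	}
}

const [a, setA] = createSignal(1);
const [b, setB] = createSignal(2);
let runs = 0;
const onChange = () => runs++; // one shared subscriber, like one Effect
a.subscribe(onChange);
b.subscribe(onChange);

batch(() => {
	setA(2);
	setB(3);
});
// runs === 1: both writes applied, a single notification
```

The key design choice is the Set of pending subscribers: no matter how many tracked signals were written during the transaction, each subscriber is flushed exactly once.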

console.log("1. Create");
const [a, setA] = createSignal(1);
const [b, setB] = createSignal(2);

const c = createMemo(() => {
	console.log("### read c");
	return b() * 2;
});

createEffect(() => {
	console.log("### run reaction");
	console.log("The sum is", a() + c());
});

console.log("2. Apply changes");
batch(() => {
	setA(2);
	setB(3);
});
1. Create

### read c

### run reaction

The sum is 5

2. Apply changes

### run reaction

### read c

The sum is 8

Here the code runs top-down through creation; however, the batched update reverses the run/read logs.

Even though the changes to A and B are applied at the same time, we have to start somewhere, so we run A’s subscribers first.

So the Effect runs first but, detecting that C is stale, we immediately run it on read. Everything executes once and evaluates correctly.

While we could probably think of an approach to solve this specific static case in order, we have to keep in mind that in reality, dependencies can change on any run.

Fine-grained reactive libraries use a hybrid push/pull approach to maintain consistency. They are not purely “push” like events/streams, nor purely “pull” like generators.
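A rough sketch of that hybrid idea (an illustration only, not any library’s real algorithm): a write *pushes* a “stale” flag toward the derived value, while the expensive recomputation is *pulled* lazily on the next read.

```javascript
// Push/pull sketch: the derivation c = b * 2, hand-rolled.
let b = 2;          // source value
let stale = true;   // has b changed since we last computed c?
let cached;         // last computed value of c
let computes = 0;   // how many times the derivation actually ran

function setB(next) {
	b = next;
	stale = true; // push: only mark the derivation stale, no work yet
}

function c() {
	if (stale) {
		computes++; // pull: recompute lazily, on read
		cached = b * 2;
		stale = false;
	}
	return cached;
}

setB(3);
setB(4); // two pushes, still zero computations

const value = c(); // the first read does the work, once
// computes === 1, value === 8
```

This is why, in the batched example above, the Effect can run first and still see a correct C: reading a stale derivation triggers its recomputation on the spot.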

Building a Reactive Library from Scratch by Ryan Carniato

Let’s look at building a reactive library ourselves!

Reactivity feels so “magical” because, once put in place, it takes care of itself even under dynamic scenarios. This is the benefit of truly declarative approaches. The implementation doesn’t matter as long as the contract is kept.

Signals

Barebones Implementation

export function createSignal(value) {
	const read = () => value;
	const write = (nextValue) => (value = nextValue);

	return [read, write];
}

Example Usage

const [count, setCount] = createSignal(3);
console.log("Initial Read", count());

setCount(5);
console.log("Updated Read", count());

setCount(count() * 2);
console.log("Updated Read", count());
Initial Read 3

Updated Read 5

Updated Read 10

What’s missing here? Signals are, at heart, event emitters - and this barebones version doesn’t manage any subscriptions yet.

Managing Subscription

const context = [];

function subscribe(running, subscriptions) {
	subscriptions.add(running);
	running.dependencies.add(subscriptions);
}

export function createSignal(value) {
	const subscriptions = new Set();

	const read = () => {
		const running = context[context.length - 1];

		if (running) {
			subscribe(running, subscriptions);
		}

		return value;
	};

	const write = (nextValue) => {
		value = nextValue;

		// Clone the collection of subscribers to avoid running subs added over the course of the current execution run.
		for (const sub of [...subscriptions]) {
			sub.execute();
		}
	};

	return [read, write];
}

When would read() ever be called if there isn’t a Reaction or a Derivation in the global context? How can running be falsy in read()?

Signals can be read without being wrapped in a Reaction or a Derivation, as seen in the top-most example. Reactions and Derivations serve to enable automatic dependency tracking when “behavior” or “functionality” is built on top of Signals.

There are two main things we are managing here.

At the top, there is a global context which will be used to keep track of any running Reactions or Derivations.

In addition, each Signal has its own subscriptions list.

These two things serve as the basis of automatic dependency tracking. A Reaction or Derivation, on execution, pushes itself onto the context stack. It will be added to the subscriptions list of any Signal read during that execution. We also add the Signal’s subscription list to the running node’s dependencies, to help with the cleanup covered in the next section.

Finally, on Signal write, in addition to updating the value, we execute all the subscriptions. We clone the list first so that new subscriptions added in the course of this execution do not affect the current run.

Reactions and Derivations

Let’s address the other half now!

Basic Reaction Implementation

function cleanup(running) {
	for (const dep of running.dependencies) {
		dep.delete(running);
	}

	running.dependencies.clear();
}

export function createEffect(fn) {
	const execute = () => {
		cleanup(running);
		context.push(running);

		try {
			fn();
		} finally {
			context.pop();
		}
	};

	const running = {
		execute,
		dependencies: new Set(),
	};

	execute();
}

What we create here is the object that we push onto the context. It holds the list of dependencies (the Signals the Reaction listens to) and the function expression that we track and re-run.

Every cycle, we unsubscribe the Reaction from all its Signals and clear the dependency list to start anew.

This is why we stored the backlink: it allows us to dynamically create dependencies on each run. Then, we push the Reaction onto the stack and execute the user-supplied function.

Example - Signal & Reaction

console.log("1. Create Signal");
const [count, setCount] = createSignal(0);

console.log("2. Create Reaction");
createEffect(() => console.log("The count is", count()));

console.log("3. Set count to 5");
setCount(5);

console.log("4. Set count to 10");
setCount(10);
1. Create Signal

2. Create Reaction

The count is 0

3. Set count to 5

The count is 5

4. Set count to 10

The count is 10

Basic Derivation Implementation

Adding a simple Derivation isn’t much more involved. It uses mostly the same code from createEffect.

In a real reactive library like MobX, Vue, or Solid, we would build in a push/pull mechanism and trace the graph to make sure we weren’t doing extra work. See Becoming Fully Reactive: An In-Depth Explanation of MobX for more details.

export function createMemo(fn) {
	const [s, set] = createSignal();
	createEffect(() => set(fn()));

	return s;
}

Example - Conditional Rendering

console.log("1. Create");
const [firstName, setFirstName] = createSignal("John");
const [lastName, setLastName] = createSignal("Smith");
const [showFullName, setShowFullName] = createSignal(true);

const displayName = createMemo(() => {
	if (!showFullName()) return firstName();
	return `${firstName()} ${lastName()}`;
});

createEffect(() => console.log("My name is", displayName()));

console.log("2. Set showFullName: false ");
setShowFullName(false);

console.log("3. Change lastName");
setLastName("Legend");

console.log("4. Set showFullName: true");
setShowFullName(true);
1. Create

My name is John Smith

2. Set showFullName: false

My name is John

3. Change lastName

4. Set showFullName: true

My name is John Legend

As we can observe here, because we rebuild the dependency graph on each run, we don’t re-execute the Derivation on lastName updates once we are no longer listening to it.

Conclusion

Those are the basics!

There is more that could be added, should we want to make this “library” production-ready. In its current state, it is not glitch-free, but it contains all of the core pieces. This is how libraries like Knockout worked in the early ’10s.

This was a primer on how auto-tracking in fine-grained reactive libraries work. For a more in-depth breakdown, see SolidJS: Reactivity to Rendering.

The Evolution of Signals in JavaScript by Ryan Carniato

The Early Days of Signals in JavaScript

Multiple early declarative frontend frameworks landed on Signals-like primitives, with slightly different implementations, within a very short period of time.

These all influenced and shaped how we manage state and update the DOM today.

Knockout is of special importance, since its implementation was built directly on what we’ve come to know today as Signals, albeit under different names.

Emerging Patterns

Over those next few years emerged Data Binding, including its two-way form.

This was eventually identified as a footgun: it led to low predictability and consistency, created cycles in the update graph that caused things to update multiple times over, re-ran side effects, and produced massive duplicated processing and execution.

This is what led to React, and its “rebuild the world” solution. By that time, the industry was ready and eager for an alternative. See Hacker Way: Rethinking Web App Development by Jing Chen for more details on the then-novel approach.

Glitch Free

Following React’s mass adoption, the desire of those who preferred reactive models led to the creation of MobX in 2015.

Beyond simply enabling reactive models in React, MobX brought something more: an emphasis on consistency and glitch-free propagation. This means that, for any given change, each part of the system runs only once, synchronously and in the proper order.

MobX traded the typical push-based reactivity found in its predecessors for a push-pull hybrid system. Notifications of changes are pushed out, but the execution of derived state is deferred to where it is read. This was a monumental step forward in making these systems debuggable and consistent.

For more details on modern pull-based reactivity libraries and algorithms, see this explainer on Reactively.

Conquering Leaky Observers

Fine-Grained reactivity is a variation of GoF’s Observer Pattern, a pattern which has a classic problem: A Signal keeps a strong reference to its subscribers, so a long-lived Signal will retain all subscriptions unless manually disposed. This “bookkeeping” gets prohibitively complicated with significant use, especially where nesting is involved.
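A hypothetical illustration of that leak (all names here are made up for the example):

```javascript
// Classic Observer-pattern leak: a long-lived subject keeps strong
// references to its subscribers until someone manually disposes them.
const subscribers = new Set(); // lives for the whole app

function subscribe(fn) {
	subscribers.add(fn);
	return () => subscribers.delete(fn); // the manual "bookkeeping"
}

function mountWidget() {
	const bigState = new Array(10_000).fill(0);
	const unsubscribe = subscribe(() => bigState.length);
	// If we forget to call unsubscribe, the callback (and bigState,
	// captured by its closure) is retained by the app-lifetime
	// `subscribers` Set and can never be garbage-collected.
	return unsubscribe;
}

const dispose = mountWidget();
// subscribers.size === 1 while the widget is "mounted"
dispose();
// subscribers.size === 0; the closure and bigState are now collectable
```

Ownership, described next, automates exactly this `dispose()` call so it can’t be forgotten, even with deep nesting.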

A small, independent library, S.js, would present the answer in 2013. Modeled after digital circuits, where all state change works on clock cycles, it called its state primitive Signals. While not the first to use the name, it’s where the term we use today comes from.

More importantly though, it introduced the concept of reactive ownership. An owner collects all child reactive scopes and manages their disposal whenever the owner itself is disposed or re-executes. The reactive graph starts wrapped in a root owner, and each node then serves as an owner for its descendants. This owner pattern is not only useful for disposal, but also serves as a mechanism to build Provider/Consumer context into the reactive graph.

Scheduling

Vue, in 2014, made its own significant contribution, in that it has had fine-grained reactivity at its core since its very beginning.

While Vue uses a VDOM, reactivity being first-class meant it developed along with the framework, first as an internal mechanism powering its Options API, to now being front and center in the Composition API released in 2020.

Vue took the push/pull mechanism one step further by scheduling when the work gets done. By default, all changes are collected but not processed until the effect queue is flushed on the next microtask.
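A hypothetical sketch of that scheduling idea (not Vue’s actual implementation):

```javascript
// Instead of running an effect synchronously on every write, we queue
// it and flush the whole queue once, on the next microtask.
const queue = new Set(); // a Set de-duplicates repeated triggers
let scheduled = false;

function scheduleEffect(effect) {
	queue.add(effect);
	if (!scheduled) {
		scheduled = true;
		queueMicrotask(() => {
			scheduled = false;
			const effects = [...queue];
			queue.clear();
			for (const run of effects) run();
		});
	}
}

let runs = 0;
const render = () => runs++;

// Three synchronous "writes" trigger the same effect three times...
scheduleEffect(render);
scheduleEffect(render);
scheduleEffect(render);
// ...but nothing has run yet (runs === 0); the queue flushes the
// effect a single time once the current synchronous work is done.
```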

Doesn’t that break the concept of “reactive programming” though? Isn’t the synchronicity of execution a core part of the concept that gets broken by putting a scheduler in the middle?

This scheduling could also be used to do things like keep-alive, that is preserving offscreen graphs without computational cost, and Suspense. Even things like concurrent rendering are possible with this approach, really showing how one could get the best of both worlds of pull and push-based approaches.

Compilation

In 2019, Svelte 3 showed all the power that a compiler brings. In fact, it compiles the reactivity away completely.

The language of reactivity (state, derived state, and effect) gives us everything we need to describe synchronized systems, and it is analyzable.

We can know exactly what changes and where.

The potential for traceability is profound. Marvin Hagemeister, of the Preact core team, says of it that “one of the main reasons why a signals-based approach is better than hooks is that it enables debugging insight which is not possible with hooks - like showing you exactly why a piece of state updated”.

If we know that at compile time, we can ship less JavaScript.

We can also be more liberal with our code loading. This is the foundation of resumability in Qwik and Marko.

Signals into the Future

What Signals offer is a language to describe state synchronization independent of any side effect you’d have it perform.

This is why it has been, and is being, adopted by Vue, Solid, Preact, Qwik, and Angular, and even by some of Rust’s WASM DOM frameworks, such as Leptos and Sycamore.

It is even being considered by React to be used under the hood, which might be fitting as the VDOM for React was always just an implementation detail.

Signals and the language of reactivity seem to be where things are converging, but that wasn’t so obvious from their first outings into JavaScript. Maybe that’s because JS isn’t the best language for them. Ryan goes so far as to suggest that most of the pain we feel in frontend framework design these days comes down to language concerns…

Making the Case for Signals in JavaScript by Ryan Carniato

Signals are more than a performance enabler, and more than a DX improvement too. They’re about flipping the current paradigm.

That is, React’s paradigm: view = fn(state).

Certainly a very powerful mental model for thinking about UIs, but more so it represents an ideal. Something to strive for.

Reality is a lot messier. The underlying DOM is persistent and mutable. Not only would naive re-rendering be prohibitively costly, it would also fundamentally break the experience (inputs losing focus, animations, etc.).

At some point, we need to separate the implementation from the ideal to be able to talk about these things honestly. In that mindset, let’s look at Signals as they are and what they have to offer.

Decoupling Performance from Code Organization

![[Signals_Local_Just_as_Global_State.png]]

As illustrated here, with Signals, state is independent of components.

function Counter() {
	//* Doesn't run on setCount execution...
	console.log("I log once");

	const [count, setCount] = createSignal(0);
	setInterval(() => setCount(count() + 1), 1000);

	return <div>{count()}</div>;
}

Still, a console.log that doesn’t re-execute when a counter updates is a cute trick, but it doesn’t tell the whole story.

The truth is, this behavior persists throughout the whole component tree.

State that originates in a parent component and is used in a child doesn’t cause the parent or the child to re-run; only the part of the DOM that depends on it updates. Whether through prop drilling or the Context API, it is the same thing.

And it isn’t just about the impact of spreading state changes across components, but also about multiple states within the same component.

function MoreRealisticComponent(props) {
	const [selected, setSelected] = createSignal(null);

	return (
		<div>
			<p>Selected {selected() ? selected().name : "nothing"}</p>

			<ul>
				{props.items.map((item) => (
					<li>
						<button onClick={() => setSelected(item)}>
							{item.name}
						</button>
					</li>
				))}
			</ul>
		</div>
	);
}

With Signals, updating the selected state does not cause any execution other than changing the text in that <p>. There is no re-running the list or diffing it. That is even true if the name in one of the rows updates. With Signals, we can directly update the text of that one button.

Isn’t that specific to non-VDOM implementations though? Or even specifically to Solid? What of Vue, which has both Signals through the Composition API and a VDOM? Putting MobX into React doesn’t change the fact that the component as a whole will “rerender”, as is dictated by the VDOM model.

We might be thinking “Okay, it’s fast. But I’ve never had trouble ensuring good performance in something like React”. But the takeaway is more than that.

We no longer need to be concerned with components for optimal execution.

A whole app can be packed into a single component, or split over multiple components, and we get the exact same benefits. We’re then free to break apart components for the sole benefit of code organization and clarity, instead of needing to split components to isolate UI changes or to accommodate the framework’s implementation. This also means that if performance ever becomes an issue, it won’t be due to the components’ structure, an area where refactoring is costly and complicated.

This, as a whole, is not an insignificant benefit to DX.

Separating Dynamic from Static

There have been some conversations, for instance this tweet by Dan Abramov or his longer-form response to Ryan’s article on React’s relationship with Signals, suggesting that this is a bad thing.

Let’s talk about why this is, in fact, an amazing thing.

The Tradeoffs

There is a tradeoff in using a distributed event system like Signals versus something that runs top-down.

While updates will be quicker with the event system, at creation time it carries the additional overhead of setting up subscriptions.

This gets compounded by the fact that the web is a document-oriented interface. Even in SPAs, we will be doing a lot of navigation which involves a lot of creation.

The Optimization

Despite all that, the web platform is aware of this cost and has made it more efficient to create elements in bulk than individually.

Extracting the static parts for mass creation makes a bigger impact than the cost of those subscriptions.

And the benefits don’t stop with the browser. With a Signals-based system, the complexity, size, and execution of our code scale with how interactive our UI is rather than with how many elements are in it.

Sample Situation

Let’s consider an SSR’d page with few interactions. The static parts are server-rendered HTML. We don’t even need their code to make the page interactive, so we can just delete the static parts from the bundle…

Admittedly, this is mostly a performance concern, but it is a very real one that many modern frameworks’ propositions aim to solve, such as the Islands Architecture and RSCs.

Concluding the Defense

Ryan’s position is that, overall, this separation leads to a certain amount of transparency. It makes it easier to explain and reason about what is actually going on.

While less simple than ideal, it makes escape hatches more coherent.

Universalizing the Language of UI

One of the most powerful things about Signals is their impact as a language.

We’re not talking about a “compiled language” here. Signals are completely a runtime mechanism. No magic, just a Directed Acyclic Graph.

While there is convergence in the concepts surrounding State, Derived State, and Effects, not all the mental models, and certainly not all the implementations, line up.

Signals are independent from component or rendering system, and only represent state relationships.

This is unlike something like React’s Hooks, which need additional primitives to describe how to guard execution, like useCallback and React.memo, and concepts like stable references (useRef) to handle the coordination of effects. See more about this on Dan Abramov’s blog, Overreacted, in Making setInterval Declarative with React Hooks and Before you memo().

Additionally, Signals lend to traceability. They give us a way of understanding what updates, and why.

They encourage declarative code; by organizing code around data instead of component flow, we can see what data is driving change.

What about the Tradeoffs?

The most obvious one is that it makes the data itself special, instead of the application of that data. We aren’t dealing with plain objects anymore, but with primitives. This is very similar to Promises or Event Emitters.

We are reasoning about the data flow rather than the control flow.

JS is not a dataflow language, so it’s possible to lose Reactivity. While this is true of any JS UI framework without the aid of tooling or compilation, it is emphasized with Signals, as where you access the value matters.

Ryan calls this the Signal’s Hook Rule. There are consequences to this, as well as a learning curve, and it pushes us to write code a certain way.
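A sketch of what “losing reactivity” looks like, reusing the minimal createSignal/createEffect built earlier in these notes (cleanup and backlinks omitted for brevity):

```javascript
// Minimal tracking core, as in the library-building section.
const context = [];

function createSignal(value) {
	const subscriptions = new Set();
	const read = () => {
		const running = context[context.length - 1];
		if (running) subscriptions.add(running);
		return value;
	};
	const write = (next) => {
		value = next;
		for (const sub of [...subscriptions]) sub.execute();
	};
	return [read, write];
}

function createEffect(fn) {
	const running = {
		execute() {
			context.push(running);
			try {
				fn();
			} finally {
				context.pop();
			}
		},
	};
	running.execute();
}

const [name, setName] = createSignal("John");

// Reading too early: `current` is a plain string, captured once,
// outside of any tracking scope.
const current = name();
let stale;
createEffect(() => (stale = `Hi ${current}`)); // tracks nothing, never re-runs

let fresh;
createEffect(() => (fresh = `Hi ${name()}`)); // reads inside, so it tracks

setName("Julia");
// stale is still "Hi John"; fresh is "Hi Julia"
```

The code looks nearly identical either way; only *where* the getter is called decides whether the dependency exists.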

When using things like Proxies, there are additional caveats: certain mechanisms in JS (like spreading and destructuring) have restricted usage.

This strongly hints at the fact that LWC, even though compiled, uses Proxy-based runtime reactivity.

Another consideration is disposal. Subscriptions link both ways, so if one side is long-lived, it is possible to hold onto memory longer than desired. Modern frameworks are pretty good at handling disposal efficiently and automatically, but this is inherent to Signals’ design.

Finally, there have historically been concerns about large, uncontrollable graphs, including cycles and unpredictable propagation. Thanks to the work that’s been done over the years, these concerns are a thing of the past, and Ryan goes so far as to suggest that these specific concerns are exactly what Signals are best at addressing nowadays.

Conclusion

When you are building with a foundation of primitives, there is a lot you can do.

The exploration into reducing JS load overhead and incremental interactivity is an area that Signals naturally fit into.

And to use Signals to great benefit, you do not need a compiler, not even for templating! We tend to use compilation to make the ergonomics smoother, and Signals are also a great target for compilation, but it certainly isn’t mandatory.

Compilers and language exploration become that much easier when we have efficient building blocks that we can target.

Whether Signals are best suited to be held by the developers or to be low-level primitives for machines, they appear to be an important step in the ever-evolving world of web frontend.

SolidJS: Reactivity to Rendering with Ryan Carniato

SolidJS is a UI rendering library built completely on top of a reactive system: the renderer, the components, and every other aspect of how the library works.

This approach is not only extremely performant, but also leads to really powerful composition patterns. Each reactive primitive is atomic and composable, and is only accountable to the reactive life-cycle.

This means no “Hook Rules”, no this bindings, and no considerations around stale closures.

But it is often unclear how to move from toy examples to an actual implementation. To that end, this article shows how we can build a whole renderer with nothing more than a reactive system: how we can go from the demo below to a full-featured library like Solid.

const Greeting = (props) => (
	<>Hi <span>{props.name}</span></>
);

const App = () => {
	const [visible, setVisible] = createSignal(false);
	const [name, setName] = createSignal("Josephine");

	return (
		<div onClick={() => setName("Geraldine")}>
			{visible() && <Greeting name={name} />}
		</div>
	);
};

render(App, document.body);

Reactive Effects

Reactivity itself is not a system or a solution, but rather a means of modelling a problem. Many problems can be solved with reactivity, but it has its pros and cons; there are no silver bullets here. Reactivity has a real performance cost at creation time, and it has some pitfalls, such as cascading updates.

Let’s look at a simple example in Solid’s syntax:

const [name, setName] = createSignal("John");

createEffect(() => console.log(`Hi ${name()}`));

// Prints "Hi John"

setName("Julia");

// Prints "Hi Julia"

setName("Janice");

// Prints "Hi Janice"

Here we create a simple reactive atom, a Signal, with the value “John”. We then create a side-effect-producing computation that tracks whenever name updates and logs to the console. Each time we set a new name value, that effect re-runs.
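The subscription mechanism described above can be sketched in a few lines of plain JavaScript. This is a deliberately naive illustration of the idea, not Solid’s actual implementation: reading a signal while an effect is running registers that effect as a subscriber.

```javascript
// Toy tracking core: a module-level "current observer" is how the signal
// discovers who is reading it.
let currentObserver = null;

function createSignal(value) {
	const subscribers = new Set();
	const read = () => {
		// Whoever is running right now becomes a subscriber.
		if (currentObserver) subscribers.add(currentObserver);
		return value;
	};
	const write = (next) => {
		value = next;
		// Re-run every computation that read this signal.
		for (const fn of [...subscribers]) fn();
	};
	return [read, write];
}

function createEffect(fn) {
	const execute = () => {
		const prev = currentObserver;
		currentObserver = execute; // track reads during this run
		try {
			fn();
		} finally {
			currentObserver = prev;
		}
	};
	execute(); // effects run once immediately
}

// Reproducing the example above, collecting output instead of logging:
const logs = [];
const [name, setName] = createSignal("John");
createEffect(() => logs.push(`Hi ${name()}`));
setName("Julia");
setName("Janice");
// logs is now ["Hi John", "Hi Julia", "Hi Janice"]
```

Real implementations also handle unsubscription, batching, and disposal, but the core trick is exactly this: the getter consults a hidden “who is currently running” variable.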

In that same vein, rendering to the DOM can be seen as just one more side-effect:

const [name, setName] = createSignal("John");

const el = document.createElement("div");
createEffect(() => (el.textContent = `Hi ${name()}`));

// <div>Hi John</div>

setName("Julia");

// <div>Hi Julia</div>

setName("Janice");

// <div>Hi Janice</div>

In some ways, that’s the whole story. We created a DOM element and wired up the updates. If we wanted to update an attribute or a class, we could do something very similar.

const [selected, setSelected] = createSignal(false);

const el = document.createElement("div");
createEffect(() => (el.className = selected() ? "selected" : ""));

// <div></div>

setSelected(true);

// <div class="selected"></div>

Now, this specific approach wouldn’t work particularly well for actual applications. There are quite a few more things to address before we can call this a renderer.

Composition

This can’t really scale if we can’t modularize the approach. While we create DOM elements and effects to update these elements, eventually we are going to hit a point where we need to conditionally append or remove elements.

const [visible, setVisible] = createSignal(false);

const el = document.createElement("div");
createEffect(() => {
	if (visible()) {
		const text = document.createTextNode("Hi ");
		const el2 = document.createElement("span");

		el2.textContent = "Joseph";
		el.appendChild(text);
		el.appendChild(el2);
	} else {
		el.textContent = "";
	}
});

// <div></div>

setVisible(true);

// <div>Hi <span>Joseph</span></div>

setVisible(false);

// <div></div>

We can even abstract that out to a function - a component of sorts.

function Greeting(props) {
	const text = document.createTextNode("Hi ");
	const el = document.createElement("span");

	el.textContent = props.name;

	return [text, el]; // A fragment...
}

const [visible, setVisible] = createSignal(false);

const el = document.createElement("div");
createEffect(() => {
	if (visible()) {
		el.append(...Greeting({ name: "Joseph" }));
	} else {
		el.textContent = "";
	}
});

// <div></div>

setVisible(true);

// <div>Hi <span>Joseph</span></div>

setVisible(false);

// <div></div>

And this brings us to our first challenge: what if we want the name to change dynamically?

Well, we need to make the name into a Signal so that we can track the change. But this has some repercussions when the greeting is visible. Simply tracking and updating will trigger the whole effect to re-run, which in turn will recreate the component and append the nodes again. We need to avoid this.

Where a VDOM library, like Vue, could just recreate the virtual representation and diff it at will, we on the other hand have a real cost here of creating the DOM nodes. While we could always just replace the content on update, this would be very expensive comparatively.

Libraries that leverage compilation, like Svelte, handle this by basically outputting two functions for every compiled component. A “create” path, and an “update” path. So on create it runs the initial code, and whenever the reactive system triggers it runs the update path instead.

This compiled approach can work well, but it requires more consideration around components, since when executed, a child component is either created, marked for update due to prop changes, or left as-is. This is because a dynamic child’s creation code may still execute under its parent’s update path.

Alternatively, the easiest way to solve this issue, which many reactive systems support naturally, is to nest effects. Since the reactive scope is more or less a stack, it is only the currently running computation that is actually tracking. So we could update our component to:

function Greeting(props) {
	const text = document.createTextNode("Hi ");
	const el = document.createElement("span");

	// Nested effect based on the name props which is now itself a Signal.
	createEffect(() => (el.textContent = props.name()));

	return [text, el]; // A fragment...
}
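The isolation that nesting buys us can be seen with a toy tracking core (again, a naive sketch rather than Solid’s real implementation): the inner effect re-runs on updates while the outer one runs only once. Note that this sketch also exhibits the leak discussed next — re-running the outer effect would create a second inner effect without disposing of the first.

```javascript
let currentObserver = null;

function createSignal(value) {
	const subscribers = new Set();
	const read = () => {
		if (currentObserver) subscribers.add(currentObserver);
		return value;
	};
	const write = (next) => {
		value = next;
		for (const fn of [...subscribers]) fn();
	};
	return [read, write];
}

function createEffect(fn) {
	const execute = () => {
		const prev = currentObserver;
		currentObserver = execute;
		try {
			fn();
		} finally {
			currentObserver = prev;
		}
	};
	execute();
}

let parentRuns = 0;
let childRuns = 0;
const [name, setName] = createSignal("Joseph");

createEffect(() => {
	parentRuns++;
	// Nested effect: only this inner computation subscribes to `name`,
	// because it is the current observer when name() is read.
	createEffect(() => {
		childRuns++;
		name();
	});
});

setName("Josephine");
// parentRuns === 1, childRuns === 2: the update stays in the inner scope
```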

This does have one “gotcha”. The observer pattern, as used by these reactive libraries, has the potential to produce memory leaks. Computations that subscribe to Signals that outlive them are never released as long as the Signal is still in use. Whenever the Signal updates, these Computations will execute again, even if they are not referenced anywhere.

This also has the downside of keeping old DOM element references in closures when it comes to DOM side effects, so we need to manage their disposal. Luckily, this isn’t the hardest problem to solve.

Reactive Roots

If we think about it, every time the parent effect re-runs, we will be re-creating everything created during that function’s execution.

This means that, on creation, we can register all Computations created under that scope the same way we track dependencies.

Then, on re-running or disposal, in the same way we unsubscribe from all dependencies, we dispose of those computations as well.

We can do this mostly transparently from the end consumer as long as we have a way to gather top-level Computations. For this, we need our application to be run within a reactive root:

function Greeting(props) {
	const text = document.createTextNode("Hi ");
	const el = document.createElement("span");

	createEffect(() => (el.textContent = props.name()));

	return [text, el]; // A fragment...
}

const rendered = createRoot(() => {
	const [visible, setVisible] = createSignal(false);
	const [name, setName] = createSignal("Josephine");

	const el = document.createElement("div");

	createEffect(() => {
		if (visible()) {
			el.append(...Greeting({ name }));
		} else {
			el.textContent = "";
		}
	});

	return el;
});

document.body.appendChild(rendered);
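One way to sketch this ownership bookkeeping in plain JavaScript (an illustrative toy, not Solid’s source): each computation records both its signal subscriptions and the child computations created during its run, and cleanup walks both.

```javascript
// The current "owner" both tracks reads and collects child computations.
let currentOwner = null;

function createSignal(value) {
	const subs = new Set();
	const read = () => {
		if (currentOwner) {
			subs.add(currentOwner);
			currentOwner.dependencies.add(subs); // remember for unsubscription
		}
		return value;
	};
	const write = (next) => {
		value = next;
		for (const c of [...subs]) c.execute();
	};
	return [read, write];
}

function cleanup(node) {
	// Unsubscribe from every signal this computation read...
	for (const subs of node.dependencies) subs.delete(node);
	node.dependencies.clear();
	// ...and recursively dispose every computation created during its run.
	for (const child of node.children) cleanup(child);
	node.children.clear();
}

function createEffect(fn) {
	const computation = {
		dependencies: new Set(),
		children: new Set(),
		execute() {
			cleanup(computation); // fresh subscriptions and children each run
			const prev = currentOwner;
			currentOwner = computation;
			try { fn(); } finally { currentOwner = prev; }
		},
	};
	if (currentOwner) currentOwner.children.add(computation);
	computation.execute();
}

function createRoot(fn) {
	const root = { dependencies: new Set(), children: new Set(), execute() {} };
	const prev = currentOwner;
	currentOwner = root;
	try {
		return fn(() => cleanup(root)); // pass a disposer, as Solid does
	} finally {
		currentOwner = prev;
	}
}

let runs = 0;
createRoot((dispose) => {
	const [count, setCount] = createSignal(0);
	createEffect(() => { runs++; count(); });
	setCount(1); // effect re-runs: runs === 2
	dispose();   // releases the effect's subscriptions
	setCount(2); // no longer tracked: runs stays 2
});
```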

Roots also give us the ability to arbitrarily control disposal by injecting themselves as owner.

For Solid, the dispose method is an optional parameter of the createRoot function. This can be useful with more complicated memoization.

let dispose = [];
let mapped = [];
let prevList = [];

onCleanup(() => {
	for (const d of dispose) {
		d();
	}
});

let parent = document.createElement("div");

createEffect(() => {
	const list = signal(); // read the (tracked) list signal

	const nextDispose = [];
	const nextMapped = [];

	for (const [index, item] of list.entries()) {
		const prevIndex = prevList.indexOf(item);

		if (prevIndex > -1) {
			nextMapped[index] = mapped[prevIndex];
			nextDispose[index] = dispose[prevIndex];
			dispose[prevIndex] = null;
		} else {
			createRoot(disposer => {
				nextDispose[index] = disposer;
				nextMapped[index] = createFn(item);
			});
		}
	}

	// cleanup unused nodes skipping holes
	for (const d of dispose) {
		d && d();
	}

	dispose = nextDispose;
	mapped = nextMapped;
	prevList = list;

	// naive replace
	parent.textContent = "";
	parent.append(...mapped);
})

This code is dense and takes some effort to decipher, so let’s unpack it…

Above is a very naive implementation of a reactive map; one that we might use to map over a list of items and turn them into DOM nodes in a view. This effect runs over and over whenever the list changes, but it is careful not to recreate DOM nodes that have been created in previous runs.

Usually, re-running the effect would release all child computations, but because each is created in its own root, we manually control the disposal and only dispose of rows that were removed.

In addition, this example introduces onCleanup, a method to schedule disposal when the parent is disposed of or re-runs. This small tie-in to the reactive execution life-cycle gives us the final piece to manage other side-effects of the reactive system that live outside of the core rendering.

At this point we have most of the tools we need to efficiently render our views. We can:

  1. Create DOM elements and wire up their updates through effects.
  2. Compose that behavior into components, which are just functions.
  3. Control creation and disposal with reactive roots and onCleanup.

However, there are still improvements that can be made to enhance performance and experience.

Reactive Memoization

Derivations are common in reactive libraries as they give us the ability to automatically derive a value from other Signals. In many libraries these are called “computeds”, since they are a pure computation that returns a new value.

But in the context of nested rendering, we can view them a bit differently. When a wrapping effect re-evaluates, these functions don’t re-run; they just return the cached value from their previous run. This is why, in Solid, Ryan refers to them as memos.

While they might seem unnecessary (if a value is being read from an effect anyway, there is no need to wrap it in an additional reactive primitive), they let us do expensive work only once. This is great for things like DOM or component creation.

function MyList() {
	const [list, setList] = createSignal(["Anita", "Andrew", "A.J."]);
	const [visible, setVisible] = createSignal(false);
	const nodes = createMemo(
		map(list, (item) => {
			const li = document.createElement("li");
			li.textContent = item;

			return li;
		})
	);

	const el = document.createElement("ul");
	createEffect(() => {
		if (visible()) {
			el.append(...nodes());
		} else {
			el.textContent = "";
		}
	});

	return el;
}

Let’s imagine map is a function similar to the last example of the previous section that reactively mapped a list of DOM nodes, but instead of appending them, it returns those nodes in a function call.

Without the createMemo, every time visible’s value changes to true, we’d be re-running the function. While it might not find any differences, and thus not create any new DOM nodes, it would still iterate over that list and do all the lookups and comparisons.

Essentially the equivalent of a VDOM-based framework’s “render” phase?

Now, instead, whenever visible changes to true and nodes is called, it just returns the results of the last run. It is only when list changes that the more expensive routine runs again.
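A toy createMemo can be built directly on top of signals and effects (a sketch of the concept, not Solid’s implementation, and without the equality-comparator option): the expensive function runs inside its own effect and writes its result into a signal that readers track instead.

```javascript
let currentObserver = null;

function createSignal(value) {
	const subs = new Set();
	const read = () => {
		if (currentObserver) subs.add(currentObserver);
		return value;
	};
	const write = (next) => {
		value = next;
		for (const fn of [...subs]) fn();
	};
	return [read, write];
}

function createEffect(fn) {
	const execute = () => {
		const prev = currentObserver;
		currentObserver = execute;
		try { fn(); } finally { currentObserver = prev; }
	};
	execute();
}

function createMemo(fn) {
	const [value, setValue] = createSignal();
	// The expensive work runs inside its own effect...
	createEffect(() => setValue(fn()));
	// ...and readers just track the cached signal.
	return value;
}

let computations = 0;
const [list, setList] = createSignal(["Anita", "Andrew"]);
const upper = createMemo(() => {
	computations++;
	return list().map((n) => n.toUpperCase());
});

upper(); // reads the cache
upper(); // reads again: no recomputation
setList(["A.J."]); // only now does the mapping re-run
// computations === 2
```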

Returning to our original example, let’s consider what happens if we use a condition instead of a simple boolean:

const rendered = createRoot(() => {
	const [count, setCount] = createSignal(0);
	const [name, setName] = createSignal("Josephine");

	const el = document.createElement("div");

	createEffect(() => {
		if (count() > 5) {
			el.append(...Greeting({ name }));
		} else {
			el.textContent = "";
		}
	});

	return el;
});

document.body.appendChild(rendered);

Every time count changes, we re-run the effect. While it’s under 6, we aren’t doing much, but past the threshold we’ll keep recreating the child component and those DOM nodes.

A more interesting use of memos is when they are configured to only notify when their value changes - then they can be used in the exact opposite way. They serve as a powerful tool to isolate cheaper calculations that are nested inside more expensive computations that we don’t wish to re-run unless things have actually changed.

const rendered = createRoot(() => {
	const [count, setCount] = createSignal(0);
	const [name, setName] = createSignal("Josephine");

	// memo with equality comparator
	const visible = createMemo(
		() => count() > 5,
		undefined,
		(a, b) => a === b
	);

	const el = document.createElement("div");

	createEffect(() => {
		if (visible()) {
			el.append(...Greeting({ name }));
		} else {
			el.textContent = "";
		}
	});

	return el;
});

document.body.appendChild(rendered);

This more or less gets us back to the original behavior where only when count passes the threshold, and the result changes from false to true, or vice-versa, do we re-run our effect.

Components

So what is a Component in a system like this?

Well, we’ve already seen them! Simply a function.

This pattern of composing reactive primitives in the same way one composes Hooks is all we really need. onCleanup gives us the ability to handle life cycles.

All a component is, is a factory function that generates DOM nodes that are tied to state through function closures or effectful functions. But there are a few other considerations here…

Reactive Isolation

When we first looked at making our Greeting component update its name dynamically, we thought about doing the following, but it had the side effect of recreating our component each time:

function Greeting(props) {
	const text = document.createTextNode("Hi ");
	const el = document.createElement("span");

	el.textContent = props.name(); // reactive access will be tracked upstream.

	return [text, el]; // A fragment.
}

We should consider protecting against that. Most reactive libraries have an ignore or untracked function. In Solid, it’s called sample.

This function creates a new scope where reactive Signals are not tracked.

We use this as a way to ensure access outside of our effects and memos do not trigger upstream re-rendering.

Wrapping our components in sample is definitely a prudent precaution. It also lets us safely access reactive variables outside of an effect when we intentionally don’t want them to be dynamic.
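A sample helper is simple to sketch on top of a minimal tracking core (a toy illustration, not Solid’s implementation): it temporarily clears the current observer so reads inside it create no subscriptions.

```javascript
let currentObserver = null;

function createSignal(value) {
	const subs = new Set();
	const read = () => {
		if (currentObserver) subs.add(currentObserver);
		return value;
	};
	const write = (next) => {
		value = next;
		for (const fn of [...subs]) fn();
	};
	return [read, write];
}

function createEffect(fn) {
	const execute = () => {
		const prev = currentObserver;
		currentObserver = execute;
		try { fn(); } finally { currentObserver = prev; }
	};
	execute();
}

function sample(fn) {
	const prev = currentObserver;
	currentObserver = null; // reads in here subscribe to nothing
	try { return fn(); } finally { currentObserver = prev; }
}

let runs = 0;
const [tracked, setTracked] = createSignal(1);
const [untracked, setUntracked] = createSignal(2);

createEffect(() => {
	runs++;
	tracked();                 // tracked: updates to `tracked` re-run us
	sample(() => untracked()); // untracked: updates to `untracked` do not
});

setUntracked(3); // runs stays 1
setTracked(4);   // runs becomes 2
```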

Not sure of applicability to any given situation here?

Doesn’t this break the idea that reactivity and automatic tracking are implementation details of the reactive system that a developer never has to think about? Isn’t that like tailoring component boundaries to the renderer, as React imposes?

Universal Props

What if the consumer of our Greeting component doesn’t have a need for a dynamic name, and just passes a plain string? We don’t want to be checking everywhere whether props passed are a function or not. What if we wanted to use more modern accessors, like Proxies?

One approach that many libraries do is encourage checking with an isObservable function, but this still requires consideration.

An approach that wouldn’t require the component author to worry about this at all would be to regulate the props object itself.

Simply mapping wrapped functions to getters on the props allows universal access. Let’s consider:

const props1 = {
	name: "Jacob",
};

const [name, setName] = createSignal("Jacob");
const props2 = {
	get name() {
		return name();
	},
};

function Greeting(props) {
	const text = document.createTextNode("Hi ");
	const el = document.createElement("span");

	createEffect(() => (el.textContent = props.name));

	return [text, el]; // A fragment.
}

Greeting(props1);

// <div>Hi <span>Jacob</span></div>

Greeting(props2);

// <div>Hi <span>Jacob</span></div>

The component writer decides if props.name is to be used dynamically, but accesses it the same way. The consumer passes in props in a consistent way.

Now, we might think we could avoid creating that effect altogether when we know the prop isn’t dynamic. In fact, we can tell after the first execution: if no subscriptions were made, the effect can never update, so it can be removed.

Still, wrapping seems like quite a bit of work. But we can accomplish that (and the sample wrapping) with a helper. Either explicitly, or by detecting functions, we can transform our props and call our component as desired.

function dynamicProperty(props, key) {
	const src = props[key];

	Object.defineProperty(props, key, {
		get() {
			return src();
		},
		enumerable: true,
	});
}

function createComponent(Comp, props, dynamicKeys) {
	if (dynamicKeys) {
		for (let i = 0; i < dynamicKeys.length; i++) {
			dynamicProperty(props, dynamicKeys[i]);
		}
	}

	return sample(() => Comp(props));
}

Dynamic Components

If our pattern is to create real DOM nodes and effects and return those nodes, one might wonder how we can ever return something that changes without having access to the parent.

With any runtime-function-based creation method, such as HyperScript or React.createElement, things get executed inside out. In other words, we generally finish creating the children before the parent.

The answer to this, like everything else as we will soon see, is to lazy evaluate. Simply returning a function gives control back to the parent as to when to create, which is incredibly powerful.

// conditional component that renders props.children
// when props.test === true
function iff(props) {
	const cond = createMemo(
		() => props.test,
		undefined,
		(a, b) => a === b
	);

	return () => (cond() ? props.children : undefined);
}

iff({ test: () => count() > 5, children: () => Greeting({ name }) });

Of course, this means that our el.append won’t hold up any longer, so let’s look at how we put this all together.

Templating

Right about now, we actually have pretty much all we need to wire up some performant reactive views by hand, but it’s still a lot of work…

At this point, we could probably just use plain old JS and wire up these examples easy enough.

So the final piece is templating to make our lives easier, so we don’t have to manually write all this code. There are a few options:

  1. We can wrap all element and Component creation in a HyperScript h function and let it determine from the input which code path to run in a web of iteration and conditionals, for a purely runtime approach.
  2. We can, at runtime, analyze a string or Tagged Template Literal to use dynamic code generation to create code that resembles the examples above.
  3. We can use a Custom Parser or JSX templates at compile time to generate code similar to what we’ve seen so far.

Solid supports all 3, but they each have their own tradeoffs.

The first is definitely the simplest, but will always be worse than other optimized runtime-only approaches, since we end up doing all the same things but pay a higher creation cost. Nothing can be inferred from this approach, as we only realize structure as functions execute. Also, since it’s only JS, we end up having to write more ourselves.

The second always has limitations. Using strings, we have a restrictive DSL, especially for expressions, unless we bring in our own sophisticated parser - which in turn costs bytes. Tagged Template Literals put expression execution out in the open, so we still have to be careful to wrap our own expressions.

For this reason, a custom DSL, or JSX, is highly desirable, because through analysis we can generate almost verbatim the code in these examples. We can also automatically handle identifying and wrapping dynamic expressions, and we can detect which code is used to selectively import it - effectively enabling tree-shaking. This approach is both the smallest and the fastest.

But we’re not going to walk through creating a Babel plugin here. Instead, we’ll look at the last few helpers necessary to support all these approaches.

Insert

The first is how we insert content. As mentioned, element.append won’t hold up anymore. Things are definitely more complicated when there are ranges under the same parent, but we’ll keep the code examples simple here.

We can insert text, a node, a function, or an array of those. Text and Nodes are pretty simple. We can just replace what is there with the new value.

function insert(parent, value, current) {
	if (value === current) {
		return current;
	}

	const t = typeof value;

	if (t === "string" || t === "number") {
		if (t === "number") {
			value = value.toString();
		}

		current = parent.textContent = value;

		/* ... Handle functions and arrays ... */
	} else if (value instanceof Node) {
		if (Array.isArray(current)) {
			parent.textContent = "";
			parent.appendChild(value);
		} else if (current == null || current === "") {
			parent.appendChild(value);
		} else {
			parent.replaceChild(value, parent.firstChild);
		}

		current = value;
	} else {
		console.warn(`Skipped inserting ${value}`);
	}

	return current;
}

However, functions and arrays - mostly because they can contain functions - are trickier.

Arrays do need to be reconciled, and there are a number of algorithms to do so out there. Since this is a piece that all rendering approaches (VDOM, Single Pass Reconciling, or Reactive) have in common, we won’t cover it here.

But functions are really the key to putting this all together. As mentioned earlier, most runtime techniques execute inside-out more or less.

VDOM libraries don’t care since after they create the Virtual DOM, they do a second pass to diff. Single Pass Reconcilers tend to put heavy boundaries on components so they can break apart execution and as a result have clear top-down “anchor points”.

But for Reactivity that needs to run under a scope, we need a different way. The approach Solid uses is recursive reactive layering.

Let’s consider that the function part of the insert function looks like:

// at top of function:
while (typeof current === "function") {
	current = current();
}

// in the conditional
if (t === "function") {
	createEffect(() => (current = insert(parent, value(), current)));

	return () => current;
}

If we pass a function in, it creates an effect that tracks its own child insert. In doing so, regardless of what the function returns, it knows how to handle inserting the new value.

Where it gets interesting is: what if that function also returns a function? We end up nesting effects, isolating their updates from each other like we did earlier, and executing them in top-down order. So, no matter how many nested dynamic components are stacked, each will only re-evaluate at its level and downwards.

Arrays with dynamic parts work similarly except we attempt to flatten the values at each level into a single array. This is where memos are especially useful, since if a layer updates due to one branch of the fragment, you don’t want to re-evaluate the others necessarily.

At the deepest layer, where all values are resolved, we can then diff with the DOM and apply our changes.

Spread

This is the other runtime method with some complexity.

While named properties that are passed can be analyzed, spreads have to be done at runtime which means they are always dynamic in some sense.

We loop over a long set of conditionals that perform various updates, all wrapped in an effect.

function spread(node, props) {
	let prevProps = {};

	createEffect(() => {
		let info;
		let p = props();

		for (const prop in p) {
			if (prop === "children") {
				insert(node, p.children);

				continue;
			}

			const value = p[prop];

			if (value === prevProps[prop]) {
				continue;
			}

			if (prop === "style") {
				style(node, value, prevProps[prop]);
			} else if (prop === "ref") {
				value(node);
			} else if ((info = Attributes[prop])) {
				if (info.type === "attribute") {
					node.setAttribute(prop, value);
				} else {
					node[info.alias] = value;
				}
			} else if (prop.indexOf("-") > -1 || prop.indexOf(":") > -1) {
				node.setAttribute(
					prop.replace(/([A-Z])/g, (g) => `-${g[0].toLowerCase()}`),
					value
				);
			} else {
				node[prop] = value;
			}
		}

		prevProps = p;
	});
}

There is a helper here to handle diffing style objects, and we use insert to handle children. There is a lookup for known attribute names, like class or for, to properly set them.

In the case of compiled approaches like JSX, unless the end-user spreads on HTMLElements, we do not need to include this code. But with what we have, it’s pretty easy to make a simple HyperScript h function.

function h(...args) {
	let e;

	function item(l) {
		const type = typeof l;

		if (l == null) {
			return;
		} else if (type === "string") {
			if (!e) {
				// create element tag
				e = document.createElement(l);
			} else {
				//create child text node
				e.appendChild(document.createTextNode(l));
			}
			// simple non-string values
		} else if (
			type === "number" ||
			type === "boolean" ||
			l instanceof Date ||
			l instanceof RegExp
		) {
			e.appendChild(document.createTextNode(l.toString()));
			// insert element or array
		} else if (l instanceof Element || Array.isArray(l)) {
			insert(e, l);
			// spread element attributes
		} else if (type === "object") {
			spread(e, l);
		} else if (type === "function") {
			// component
			if (!e) {
				let props = {};
				let dynamic = [];
				let next = args[0];

				// grab props object if present
				if (
					typeof next === "object" &&
					!Array.isArray(next) &&
					!(next instanceof Element)
				) {
					props = args.shift();
				}

				// test for dynamic expressions
				for (const k in props) {
					if (typeof props[k] === "function") {
						dynamic.push(k);
					}
				}

				// handle children
				props.children = args.length > 1 ? args : args[0];

				if (
					props.children &&
					typeof props.children === "function" &&
					!props.children.length
				) {
					dynamic.push("children");
				}

				// create the component
				e = createComponent(l, props, dynamic);
				args = [];

				// dynamic function expression
			} else {
				insert(e, l);
			}
		}
	}

	while (args.length) {
		item(args.shift());
	}

	// return element
	return e;
}

That’s more or less it. Using insert, spread, and createComponent, we have what we need to finish our template DSL.

And now, we can update our example to HyperScript and add a click handler for good measure:

function Greeting(props) {
	return ["Hi ", h("span", () => props.name)];
}

const rendered = createRoot(() => {
	const [visible, setVisible] = createSignal(false);
	const [name, setName] = createSignal("Josephine");

	return h(
		"div",
		{ onclick: () => setName("Geraldine") },
		() => visible() && h(Greeting, { name })
	);
});

document.body.appendChild(rendered);

Not exactly the JSX as seen in the beginning of the article, but more or less the same thing. We’d need to get into compilation which seems a good topic for another day…

Wrap Up

We’ve created a reactive renderer with a runtime-only HyperScript template DSL!

It’s been a lot of pattern matching, breaking things apart, and setting up safeguards for efficient rendering.

The code in this article won’t all just piece together and work on its own either. We’ve cut a few places for simplicity and skipped any optimizations, but we’ve covered all of the core pieces.

Even compiled approaches like Solid’s JSX and Svelte have similar code and tackle the same problems. They are just able to optimize more efficiently; they can detect reactive expressions, identify certain expression grammar, and group instructions in the most efficient way.

useSignal() is the Future of Web Frameworks by Misko Hevery

A Signal is a way to store the state of our application, similar to React’s useState(), but there are some key differences that give Signals an edge.

What’s a Signal?

![[useSignal_What_Is_a_Signal.png]]

The key difference between Signals and State is that Signals return a getter and a setter, whereas non-reactive systems return a value (and a setter).

Some reactive systems return the getter/setter together, while others return them as two separate references. The implementation details don’t matter here - the idea is the same.

State vs State

The issue is that the word State conflates two separate concepts.

Why is returning a getter better than returning a value?

Because by returning the getter, we can separate the passing of state-reference from the reading of the state-value.
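This distinction is easy to demonstrate in plain JavaScript, with no framework involved (makeState here is a made-up helper for illustration): passing a value hands over a frozen snapshot, while passing a getter hands over a live reference.

```javascript
// makeState is a hypothetical helper: it returns a getter (state-reference)
// and a setter, mirroring the createSignal tuple shape.
function makeState(initial) {
	let value = initial;
	return [() => value, (next) => { value = next; }];
}

const [getCount, setCount] = makeState(0);

const snapshot = getCount(); // state-value: a copy, frozen at 0
const reference = getCount;  // state-reference: always reads current state

setCount(5);

// snapshot === 0   (stale: the value was copied at read time)
// reference() === 5 (live: the read happens now)
```

A reactive system exploits exactly this: because consumers hold the reference and read through it, the system can observe where reads happen and subscribe those locations.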

Let’s look at this SolidJS example:

export function Counter() {
	const [getCount, setCount] = createSignal(0);

	return (
		<button onClick={() => setCount(getCount() + 1)}>{getCount()}</button>
	);
}

Looks the same to me…

The above explains how Signals are different from the good old state, but not why we should care.

Signals are reactive! This means that they need to keep track of who is interested in the state (subscriptions) and, if the state changes, notify the subscribers.

To be reactive, Signals must collect who is interested in the Signal’s value. They gain this information by observing the context in which the state-getter gets invoked. By retrieving the value from the getter, we are telling the Signal that this location is interested in the value. If the value changes, this location needs to be re-evaluated. In other words, invoking the getter creates a subscription.

This is why passing the state-getter rather than the state-value is important. The passing of state-value does not give the signal any information about where the value is actually used. This is why distinguishing between state-reference and state-value is so important in Signals.

For comparison, let’s look at the same example in Qwik.

export function Counter() {
	const count = useSignal(0);

	return <button onClick$={() => count.value++}>{count.value}</button>;
}

We notice that the getter/setter has been replaced with a single object that has a .value property, which represents the getter/setter. While the syntax is different, the inner workings remain the same.

Importantly, when the button is clicked and the value is incremented, the framework only needs to update the text node from 0 to 1. It can do that because, during the initial rendering of the template, the Signal learned that count.value is accessed only by the text node. Therefore, it knows that if the value of count changes, it only needs to update that text node and nothing else.

Shortcomings of useState()

Let’s look at how React uses useState() and its shortcomings.

export function Counter() {
	const [count, setCount] = useState(0);

	return <button onClick={() => setCount(count + 1)}>{count}</button>;
}

React’s useState() returns a state-value. This means that useState() has no idea how the state-value is used inside the component or the application.

The implication is that once we notify React of a state change through a call to setCount(), React has no idea which part of the page has changed, and therefore must re-render the whole component. This is computationally expensive.

useRef() does not rerender

React has useRef(), which is similar to useSignal(), but it does not cause the UI to re-render. This React example looks very similar to useSignal(), but it will not work…

export function Counter() {
	const count = useRef(0);

	return <button onClick={() => count.current++}>{count.current}</button>;
}

useRef() is used exactly like useSignal() to pass a reference to the state rather than the state itself. What useRef() lacks is subscription tracking and notification.

The nice thing is that in Signal-based frameworks, useSignal() and useRef() are the same thing. useSignal() can do what useRef() does plus subscription tracking.

This further simplifies the API surface of the framework.

useMemo() built-in

Signals rarely require memoization because they do the least amount of work out of the box.

Let’s consider this Qwik example of two counters and two children components.

export function Counter() {
	console.log("<Counter />");

	const countA = useSignal(0);
	const countB = useSignal(0);

	return (
		<div>
			<button onClick$={() => countA.value++}>A</button>
			<button onClick$={() => countB.value++}>B</button>
			<Display count={countA.value} />
			<Display count={countB.value} />
		</div>
	);
}

export const Display = component$(({ count }: { count: number }) => {
	console.log(`<Display count={${count} />`);

	return <div>{count}!</div>;
});

In the above example, only the text node of one of the two Display components will be updated. The component whose text node doesn’t get updated will never print after the initial render.

# Initial render output
<Counter />
<Display count={0} />
<Display count={0} />

# Subsequent render on click
(blank)

We actually cannot achieve the same in React because, at the very least, one component has to re-render.

Let’s take a look at how to memoize components in React to minimize the re-rendering.

export default function Counter() {
	console.log("<Counter />");

	const [countA, setCountA] = useState(0);
	const [countB, setCountB] = useState(0);

	return (
		<div>
			<button onClick={() => setCountA(countA + 1)}>A</button>
			<button onClick={() => setCountB(countB + 1)}>B</button>
			<MemoDisplay count={countA} />
			<MemoDisplay count={countB} />
		</div>
	);
}

export const MemoDisplay = memo(Display);

export function Display({ count }: { count: number }) {
	console.log(`<Display count={${count}} />`);

	return <div>{count}</div>;
}

But even with memoization…

# Initial render output
<Counter />
<Display count={0} />
<Display count={0} />

# Subsequent render on click
<Counter />
<Display count={1} />

And without memoization at all, we would see:

<Counter />
<Display count={0} />
<Display count={0} />

# Subsequent render on click
<Counter />
<Display count={1} />
<Display count={0} />

To be fair, isn’t that more due to React using a VDOM and components as “rendering boundaries”, rather than a direct consequence of non-Signal-based state management? How do Vue and Preact behave, each with both a VDOM and Signals?

Ryan Carniato explained in a stream on reactivity how Preact’s Signals serve as a “notification” mechanism for the VDOM to know which components to re-render, but showed that Signals don’t prevent the consequences of a component-based VDOM approach: the components still fully re-run. In essence, Signals enable, but do not guarantee, fine-grained reactivity.

When it comes to Vue, they currently use a compiler-backed approach to achieve fine-grained reactivity, and are looking into a Solid-inspired non-VDOM runtime alternative called “Vapor mode”. All of this is explored in more detail [[#Reactivity in Vue|here]].

That is a lot more work than what Signals have to do, and this is why Signals work as if we memoized everything.

Prop Drilling

Let’s take a common example of implementing a shopping cart, implemented in React.

export default function App() {
	console.log("<App />");

	const [cart, setCart] = useState([]);

	return (
		<div>
			<Main setCart={setCart} />
			<NavBar cart={cart} />
		</div>
	);
}

export function Main({ setCart }) {
	console.log("<Main />");

	return (
		<div>
			<Product setCart={setCart} />
		</div>
	);
}

export function Product({ setCart }) {
	console.log("<Product />");

	return (
		<div>
			<button onClick={() => setCart((cart) => [...cart, "product"])}>
				Add to Cart
			</button>
		</div>
	);
}

export function NavBar({ cart }) {
	console.log("<NavBar />");

	return (
		<div>
			<Cart cart={cart} />
		</div>
	);
}

export function Cart({ cart }) {
	console.log("<Cart />");

	return <div>Cart: {JSON.stringify(cart)}</div>;
}

The state of the cart is usually lifted to the highest common parent between the buy button and where the cart is rendered.

Because the buy button and the cart are far apart in the DOM, this often sits very close to the top of the component render tree. In our case, we call it the “common ancestor” component.

The common ancestor component has two branches:

  1. One which drills the setCart function through many layers of components until it reaches the buy button.
  2. The other drills the cart state through many layers of components until it reaches the component which renders the cart.

The problem is that every time we click the buy button, most of the component tree has to re-render. This leads to an output similar to this:

# "buy" button clicked
<App />
<Main />
<Product />
<NavBar />
<Cart />

If we do use memoization, then we can avoid the setCart prop-drilling branch, but not the cart prop-drilling branch, so the output would still look like this:

# "buy" button clicked
<App />
<NavBar />
<Cart />

With Signals, the output is:

# "buy" button clicked
<Cart />

This greatly reduces the amount of code that needs to execute.
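A framework-agnostic sketch of why only the cart re-renders (the signal/effect helpers here are hypothetical, not any specific framework's API): only the code that *reads* the cart signal subscribes to it, so only the “Cart” step re-runs on update.

```javascript
let activeEffect = null;

function signal(initial) {
	const subs = new Set();
	const read = () => {
		if (activeEffect) subs.add(activeEffect);
		return initial;
	};
	const write = (next) => {
		initial = next;
		subs.forEach((fn) => fn());
	};
	return [read, write];
}

function effect(fn) {
	activeEffect = fn;
	fn();
	activeEffect = null;
}

const rendered = [];
const [cart, setCart] = signal([]);

// <App />, <Main />, <Product />, and <NavBar /> run once; none of them
// reads `cart`, so none of them subscribes. They only pass references down.
rendered.push("<App />", "<Main />", "<Product />", "<NavBar />");

// Only <Cart /> reads the signal, so only it subscribes.
effect(() => rendered.push(`<Cart items=${cart().length} />`));

// Clicking "Add to Cart" updates the signal...
setCart([...cart(), "product"]);
// ...and only the <Cart /> effect re-runs:
// rendered ends with "<Cart items=0 />", "<Cart items=1 />"
```

The intermediate components were never subscribers in the first place, which is why no memoization is needed to skip them.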

Framework Support

Now, Signals are not new; they’ve existed in KnockoutJS and probably other frameworks before then.

What’s new is the DX surrounding them, which has greatly improved through the use of compilation and deep integration with JSX, which makes them a pleasure to use.

Conclusion

A Signal is a way to store state in an application, similar to useState() in React. However, the key difference is that Signals return a getter and a setter, whereas non-reactive systems return only a value and a setter.

It is important because Signals are reactive, meaning they need to keep track of who is interested in the state and notify subscribers of state changes. This is achieved by observing the context in which the state-getter is invoked, which creates a subscription.

In contrast, useState() in React returns only the state-value, meaning it has no idea how the state-value is used and must re-render the whole component subtree in response to state changes.

In recent years, Signals have seen vast DX improvements, which make them no harder to use today than more “traditional” systems.

Rendering & Reactivity in Vue

Sources

How Reactivity Works in Vue

In JS, there is no built-in mechanism to track the reading and writing of local variables. What we can do, though, is intercept the reading and writing of object properties.

There are two ways of intercepting property access in JS: getters/setters and Proxies.

Vue2 used getters/setters exclusively due to browser support limitations.

In Vue3, Proxies are used for reactive objects and getters/setters are used for refs.

Here’s some pseudo-code that illustrates how they work:

function reactive(obj) {
	return new Proxy(obj, {
		get(target, key) {
			track(target, key);

			return target[key];
		},
		set(target, key, value) {
			target[key] = value;
			trigger(target, key);
		},
	});
}

function ref(value) {
	const refObject = {
		get value() {
			track(refObject, "value");

			return value;
		},
		set value(newValue) {
			value = newValue;
			trigger(refObject, "value");
		},
	};

	return refObject;
}

This interception layer also explains some of the limitations of reactive objects - for example, destructuring a property into a local variable disconnects it from the reactivity system.

Inside track(), we check whether there is a currently running effect. If there is one, we lookup the subscriber effects (stored in a Set) for the property being tracked, and add the effect to the Set:

/* This will be set right before an effect is about to be run.
	We'll deal with this later */
let activeEffect;

function track(target, key) {
	if (activeEffect) {
		const effects = getSubscribersForProperty(target, key);
		effects.add(activeEffect);
	}
}

Effect subscriptions are stored in a global WeakMap<target, Map<key, Set<effect>>> data structure. If no subscribing effects Set was found for a property (tracked for the first time), it will be created. This is what the getSubscribersForProperty() function does.

Inside trigger(), we again lookup the subscriber effects for the property, but this time we invoke them instead:

function trigger(target, key) {
	const effects = getSubscribersForProperty(target, key);
	effects.forEach((effect) => effect());
}

Now, let’s circle back to the whenDepsChange() function:

function whenDepsChange(update) {
	const effect = () => {
		activeEffect = effect;
		update();
		activeEffect = null;
	};
	effect();
}

It wraps the raw update function in an effect that sets itself as the current active effect before running the actual update. This enables track() calls during the update to locate the current active effect.

At this point, we have created an effect that automatically tracks its dependencies, and re-runs whenever a dependency changes. We call this a Reactive Effect.
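The pieces above can be wired into one runnable sketch. getSubscribersForProperty() is implemented here with the WeakMap<target, Map<key, Set<effect>>> structure described earlier; everything else follows the pseudo-code above.

```javascript
let activeEffect;
const targetMap = new WeakMap();

// Lazily create the Map/Set layers on first track of a property.
function getSubscribersForProperty(target, key) {
	let depsMap = targetMap.get(target);
	if (!depsMap) targetMap.set(target, (depsMap = new Map()));
	let effects = depsMap.get(key);
	if (!effects) depsMap.set(key, (effects = new Set()));
	return effects;
}

function track(target, key) {
	if (activeEffect) {
		getSubscribersForProperty(target, key).add(activeEffect);
	}
}

function trigger(target, key) {
	getSubscribersForProperty(target, key).forEach((effect) => effect());
}

function ref(value) {
	const refObject = {
		get value() {
			track(refObject, "value");
			return value;
		},
		set value(newValue) {
			value = newValue;
			trigger(refObject, "value");
		},
	};
	return refObject;
}

function whenDepsChange(update) {
	const effect = () => {
		activeEffect = effect;
		update();
		activeEffect = null;
	};
	effect();
}

// Usage: A2 stays in sync with A0 + A1.
const A0 = ref(1);
const A1 = ref(2);
const A2 = ref();
whenDepsChange(() => {
	A2.value = A0.value + A1.value; // reads track A0 and A1
});
// A2.value === 3
A0.value = 10; // triggers the effect, which recomputes A2
// A2.value === 12
```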

Vue provides an API that allows us to create reactive effects: watchEffect(). In fact, it works pretty similarly to the magical whenDepsChange() in the example. We can now rework the original example using actual Vue APIs:

import { ref, watchEffect } from "vue";

const A0 = ref(0);
const A1 = ref(1);
const A2 = ref();

watchEffect(() => {
	// tracks A0 and A1
	A2.value = A0.value + A1.value;
});

// triggers the effect
A0.value = 2;

Using a reactive effect to mutate a ref isn’t the most interesting use case - in fact, using a computed property makes it more declarative:

import { ref, computed } from "vue";

const A0 = ref(0);
const A1 = ref(1);
const A2 = computed(() => A0.value + A1.value);

A0.value = 2;

Internally, computed manages its invalidation and re-computation using a reactive effect.

So what’s an example of a common and useful reactive effect? Well, updating the DOM! We can implement simple “reactive rendering” like this:

import { ref, watchEffect } from "vue";

const count = ref(0);

watchEffect(() => {
	document.body.innerHTML = `count is: ${count.value}`;
});

// updates the DOM
count.value++;

In fact, this is pretty close to how a Vue component keeps the state and the DOM in sync - each component instance creates a reactive effect to render and update the DOM. Of course, Vue components use much more efficient ways to update the DOM than innerHTML. More to come on that.

The ref(), computed(), and watchEffect() APIs are all part of the Composition API, which is close to how Vue3’s reactivity system works under the hood. In fact, Vue3’s Options API simply wraps the Composition API internally.

Runtime vs Compile-time Reactivity

Vue’s reactivity system is primarily runtime-based: the tracking and triggering are all performed while the code is running directly in the browser.

The pros of runtime reactivity are that it works without a build step and has fewer edge cases. On the other hand, it is constrained by the syntax limitations of JavaScript, leading to the need for value containers like Vue’s refs.

Some frameworks, such as Svelte, choose to overcome such limitations by implementing reactivity during compilation. It analyzes and transforms the code in order to simulate reactivity. The compilation step allows the framework to alter the semantics of JavaScript itself - for example, implicitly injecting code that performs dependency analysis and effect triggering around access to locally defined variables. The downside is that such transforms require a build step, and altering JS semantics is essentially creating a language that looks like JS, but compiles into something else.

The Vue team did consider this option but chose not to go for it.

Connection to Signals

Quite a few other frameworks have introduced reactivity primitives similar to refs from Vue’s Composition API, under the term “Signals”.

Fundamentally, Signals are the same kind of reactive primitive as Vue refs: a value container that provides dependency tracking on access, and side-effect triggering on mutation.

This reactivity-primitive-based paradigm isn’t a particularly new concept in the frontend world and dates back to Knockout and Meteor, more than a decade ago. The Vue Options API and the React state-management library MobX are also based on the same principle, but hide the primitives behind object properties.

Relationship with Fine-Grained Reactivity

Although not a necessary trait for something to qualify as Signals, today the concept is often discussed alongside the rendering model where updates are performed through fine-grained subscriptions.

Due to the use of VDOM, Vue currently relies on a compiler to achieve similar optimizations. We’ll see more about this later.

However, Vue’s also exploring a new Solid-inspired compilation strategy, called “Vapor Mode”, that doesn’t rely on a VDOM at all and takes more advantage of Vue’s built-in reactivity system.

API Design Trade-Offs

The design of Preact and Qwik’s Signals is very similar to Vue’s shallowRef(): all three provide a mutable interface via the .value property. Thus, we’ll focus this discussion on Solid and Angular Signals.

Solid Signals

Solid’s createSignal() API design emphasizes read / write segregation. Signals are exposed as a read-only getter and a separate setter:

const [count, setCount] = createSignal(0);

// access the value.
count();

// update the value.
setCount(1);

We notice how the count Signal can be passed down without the setter. This ensures that the state can never be mutated unless the setter is also explicitly exposed. Whether this safety guarantee justifies the more verbose syntax may depend on the requirements of the project and personal taste - but in case we prefer this API style, we can easily replicate it in Vue:

import { shallowRef, triggerRef } from "vue";

export function createSignal(value, options) {
	const r = shallowRef(value);
	const get = () => r.value;
	const set = (v) => {
		r.value = typeof v === "function" ? v(r.value) : v;

		if (options?.equals === false) {
			triggerRef(r);
		}
	};

	return [get, set];
}

Angular Signals

Angular is undergoing some fundamental changes by foregoing dirty-checking and introducing its own implementation of a reactivity primitive.

The Angular Signal API currently looks like this:

const count = signal(0);

// access the value.
count();

// set a new value.
count.set(1);

// update based on previous value.
count.update((v) => v + 1);

// mutate deep objects with same identity.
const state = signal({ count: 0 });
state.mutate((o) => {
	o.count++
});

This API, replicated in Vue:

import { shallowRef, triggerRef } from "vue";

export function signal(initialValue) {
	const r = shallowRef(initialValue);
	const s = () => r.value;
	s.set = (value) => {
		r.value = value;
	};
	s.update = (updater) => {
		r.value = updater(r.value);
	};
	s.mutate = (mutator) => {
		mutator(r.value);
		triggerRef(r);
	};

	return s;
}

Summary

Compared to Vue refs, Solid and Angular’s getter-based API style provides some interesting trade-offs when used in Vue components.

Whether these API styles suit us or not is, to some extent, subjective. Our goal here is to demonstrate the underlying similarity and trade-offs between these different API designs.

Rendering Mechanism

How does Vue take a template and turn it into actual DOM nodes? How does Vue update those DOM nodes efficiently? We will attempt to shed some light on these questions here by diving into Vue’s internal rendering mechanism.

Virtual DOM

The VDOM is a programming concept where an ideal, or “virtual”, representation of a UI is kept in memory and synced with the real DOM. The concept was pioneered by React, and has been adapted in many other frameworks with different implementations, including Vue.

VDOM is more of a pattern than a specific technology, so there is no one canonical implementation.

We can illustrate the idea using a simple example:

const vnode = {
	type: "div",
	props: {
		id: "hello",
	},
	children: [
		/* more vnodes */
	],
};

Here, vnode is a plain JS object (a “virtual node”) representing a <div> element. It contains all the information that we need to create the actual element. It also contains more children vnodes, which makes it the root of a virtual DOM tree.

A runtime renderer can walk a virtual DOM tree and construct a real DOM tree from it. This process is called mount.

If we have two copies of virtual DOM trees, the renderer can also walk and compare the two trees, figuring out the differences, and apply those changes to the actual DOM. This process is called patch, also known as “diffing” or “reconciliation”.

The main benefit of a VDOM is that it gives the developer the ability to programmatically create, inspect, and compose the desired UI structures in a declarative way, while leaving the direct DOM manipulations to the renderer.
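A minimal sketch of what mounting involves, assuming a tiny vnode shape like the one above. To keep it runnable outside a browser, it produces an HTML string instead of real DOM nodes (a real renderer would call document.createElement() and friends):

```javascript
// Walk a vnode tree and serialize it (a stand-in for mounting to the DOM).
function renderToString(vnode) {
	// Text vnodes are plain strings.
	if (typeof vnode === "string") return vnode;

	const props = Object.entries(vnode.props ?? {})
		.map(([key, value]) => ` ${key}="${value}"`)
		.join("");
	const children = (vnode.children ?? []).map(renderToString).join("");

	return `<${vnode.type}${props}>${children}</${vnode.type}>`;
}

const vnode = {
	type: "div",
	props: { id: "hello" },
	children: ["Hello ", { type: "b", props: {}, children: ["world"] }],
};

renderToString(vnode);
// -> '<div id="hello">Hello <b>world</b></div>'
```

Patching follows the same recursive walk, except it compares two trees and only touches the DOM where they differ.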

Render Pipeline

At a high level, this is what happens when a Vue component is mounted:

  1. Compile

    Vue templates are compiled into render functions - that is, functions that return virtual DOM trees. This step can be done either ahead-of-time via a build step, or on-the-fly by using the runtime compiler.

  2. Mount

    The runtime renderer invokes the render functions, walks the returned virtual DOM tree, and creates actual DOM nodes based on it. This step is performed as a reactive effect, so it keeps track of all reactive dependencies that were used.

  3. Patch

    When a dependency used during mount changes, the effect re-runs. This time, a new, updated VDOM tree is created. The runtime renderer walks the new tree, compares it with the old one, and applies the necessary updates to the actual DOM.

![[Vue_Render_Pipeline_Phases_Digram.png]]

Templates vs Render Functions

Vue templates are compiled into VDOM render functions. Vue also provides APIs that allow us to skip the template compilation step and directly author render functions.

Render functions are more flexible than templates when dealing with highly dynamic logic, because we can work with vnodes using the full power of JavaScript.

So why, then, does Vue recommend templates by default? There are a number of reasons:

  1. Templates are closer to actual HTML. This makes it easier to reuse HTML snippets, apply accessibility best practices, style with CSS, and for designers to understand and modify.
  2. Templates are easier to statically analyze due to their more deterministic syntax. This allows Vue’s template compiler to apply many compile-time optimizations to improve the performance of the VDOM (more details below).

In practice, templates are sufficient for most use cases in applications. Render functions are typically only used in reusable components that need to deal with highly dynamic rendering logic.

More details on render functions in the Render Functions & JSX section of the official docs.

Compiler-Informed Virtual DOM

The virtual DOM implementation in React, like most other virtual-DOM implementations, is purely runtime: the reconciliation algorithm cannot make any assumptions about the incoming VDOM tree, so it has to fully traverse the tree and diff the props of every vnode in order to ensure correctness.

In addition, even if a part of the tree never changes, new vnodes are always created for it on each re-render, resulting in unnecessary memory pressure. This is one of the most criticized aspects of the VDOM: the somewhat brute-force reconciliation process sacrifices efficiency in return for declarativeness and correctness.

But it doesn’t have to be that way! In Vue, the framework controls both the compiler and the runtime. This allows it to implement many compile-time optimizations that only a tightly coupled renderer can take advantage of. The compiler can statically analyze the template and leave hints in the generated code so that the runtime can take shortcuts whenever possible. At the same time, the user can still drop down to the render function layer for more direct control in edge cases.

They call this hybrid approach Compiler-Informed VDOM.

Let’s look at a few major optimizations done by Vue’s template compiler to improve the VDOM’s runtime performance.

Static Hoisting

Quite often, there will be parts in a template that do not contain any dynamic bindings:

<div>
	<div>foo</div> <!-- hoisted -->
	<div>bar</div> <!-- hoisted -->
	<div>{{ dynamic }}</div>
</div>

See in details in Vue’s Template Explorer

The foo and bar divs are static - re-creating vnodes and diffing them on each re-render is unnecessary. The Vue compiler automatically hoists their vnode creation call out of the render function, and reuses the same vnodes on every render. The renderer is also able to completely skip diffing them when it notices the old vnode and the new vnode are the same one.

Since the identity’s the same (===), it doesn’t need to compare the structure.
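The idea can be sketched in plain JS (this is an illustration, not Vue's actual generated code): the hoisted vnodes are created once, outside the render function, so the patcher can skip them with a cheap identity check.

```javascript
// Created once at module scope, reused by every render() call.
const hoisted1 = { type: "div", children: ["foo"] };
const hoisted2 = { type: "div", children: ["bar"] };

function render(dynamic) {
	return {
		type: "div",
		children: [hoisted1, hoisted2, { type: "div", children: [dynamic] }],
	};
}

const patched = [];
function patchChildren(oldChildren, newChildren) {
	newChildren.forEach((next, i) => {
		// Same object identity => nothing could have changed, skip diffing.
		if (oldChildren[i] === next) return;
		patched.push(next.children[0]);
	});
}

const tree1 = render("a");
const tree2 = render("b");
patchChildren(tree1.children, tree2.children);
// Only the dynamic div was diffed: patched is ["b"]
```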

In addition, when there are enough consecutive static elements, they will be condensed into a single “static vnode” that contains the plain HTML string for all these nodes.

It’s unclear what “enough” actually means here; it doesn’t seem to be a hard count based on experimentation, and the details of the heuristic aren’t documented.

These static vnodes are mounted by directly setting innerHTML. They also cache their corresponding DOM nodes on initial mount - if the same piece of content is reused elsewhere in the app, new DOM nodes are created using native cloneNode() - which is extremely efficient.

Patch Flags

For a single element with dynamic bindings, we can also infer a lot of information from it at compile time:

<!-- class binding only -->
<div :class="{ active }"></div>

<!-- id and value bindings only -->
<input :id="id" :value="value">

<!-- text children only -->
<div>{{ dynamic }}</div>

See in details in Vue’s Template Explorer

When generating the render function code for these elements, Vue encodes the type of update each of them needs directly in the vnode creation call:

createElementVNode(
	"div",
	{
		class: _normalizeClass({ active: _ctx.active }),
	},
	null,
	2 /* CLASS */
);

The last argument, 2, is a patch flag. An element can have multiple patch flags, which will be merged into a single number. The runtime renderer can then check against the flags using bitwise operations to determine whether it needs to do certain work:

if (vnode.patchFlag & PatchFlags.CLASS /* 2 */) {
	// update the element's class
}

Bitwise checks are extremely fast. With the patch flags, Vue is able to do the least amount of work necessary when updating elements with dynamic bindings.
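Because the flags are powers of two, several of them can be merged into one number with bitwise OR and tested individually with bitwise AND. The flag values below match Vue's actual ones (TEXT = 1, CLASS = 2, STYLE = 4, PROPS = 8), but the surrounding renderer code is just a sketch:

```javascript
// A subset of Vue's patch flags; each is a distinct bit.
const PatchFlags = {
	TEXT: 1,
	CLASS: 2,
	STYLE: 4,
	PROPS: 8,
};

// A vnode that needs both a class update and a text update:
// CLASS | TEXT merges the two flags into the single number 3.
const vnode = { patchFlag: PatchFlags.CLASS | PatchFlags.TEXT };

const work = [];
if (vnode.patchFlag & PatchFlags.CLASS) work.push("update class");
if (vnode.patchFlag & PatchFlags.TEXT) work.push("update text");
if (vnode.patchFlag & PatchFlags.STYLE) work.push("update style");
// work is ["update class", "update text"]
```

Every check the renderer can answer with a single AND is a full props diff it never has to run.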

Vue also encodes the type of children a vnode has. For example, a template that has multiple root nodes is represented as a fragment. In most cases, we know for sure that the order of these root nodes will never change, so this information can also be provided to the runtime as a patch flag:

export function render() {
	return (
		_openBlock(),
		_createElementBlock(
			_Fragment,
			null,
			[
				/* children */
			],
			64 /* STABLE_FRAGMENT */
		)
	);
}

The runtime can thus completely skip child-order reconciliation for the root fragment.

Tree Flattening

Taking another look at the generated code from the previous example, we notice the root of the returned VDOM tree is created using a special createElementBlock() call:

export function render() {
	return (
		_openBlock(),
		_createElementBlock(
			_Fragment,
			null,
			[
				/* children */
			],
			64 /* STABLE_FRAGMENT */
		)
	);
}

Conceptually, a “block” is a part of the template that has a stable inner structure. In this case, the entire template has a single block because it does not contain any structural directives like v-if and v-for.

Each block tracks any descendant nodes (not just direct children) that have patch flags. For example:

<div> <!-- root block -->
	<div>...</div>           <!-- not tracked -->
	<div :id="id"></div>     <!-- tracked -->
	<div>                    <!-- not tracked -->
		<div>{{ bar }}</div> <!-- tracked -->
	</div>
</div>

The result is a flattened array that contains only the dynamic descendant nodes:

div (block root)
- div with :id binding
- div with {{ bar }} binding

When this component needs to re-render, it only needs to traverse the flattened tree instead of the full tree. This is called Tree Flattening, and it greatly reduces the number of nodes that need to be traversed during VDOM reconciliation. Any static parts of the template are effectively skipped.
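A sketch of the collection step with an assumed vnode shape (not Vue's internals): gather every flagged descendant, at any depth, into a flat dynamicChildren array, then patch only that list.

```javascript
// Recursively collect descendants that carry a patch flag.
function collectDynamic(vnode, out = []) {
	if (vnode.patchFlag) out.push(vnode);
	(vnode.children ?? []).forEach((child) => {
		if (typeof child !== "string") collectDynamic(child, out);
	});
	return out;
}

// The template from above: 5 element nodes, 2 of them dynamic.
const block = {
	type: "div",
	children: [
		{ type: "div", children: ["..."] },              // static, not tracked
		{ type: "div", patchFlag: 8 /* PROPS (:id) */ }, // tracked
		{
			type: "div",                                 // static wrapper
			children: [{ type: "div", patchFlag: 1 /* TEXT ({{ bar }}) */ }],
		},
	],
};
block.dynamicChildren = collectDynamic(block);

// Patching walks the flat list (2 nodes) instead of the whole tree (5 nodes).
const visited = block.dynamicChildren.map((node) => node.patchFlag);
// visited is [8, 1]
```

In real Vue, this list is built during render via openBlock()/createElementBlock() rather than by an after-the-fact traversal, but the resulting flat structure is the same idea.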

v-if and v-for directives will create new block nodes:

<div> <!-- root block -->
	<div>
		<div v-if> <!-- if block -->
			...
		</div>
	</div>
</div>

A child block is tracked inside the parent block’s array of dynamic descendants. This retains a stable structure for the parent block.

Impact on SSR Hydration

Both patch flags and tree flattening also greatly improve Vue’s SSR hydration performance.
