Notice to Reader

Learning a framework that is still growing and changing is a difficult task. Creating helpful resources that keep up with it is a challenge on top of that. Here are some resources to help you along your journey as we build out the documentation.

This book is still very much a work in progress. It is online in the hope that it will help folk, even in its very incomplete and unedited state.

Tips for learning Leptos

  1. Join the discord server
  2. Keep an eye on the official written documentation. You can find a roadmap here.
  3. Subscribe to the YouTube channel of Greg Johnston (gbj), the founding author of Leptos.
  4. Listen to the Rustacean Station Podcast episode with Greg Johnston
  5. Leptos is heavily inspired by the SolidJS framework. Watch the YouTube video in which Greg talks to the author of SolidJS, Ryan Carniato, about Leptos.
  6. Watch some great Leptos videos on YouTube to familiarize yourself with the concepts of the framework, like Fine Grained Reactivity with Leptos, Rust, and WASM by Chris Biscardi.
  7. Read the official documentation. In the top left corner you'll see a single cube in the header nav with an arrow. Within that menu you'll find a list of the crates Leptos depends on, which document more features in detail, like leptos_server, leptos_reactive, leptos_dom, and leptos_config.
  8. Take a look at the official Leptos repo for the most up to date inline documentation.
  9. Review and build projects from the examples folder which show how Leptos can be used to build applications.

The community working on Leptos is aware of the missing pieces and is working hard to grow the framework while helping other people get involved. If you see a specific topic not covered in the documentation, we invite you to author the article and lend a hand to those who come after you.

Preface

This book is in a rough "notes" state, formed as a collection of lessons. The structure of the book and its content will change significantly through its development. It is published online in this state in an effort to share continued development and progress. My hope is that the information herein is useful in some capacity. A document of this scope is a significant undertaking, and getting it right as Leptos, Rust, and my own knowledge evolve will take time.

And so, pardon the multitude of typos, partial thoughts, potentially out of date bits of code—caveat emptor and enjoy.

Peace, John

About this Book

This book was written in Obsidian and assembled for online reading with mdbook.

Other places to learn Rust

Websites

  • https://tourofrust.com — A fantastic step by step overview of Rust's language features

Videos

Books

Interactive Exercises

  • Rustlings — A set of Rust examples with problems that you'll need to fix to progress through the exercises.
  1. Install Rust
  2. Using up-to-date versions of rustc with Nightly
  3. Using up-to-date versions of Leptos from git

1. Install Rust

Detailed instructions on how to install Rust on your computer can be found here: https://www.rust-lang.org/tools/install

Installing Rust will add a few things to your system.

  1. rustc - the Rust compiler
  2. rustup - a tool for managing rustc and the Rust toolchain (https://rustup.rs)
  3. cargo - the package manager and helper tool for Rust (https://doc.rust-lang.org/stable/cargo/)

2. Using up-to-date versions of rustc with Nightly

rustc is the Rust compiler. It's possible to run different versions of the compiler. The Rust
community is always adding new features, and these become available immediately through
nightly builds. Leptos, being brand new, makes use of some of these new features and currently
requires nightly to run.

To confirm that you're using the nightly build of rustc (the rust compiler), open your
shell/terminal and run the following command:

rustc -V  

It should output something like this with 'nightly' in it:

rustc 1.67.0-nightly (e631891f7 2022-11-13)  

If your version isn't the nightly build, run the following shell/terminal command:

rustup default nightly  

Rustup is used to manage rustc. By running the above, rustc is updated to use the nightly build
as its default. You can change this back to stable by using the following shell/terminal command:

rustup default stable  

3. Using up-to-date versions of Leptos from git

Leptos is changing all the time as well. It's recommended to grab the latest version directly
from their git repository instead of from crates.io (https://crates.io/crates/leptos).

I'll go into detail on exactly how to do this when we start building our app. Don't stress if
the following looks unfamiliar.

[dependencies]  
leptos = { git = "https://github.com/gbj/leptos" }  

Creating your first app

  1. Using cargo to create a new rust app
  2. Running your first rust app
  3. Adding Leptos to your application as a dependency
  4. Adding index.html to your application
  5. Serving your index.html and bundling WASM with trunk
  6. Updating client side HTML using Leptos

1. Using cargo to create a new rust app (cargo new)

New rust projects are created with the following terminal command:

I'm calling my project tut-leptos-client-side-event, keeping in mind that we're testing out how to handle a simple client side event.

cargo new tut-leptos-client-side-event  

Did you know?
Cargo new will create the new project in your current working directory. You can add a path to the application name to change where it's scaffolded. For example, cargo new ~/dev/my-new-app will create a new Rust app in the dev directory inside your ~/ user home directory. If you see ~/, know that it's shorthand for your user
home. On OSX that would be /Users/your-user-name.

When cargo runs with the new command, it creates the folder tut-leptos-client-side-event.

This folder gets setup with a few important things.

  1. A src directory that will contain all of our source code
  2. A src/main.rs file, which contains our main function. This is called to start our application, and everything runs from the code inside of it.
  3. A Cargo.toml file which contains metadata about our app and its dependencies.
  4. A target directory, created when the app is compiled, that will contain build output. Ignore this folder for now.

2. Running your first rust app (cargo run)

Recall that we just made a new app with cargo new tut-leptos-client-side-event. Now we want to run it! The terminal/shell command cargo run will compile and run our app. Entering this command right away will not work, though. You'll get an error message:

error: could not find `Cargo.toml` in `.......` or any parent directory

Cargo needs that Cargo.toml file for context. It has information about which version of Rust to compile for, which external bits of code (dependencies) need to be gathered to do the compilation, and so forth.

Changing your present working directory to the directory created by cargo new gives Cargo the Cargo.toml it needs for context, letting us compile the app.

cd – is the terminal/shell command for changing directory

pwd – is the terminal/shell command for printing the present working directory

The following commands need to be entered individually, one line at a time. The first command changes the present working directory to our user home directory:

cd ~/  
cargo new tut-leptos-client-side-event  
cd tut-leptos-client-side-event  
cargo run  

The application will take a brief period to compile and it'll print Hello, world! to your terminal/shell.

3. Adding Leptos to your application as a dependency

We're going to add leptos to the mix as a dependency for our Rust application.

First, let's take a look at our stock Cargo.toml:

[package]  
name = "tut-leptos-client-side-event"  
version = "0.1.0"  
edition = "2021"  
  
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html  
  
[dependencies]  

Note that we have no dependencies listed. All that exists is the heading [dependencies].

Normally we'd use cargo to help us add dependencies. We'd need to call cargo in the
context of our Rust application's Cargo.toml, like we did with cargo run.

From within the tut-leptos-client-side-event folder we can call the following terminal/shell
command:

cargo add leptos  

Our Cargo.toml now includes the following:

[dependencies]  
leptos = "0.0.18"  

In getting started we talked about using the git repository to grab the most up to date version of the dependency instead of the version published on crates.io (the rust package repository).

To do this we'll actually change the leptos entry to:

leptos = { git = "https://github.com/gbj/leptos" }  

4. Adding index.html

Our Rust application will compile to WASM. That WASM will interact with a web page to create our client side experience. For this to work, we'll need to create an index.html.

Create this file in the root of your app, alongside Cargo.toml. Your app directory should look like this:

/tut-leptos-client-side-event  
	/src
		main.rs
	Cargo.toml
	index.html

The index.html file should contain the following:

<!DOCTYPE html>  
<html>  
<head>  
    <title>Leptos App</title>
	<link data-trunk rel="rust" data-wasm-opt="z"/>
</head>  
<body></body>  
</html>    

The important part of this is the following tag:

<link data-trunk rel="rust" data-wasm-opt="z"/>

A tool called trunk is going to eventually put all of these pieces together. The above <link> element will be replaced with our Rust application, compiled to WASM.

5. Serving your index.html and bundling WASM with trunk

To use our application on the web, we need to serve it and bundle the WASM with the HTML.

We're going to use a tool called trunk which will do a few things:

  1. It'll serve index.html so that we can view it in our browser
  2. It'll use cargo to compile the application to WASM
  3. It'll attach the compiled WASM to our index.html, replacing <link data-trunk rel="rust" data-wasm-opt="z"/>

You will need to install the trunk tool. Instructions can be found here: https://trunkrs.dev/#install

For convenience, this will probably work:

cargo install --locked trunk

To serve your app, use the following terminal/shell command while your application root is your present working directory:

trunk serve

You'll see a variety of diagnostic information output to your prompt. The important lines are these:

2022-11-26T15:40:19.251657Z  INFO 📡 serving static assets at -> /
2022-11-26T15:40:19.251861Z  INFO 📡 server listening at http://127.0.0.1:8080

Listening at 127.0.0.1:8080 means that you can type that address into your web browser to send a request to the server, and it will respond with the static files, your compiled WASM bundled in with them.

You now have a web page!

6. Updating client side HTML using Leptos

6.1 Understanding main.rs

At this point we have a Rust application which compiles to WASM and we have a server running, listening at 127.0.0.1:8080 for requests, responding with our index.html and linked assets, most importantly our Rust application in WASM form.

What we don't have here is anything that updates our index.html or any form of interaction between our Rust application (WASM) and the DOM (Document Object Model — a name for the hierarchy of html elements/nodes) in the index.html.

We've added leptos to our application as a dependency, and now we're going to put it to use.

If we look in our src/main.rs we can see the following:

fn main() {  
    println!("Hello, world!");  
}

Here we have a function (declared with the keyword fn) with the name main. The ( and ) are like bookends that encapsulate a function's parameters (the buckets that hold the arguments, or values, passed in when the function is called). In this case, the main function doesn't require anything to run, so there is nothing between the parentheses after the function name. The following set of curly braces { and } encapsulates the function body. This is what will be evaluated when the function is run; the work being done. This is the most minimal example of a function signature. There are more things that can be added but we'll get into those later.

The body of the function contains a single statement. Statements need to end with ;. You can think of the semicolon as a terminator for the end of an instruction or step that you want the application to perform.

Let's look at the content of this line.

We have println!("Hello, world!");

We can look at this as some-command(some-arguments)end

The command is println!, the argument is a sequence of characters wrapped in quotes (a convenient way to tell the compiler that you mean the characters themselves and not other commands or variables), and the ; semicolon marks the end of the statement.

The command println! is provided by Rust's standard library for you to use to output text to the terminal. If you run your application you'll see Hello, world!, and this is why.

Important: We've glossed over how to write your own functions with parameters. We've also skipped over how to write functions that return values. Don't worry, we'll cover that when appropriate.

Macros

We saw before that the main function is written as fn main(){}. There is no ! after main. But there is a ! after println!.

The ! indicates that the command is a macro. Macros are like code snippets or code templates that get expanded by the Rust compiler before the final compilation.

There are function-like macros such as println!, which are usually invoked with () encapsulating their arguments. There are also procedural macros like view!, which are conventionally invoked with {} encapsulating a body of code that gets consumed by the macro. (Strictly speaking, Rust accepts (), [], or {} for any macro invocation; which delimiter a macro uses is a convention.)

As you can imagine, there is a lot involved in actually printing something to the terminal, but we can ignore the complexity with things like println!.

Macros have parameters which you can pass arguments to, just like functions. Leptos makes extensive use of macros to make our lives easier. They're wonderful!
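
For example, println! itself takes extra arguments that it splices into the quoted text wherever it sees {}. This is plain Rust, nothing Leptos-specific:

fn main() {
    let name = "world";
    // The {} placeholder is filled in with the value of `name`.
    println!("Hello, {}!", name); // prints: Hello, world!
}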

Important: Macros have the ability to parse (read through to understand/process) their arguments differently from standard Rust code. Keep in mind that the macro author is usually trying to do things that make life easier for the developer using their macro. Sometimes this includes reducing noisy syntax that would normally be required, or making inferences that can be assumed.

6.2 Updating main.rs to use Leptos

Now that we understand how Rust functions work we can start to bring Leptos into our main.rs.

We do this by telling the compiler that we want to use leptos. We've added leptos as a dependency in our Cargo.toml, so it now exists in our 'application universe' as a thing.

But, it doesn't exist in our main.rs because we haven't brought it into scope yet. Bringing things into scope is like bringing things to a workbench or crafting table to use. You need those things at hand, where you're working, so that when you refer to them the compiler knows what you mean and has the bits of code to actually use.

When we write use leptos::*; at the top of our main.rs file, we're telling Rust, "use a thing called leptos, which you should be aware of because we defined it in our Cargo.toml, and bring ALL of its pieces into scope for us to use." The :: is a separator, the same way a slash is a separator for hierarchy in your computer's file system. The * refers to 'everything'.

use leptos::*;

reads as

use everything from leptos.

To visualize this, think of it as taking a box of tools called "Leptos" and dumping all of them out on your workbench. You can now grab any one of them for use.
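
If you only wanted one tool on the bench, you could name it directly instead of using the glob. A small illustration:

// To bring just one tool to the bench you could write:
// use leptos::mount_to_body;

// We instead dump the whole box out at once:
use leptos::*;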

6.3 Updating fn main() to interact with your html

In our main.rs we have a fn main(){}. Currently it prints "Hello, world!" to our standard out (terminal/console).

What we want to do is to change the HTML in index.html when the WASM loads, which is also when the fn main() runs.

We'll use a function called mount_to_body, which is provided as a tool in leptos, made available in this scope (this main.rs file) with the use statement.

use leptos::*;

fn main() {
	mount_to_body()
}

mount_to_body requires some arguments to run correctly.

Specifically, it requires a closure. It requires a value that is actually 'runnable' or 'callable'.

Closures

A closure is functionality as a first class citizen. This means that it's a function that can be stored as a value and passed around to be called later.

fn print_hi() {
	println!("Hi");
}

A standard function definition.

fn main(){
	// let tells the compiler to assign
 	// the value of greeter to whatever is 
 	// after the = and before the semicolon.
	let greeter = || {
		println!("Hi");
	};
}

Single line comments in Rust are prefixed by // at the beginning of the comment. These tell the compiler to ignore anything after them on that line.

|| {
	println!("Hi");
}

A closure

The above closure syntax is like a function, but it doesn't have a name because we're expecting to assign the functionality to a name. Like we did with greeter above.

The parentheses that normally encapsulate a function's parameters are replaced with pipe characters to disambiguate the two. The body of the closure, just like the body of a function, is encapsulated by curly braces.
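
Closures can also take parameters; they simply go between the pipes. A quick illustrative example (add_one is made up for this snippet and isn't used in our app):

fn main() {
    // The parameter x sits between the pipes, where a regular function
    // would use parentheses.
    let add_one = |x: i32| x + 1;
    println!("{}", add_one(41)); // prints 42
}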

The following shows how a function and a closure can be called:

print_hi(); // This was a function
greeter();  // This was a value `greeter`

The coolest thing here is that we can see both print_hi and greeter are names that exist in our application's context. They're ideas. Both of them are callable. And we can call them by adding parentheses at the end.

This starts to hint at some of the underlying simplicity of a lot of programming. At the end of the day, we're giving names to things so that we can specify to the computer, what is what. Then we evaluate or run a bit of functionality, and give the result a name so that we can do something else after. It's this over and over again, all the way down.
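
To make "stored as a value and passed around to be called later" concrete, here's a minimal sketch. run_twice is a made-up helper for this illustration, not something from Leptos:

// A function that accepts any callable value and calls it twice.
fn run_twice(work: impl Fn()) {
    work();
    work();
}

fn main() {
    let greeter = || {
        println!("Hi");
    };
    // greeter is just a value, so we can hand it to another function
    // that decides when (and how often) to call it.
    run_twice(greeter);
}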

Using mount_to_body

Recall that our application's main.rs looked like this

use leptos::*;

fn main() {
	mount_to_body()
}

We have the mount_to_body function being called when the application runs as WASM, once index.html is served along with the WASM resource.

This function needs functionality to call. It needs a closure. We have the opportunity to tell it what to do, with the assumption that when mount_to_body runs, it'll provide us the context in which it's running. Think of this as a scope. We can make this assumption because mount_to_body specifies that it needs a closure that accepts one argument, which we know to be the context, abbreviated here as cx.

use leptos::*;

fn main() {
	mount_to_body(|cx|{})
}

The above shows what an empty closure being passed to mount_to_body looks like. What this doesn't show is that the closure needs to return something that can be mounted.

If you ran the above you'd probably receive an error like:

T: Mountable, required by this bound in `leptos::mount_to_body`

The error messages will get easier to read over time, but it essentially says, "The return type of the closure can't be used by the internals of mount_to_body. It was expecting something specific to come out of your instructions."

To solve this problem we're going to use the view! macro provided by leptos.

use leptos::*;

fn main() {
	mount_to_body(|cx|{
		view! {  
	        cx,  
	        <h1>"Hello, world!"</h1>  
	    }
	})
}

We've written the following in the body of the closure being provided to mount_to_body:

view! {
	cx,
	<h1>
		"Hello, world!"
	</h1>
}

This procedural macro, view!, has a body which starts with cx, the context that will be provided to it by mount_to_body when it's run (again, this is inside mount_to_body and evaluated at a later time), followed by the view, or HTML, to mount.

There must be one top-level item, and all text needs to be quoted. It's a JSX-like syntax and beautifully streamlined to write.

If you had trunk serve running this whole time, you can visit http://127.0.0.1:8080 to see your "Hello, world!"

Or, make sure your present working directory is the root of your application, type trunk serve and visit http://127.0.0.1:8080 to see your first working leptos WASM client side awesomeness!

Introduction

Intro to HTML

What we know

In Setup we developed a cursory understanding of:

  • how to create a generic Rust application
  • how to add Leptos as a dependency to our Rust application
  • how to serve a file with trunk
  • how to update an HTML file with Rust, using Leptos's mount_to_body function and its view! macro

What we'll learn

  • Working with HTML and developing a mental model

Where we're at

Code from our main.rs looks like this:

use leptos::*;  
  
fn main() {  
    mount_to_body(|cx| {  
        view! {  
            cx,  
            <h1>"Hello, world!"</h1>  
        }  
    })  
}

The main function is run when our application runs as WASM.

Recall that the trunk tool uses cargo (which in turn uses rustc) to compile our application to the WASM target, which gets served and linked to our index.html. We view (request) the page in our browser, loading the HTML and linked WASM, kicking the whole thing off.

When the application runs, the function mount_to_body is called (runs). We pass (or provide) a closure (a bit of functionality stored as a value) as an argument to its callback parameter (the bucket that holds things that mount_to_body needs to run, its "function dependencies").

When mount_to_body runs, it takes the functionality we've provided as a closure (a strategy, if you will) and calls it (makes it run) with its runtime context cx. This does all of the heavy lifting to write our heading into the body of our HTML page in index.html.

Lesson: Working with HTML and developing a mental model

We're not doing much more than creating a static template. If this is all you need, better to stick with a plain old HTML file.

HTML Elements and Tags

HTML is made up of elements. There is a whole list of HTML elements ready for use and supported by all current browsers, from headings and paragraphs to form elements for collecting data from users. These elements are written using HTML tags: <h1>, <p>, and <input> respectively.

Tags with content

Some tags have content. The syntax is to encapsulate the content or wrap it with opening and closing tags. The closing tag has a slash before the tag name.

	<h1>Some Content</h1>

This heading 1 tag has content, which requires a closing tag so that its content can be wrapped/encapsulated.

What's neat about this opening and closing tag business is that it's not that different from when we called a function and provided an argument (value) for its parameter. As time goes on you'll start to see a pattern emerging. The above isn't all that different from:

h1("Some Content")

Tags without content

Some tags don't have content. To express a tag without content we add a forward slash at the end of the tag. This tells browsers that there is no closing tag.

<hr />

This Horizontal Rule tag doesn't have a closing tag

Tag configuration with properties and attributes

HTML elements can be configured by setting values for supported properties and attributes. If you've played around with HTML before, you'll probably have seen common properties like id and class:

<h1 id="my-unique-heading">Hello, world!</h1>

Some properties have specific requirements for their values. id, for example, should have a unique value across all elements on the page.

<input name="first-name" placeholder="Enter your name..." type="text" />

The input tag has a type which completely changes how it's rendered (displayed to the user).

The browser as interpreter

When you send a request to the server, it returns a response which has a body (the data, often as text) and headers (meta information about the body). Information in the headers tells the browser how to interpret the body.

Analogy time: Imagine if you went to a library and asked a librarian for a book. This is like you, the web browser, submitting a request to a server. The librarian (the server) will then provide a response to your request. They may return with the book and a slip of paper saying, "I found the book and this book is in English." We can now use our knowledge of the English language to parse the book (turn it into meaningful data) and understand it.

Traditionally, servers respond to web requests telling the browser that the response body is text/html. A browser very deeply wants to render your page for you, so it dutifully reads through what it's been told is HTML, parses it into meaningful data (the DOM, Document Object Model), and renders it to the screen.

Things like:

<input name="first-name" placeholder="Enter your name..." type="text" />

Turn more into an object (a thing) with the following properties:

HTML Element Type = "input"
name = "first-name"
placeholder = "Enter your name..."
type = "text"

This isn't real code, but it does look a lot like what we'll call a struct in Rust later. Once again we can see similar shared underlying principles. This idea of "a thing with stuff" comes up time and time again.

There may be bits of information that the browser doesn't understand. Instead of crashing it often ignores this unknown information, or makes assumptions about it to still continue to render the page.

Browsers and HTML rendering engines are extremely complex and downright magical. We can throw so much at them and they keep on going.

What the element?

Recall that earlier we talked about HTML elements, properties, and attributes. It might feel like HTML is an expressive programming language, but it is actually what we would call a DSL (Domain Specific Language). Its instructions pertain specifically to rendering web pages and tell the browser our intent. We declare what we want, and it's up to the browser to decide how to render it.

In standard HTML you can not just make up properties or elements/tags. It might look like we're choosing to write h1 because it's convenient for us to think about a primary heading as an h1, but this is actually part of the specification of HTML.

Developers can now create their own custom elements with JavaScript, but we're going to ignore that for now. Just know that it does exist, but more work is required than just writing your own tag names.

What does this mean for the view! macro and which HTML elements we can use in it?

The content that we place in our view! macro is interpreted by the view! macro when the Rust compiler expands it. It takes what we've provided and says, "Ok, so this is what you want... but the rest of the application can't work with this. What you've written isn't actually HTML and it's not actually Rust. I'll parse this input and rewrite it so that the rest of our application can use it, saving you from the verbosity and potentially error-prone nature of writing it yourself."

In the next lesson we'll learn about making components which we can compose and how Leptos allows us to have custom components/elements while still generating HTML that the browser can parse and understand according to the HTML spec.

HTML and the view! macro

What we know

  • HTML is a specification for a domain specific language that is parsed by a web browser to render a web page
  • The web browser will do its best to render a page, ignoring or gracefully interpreting code in a page that doesn't match the HTML specification.
  • The view! macro accepts its arguments between curly braces {}: a context cx and HTML-like markup.

What we'll learn

  • Creating custom components

Where we're at

Code from our main.rs looks like this:

use leptos::*;  
  
fn main() {  
    mount_to_body(|cx| {  
        view! {  
            cx,  
            <h1>"Hello, world!"</h1>  
        }  
    })  
}

We can see that after the context variable we have some HTML-like text. If this were HTML, you'd have a heading that reads "Hello, world!" with the quotes displaying. However, this isn't pure HTML, and the macro will process the template to remove the quotes.

View! macro syntax

The view! macro documentation nicely details its basic and advanced syntax. It's a JSX-like syntax. We'll slowly touch on all of the features as we continue to learn Rust and Leptos.

For now, the important thing to remember is that strings need to be quoted.

@todo, highlight that a view contains element markup and text nodes. That's it

Adding more to a view

You can continue to add other HTML elements as if you were writing plain HTML. Line breaks and indentation will not break the syntax. HTML code often has a lot of line breaks or white space from code formatting. Web browsers will ignore this unless you specify that you want the white space retained. We won't go into that in this guide. Multiple space characters will get coalesced into a single space.

view! {
	cx,
	<h1>"Hello, world!"</h1>
	<p id="NiceAffirmation">"
		I know things are hard, 
		but I think you're doing great!"
	</p>
}

Note that we have an id attribute set for the paragraph with a quoted value.

Custom elements

Recall that our Rust view! macro input is not actually HTML. It gets processed and converted into HTML. This gives us some extra freedom, like the ability to write our own custom elements in Rust with their own templates.

view! {
	cx,
	<h1>"Hello, world!"</h1>
	<NiceAffirmation />
}

The above code will be converted into the following HTML:

<h1>Hello, world!</h1>
<NiceAffirmation></NiceAffirmation>

The browser doesn't know what to do with the tag <NiceAffirmation>, so it treats it as a generic element that doesn't do anything.

We can define a template for our "NiceAffirmation" component and have it render output as if the element existed in the HTML spec. In a sense, we can make up our own specification for our own application using domain specific component names, and then let Leptos handle the rest.

In Leptos, we call these custom elements components.

Registering a custom element with a component function

You're probably already thinking, "I can imagine how I would want to break my application down into small components which I can compose/combine together." Thankfully, Leptos makes that exceptionally easy to do.

We do this by writing a function that returns (or evaluates to) the result of a view! macro.

In the following example we're using a pseudo-HTML component tag (our Leptos component tag) <NiceAffirmation />.

view! {
	cx,
	<h1>"Hello, world!"</h1>
	<NiceAffirmation />
}

When the macro runs and expands the code, it'll look for a function that can be used to replace the <NiceAffirmation> tag. Leptos is magical and will do this lookup for us, calling that function and embedding the correct template as a replacement for our Leptos component tag.

#[component]
pub fn NiceAffirmation(cx: Scope) -> Element {
    view!{
        cx,
        <p>"You look nice today."</p>
    }
}

The definition of the function that will handle template generation for <NiceAffirmation />

What we know

  • The view! macro has a body encapsulated by {...} with two parts, the context/scope and the template, separated by a comma.
  • The view! macro can accept:
    • Quoted text
    • HTML element tags, written in lower case
    • Custom web component element tags, written in kebab-case (i.e. my-custom-component)
    • Leptos component tags, written in PascalCase (i.e. MyCustomComponent)
  • Leptos components only know what to render in place of their component tag if we provide a function with the same name as the component (in PascalCase), with #[component] on the line directly before the function's definition.
  • Rust's basic function syntax of fn my_function_name(){}.

What we'll learn

  • Creating custom components
  • How components work

The lesson

In the previous lesson we presented the following code for a Leptos component, but we did not explain the code:

#[component]
fn NiceAffirmation(cx: Scope) -> Element {
    view!{
        cx,
        <p>"You look nice today."</p>
    }
}

Above we have a Leptos component (a render function) which, when called, yields the result of a view! macro. Within the macro we have our standard two pieces, the context/scope and the template markup.

Breakdown

The #[component] attribute and "meta programming"

Rust makes use of a special attribute syntax which the compiler can use to process your source code before it's compiled. This is often called "meta programming" because part of our application is responsible for writing another part of our application.

Library authors include features like this so that users of the library can focus on writing domain specific code (code relating to the specific problem they're solving).

The effect of this is that we, the user of Leptos, only have to worry about writing a function that tells the application what the result of rendering a component yields (returns). What we don't have to worry about is writing the code to make sure the function is called if the component is used in other view! macros.

Function definition, arguments, types, and returns

The following line in our code is a function definition:

fn NiceAffirmation(cx: Scope) -> Element {
    // ...the function body goes here...
}

It defines the idea of doing some work with a noun (the function's name) so that we can refer to it in the context of our application. This idea of, "we know nothing until we define it," is an important concept in communication in general but especially so in programming.

Breaking down the function definition

fn - The definition starts with this keyword which is an abbreviation for function. It tells the compiler that we're about to define a term for a process/task that can be done (called).

NiceAffirmation - The name of the function in PascalCase. This name allows us to refer to the function so that if we say, "Hey computer, do NiceAffirmation," it'll know where to look up what that means. It is important to note that standard function naming in Rust is written in snake_case, all lowercase letters with words separated by underscores. Leptos components use PascalCase so that the function responsible for rendering a component will match its tag name. This deviates from standard Rust convention.

(...) - Some tasks require additional "things" for the task to be carried out. I use the term things because the requirements can be varied. Some tasks may require specialized tools (other tasks/processes), some tasks may require something to be worked upon (a subject), and some tasks require ancillary information that acts as a reference (reference data). Parentheses after the function name encapsulate this required data. These are called function parameters and they are written out separated by commas. The values passed into these parameters are called function arguments.

cx: Scope - Each parameter listed between ( and ) in a function's definition is written using a name that we can use to refer to it when doing the work in the body of the function, and the classification of what it is (its type). The parameter name, exemplified here as cx, is written in snake_case, and the type, written here as Scope, is written in PascalCase. This helps disambiguate the two. Rust requires us to know the type of everything! But when you think about it, this makes complete sense. For example, imagine if we described a task called paint_fruit_still_life. To do this work we need an artist who must be a Painter, and a subject to paint which must be Fruits. It's important to note that we're just making this stuff up. We're describing the interaction of data to the application. Programming is often about setting up relationships. We would also want to guarantee that we always expect the result of this task to be a Painting. It is up to us to define what it means to be a Painter, what Fruits are, and what a Painting is! A definition for this could look like fn paint_fruit_still_life(artist: Painter, subject: Fruits) -> Painting {}. In the case of Leptos components, the first thing we're accepting is the runtime context, which we give the name cx and which is of the type Scope. Scope is defined by Leptos and brought into the context of our application with the previously described use leptos::* (include all *) use statement.
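
Here's the painting analogy written out as code. Every type in it is made up for illustration; none of these exist in Leptos or the standard library:

// Purely illustrative types for the analogy above.
struct Painter;
struct Fruits;
struct Painting;

fn paint_fruit_still_life(_artist: Painter, _subject: Fruits) -> Painting {
    // ...the painting happens here...
    Painting
}

fn main() {
    let _result: Painting = paint_fruit_still_life(Painter, Fruits);
}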

It's kind of fun to think about how much we imply these types in real life. If any of you have interacted with kids you can witness first hand how important it is to define the nouns we use and be clear about expectations.

-> Element - The thin arrow followed by the name of a type indicates the result of running a function or doing a task. In the case of our Leptos component, the return type is an Element. This type is defined by Leptos and imported by our previously described use leptos::* (include all *) use statement. Some functions may not have this if they do not return anything as the result of doing their work.

Function body and expressions

The body of a function is encapsulated by curly braces {...}. This is a scope. What happens in the scope stays in the scope. A function will return the result of evaluating the last expression of its function body. You can think of statements like sentences, only they end with semicolons. This means that the final expression without a semicolon acts as the 'final word' for what a function yields. This is why the view! macro in our example does not have a semicolon at the end. The function runs, the last expression is the view! macro, which when evaluated yields an Element. Rust also allows you to cut a function short by placing the return keyword before an expression.
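
A small, self-contained example of both styles (these functions are illustrative and not part of our app):

fn double(x: i32) -> i32 {
    x * 2 // no semicolon: this expression's value is what the function yields
}

fn double_with_return(x: i32) -> i32 {
    return x * 2; // same result, cutting the function short with `return`
}

fn main() {
    println!("{} {}", double(21), double_with_return(21)); // prints: 42 42
}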

What we learned

By defining the following function with the #[component] annotation, we can tell Leptos how to render specific HTML in place of a Leptos component tag in other view! macros' templates.

use leptos::*;

fn main() {  
    mount_to_body(|cx| {  
        view! {  
            cx,  
            <NiceAffirmation />  
        }  
    })  
}

#[component]  
fn NiceAffirmation(cx: Scope) -> Element {  
    view!{  
        cx,  
        <p>"You look nice today."</p>  
    }  
}

Variables and the view! macro

What we know

  • The view! macro can be used to create HTML
  • The view! macro can contain custom web components (using kebab-case names and requiring at least one hyphen) and Leptos components (using PascalCase)
  • Leptos components are defined by writing a function with the name of the component, using a standardized function signature (parameters and return type), and adding metadata to the function so that Rust will pre-process it and turn it into a component function for you behind the scenes.
  • Leptos components can be nested in other Leptos components

What we'll learn

  • How to store a number in a variable (define a variable)
  • An introduction to types and memory safety

The lesson

In the previous lesson we presented the following code

use leptos::*;

fn main() {  
    mount_to_body(|cx| {  
        view! {  
            cx,  
            <NiceAffirmation />  
        }  
    })  
}

#[component]  
fn NiceAffirmation(cx: Scope) -> Element {  
    view!{  
        cx,  
        <p>"You look nice today."</p>  
    }  
}

Adding a feature

Along with this affirmation we'd like to add some kind of lucky number for the day to go with this affirmation.

This example uses integers because they are a simple data type in Rust and a good entry point into variables and Rust's type system. It's a silly example, I know. ^.^

We are going to add some things to our code and change a few existing components. The process of splitting code up and moving it around to allow for different changes is known as Refactoring.

First, have a read through the result to see if you can spot the changes. We're using all of the same principles as before.

use leptos::*;  
  
fn main() {  
    mount_to_body(|cx| {  
        view! {  
            cx,  
           <NiceAffirmation />  
           <LuckyNumber />
        }  
    })  
}

#[component]  
fn NiceAffirmation(cx: Scope) -> Element {  
    view!{  
        cx,  
        <p>"You look nice today."</p>  
    }  
}
#[component]  
fn LuckyNumber(cx: Scope) -> Element {  
    view!{  
        cx,  
        <p>"Today's lucky number is 4"</p>  
    }  
}

Developer thoughts

Throughout these tutorials I will try to include the inner monologue that I have when thinking through a problem, in the hope that it'll help you all develop your own. I also hope that the simplicity of the steps will help keep you focused when trying to work through your own problems. Don't get too far ahead of the next step in your mind. Keep things simple and break them down into small improvements. You do not need to completely solve the problem in one go. Write, review, revise, repeat.

The steps to get to the above code are as follows:

  1. I need to add a new component for the LuckyNumber, so I'll write the component with a lucky number.
  2. I need to add the component to the web page. I could make a new component called MorningGreeting, which has a NiceAffirmation and a LuckyNumber, but I stopped myself. This extra component would add complexity without adding anything beneficial just yet. I do not need to group these two Leptos components. They do not need to be separated from anything else. This is an important lesson: do not prematurely cut your code apart and make things too complicated. As a solution I'll just add the LuckyNumber component to my main function's view.
  3. I can't change the value of the lucky number. Hooray, I've outlined an improvement and a next task. I need to find a way to be able to provide a number to my component. My component needs a parameter for the lucky number which I can provide as an argument (value).

Tokens and values in view! components

We've established that we need to take the 4 and make it something that can change. We need to add a token, like a symbol, as a placeholder. We need a way of saying "use whatever we're calling the_lucky_number here."

In the view! macro we know that text input needs to be encapsulated by "..." quotes.

Values need to be encapsulated by {...} curly braces.

We'll update the following line:

<p>"Today's lucky number is 4"</p>

To look like this:

#[component]
fn LuckyNumber(cx: Scope) -> Element {
    view!{
        cx,
        <p>"Today's lucky number is " {the_lucky_number}</p>
    }
}

Note that the quoted text no longer has the number 4. Importantly, note that the token we've added after the string is encapsulated by curly braces. There's a space between the string's closing quote and the token's opening curly brace. This space will not be printed; it's just for ease of reading for developers.

But there's a problem. We've used the_lucky_number (an idea/thing/noun) but we haven't defined what this idea refers to. Rust's compiler and our application don't understand the idea. We know it because it's in our mind, but we need to share it with the application. Writing a program is a lot like explaining something to a person who has no prior knowledge or context to understand what you're talking about. We need to define what we're talking about and what we mean.

Aside: We use shared context a lot in our lives without even knowing it. We have our own language, even slang and colloquialisms, that we use without even thinking about it. We may say, "Hey, can you put this bag in the bin?" Someone might think, "bin in my mind is defined as the garbage and they want this to be thrown out," and another might think, "by bin they mean that basket over there and they want me to put this in storage." These are vastly different outcomes! Programming is tricky because we need to be aware of how others (in this case, the computer) will interpret the meaning of the language we use. You'll also find that being aware of the importance of context, and how it impacts the decoding and interpreting of meaning, will make you a better communicator and will help you understand others by thinking about the context they're assuming you have when interpreting their messages.

To solve this missing and undefined context we'll write a statement that explicitly states what we mean by the_lucky_number.

Rust's syntax is very intuitive for this.

let the_lucky_number = 42;

Now Rust knows exactly what we mean when we say the_lucky_number. In this line we're telling the compiler, "Hey Rust, let the_lucky_number (the idea of a thing we're referring to as the_lucky_number) be assigned the value 42." We can actually add even more specificity to this, to tell the compiler what type of number it is.

let the_lucky_number: i32 = 42;

In the above we've added a type to the noun. The pattern is as follows:

let the_name_of_the_thing : the_type = the_value ;

We've said, "let the_lucky_number be an integer that is 32 bits in size (i32) with a value of 42". The Rust compiler will do its best to infer (figure out) the type if you don't explicitly state it. Rust will also tell you there's a problem if you've tried to assign a value that isn't a valid 32-bit integer.

The compiler will infer that when you say 42 you don't mean a text string with the characters 42, nor do you mean 42.0 (a floating point number).
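
A quick illustration of inference at work (not part of our app):

fn main() {
    let inferred = 42;      // Rust infers i32 here by default
    let explicit: i32 = 42; // the same thing, spelled out

    // The next line would not compile: the literal doesn't fit in an i32.
    // let too_big: i32 = 3_000_000_000;

    println!("{} {}", inferred, explicit);
}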

Our updated function isn't fully there yet, but we are able to place a number, known to Rust as an integer, into the view! template.

#[component]
fn LuckyNumber(cx: Scope) -> Element {
    let the_lucky_number: i32 = 42;
    view!{
        cx,
        <p>"Today's lucky number is " {the_lucky_number}</p>
    }
}

Rust's Type System

A mental model for understanding types

Specifying the type of something is the same as specifying the range (a group) of possible values. If we specify bool (a boolean value) as a type, the possible values are true and false: a single bit of information.

When we state i32 as the type for the_lucky_number, we're telling the Rust compiler that its value must be between -2,147,483,648 and 2,147,483,647. These are the smallest and largest numbers that you can represent with a sequence of 32 bits (zeros and ones) interpreted as a single signed number.
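
You don't need to memorize those bounds; Rust exposes them as constants, which you can print for yourself:

fn main() {
    println!("{}", i32::MIN); // -2147483648
    println!("{}", i32::MAX); // 2147483647
}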

The importance of a value's size

The really neat thing about computers and programs is that at the end of the day, everything is a sequence of zeros and ones. All of the things we're writing eventually get turned into bits laid out in memory.

The really brain breaking thing here—don't dwell on it too much—is that functions are also all turned into zeros and ones!

When we say that an i32 is a sequence of bits, interpreted as a single number, we mean just that. The application, under the hood, knows that 32 bits should be grabbed from memory and interpreted as a binary number. Imagine if in that same sequence of zeros and ones you had two 16-bit i16 numbers. They would take up the same amount of space in memory (16 x 2 = 32 bits) but they are not a 32-bit number!
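
If you'd like to see this for yourself, std::mem::size_of reports how many bytes a type occupies (a small aside, not part of our app):

use std::mem::size_of;

fn main() {
    println!("{}", size_of::<i32>());      // 4 bytes, i.e. 32 bits
    println!("{}", size_of::<[i16; 2]>()); // also 4 bytes, but two separate numbers
}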

Our application needs to know how many bits to pick-up and read sequentially to interpret as a value. It also needs to know how much space (how many bits) are available to store data for that type.

This is one of the main features/benefits of the Rust programming language and how it lets us write safe programs. Knowledge of the size of a type allows us to safely read and write to memory.

Rust will always make sure that we can't have a situation where two 16-bit numbers get written into a block of memory reserved for a 32-bit number, and vice versa.

I realize this is complicated. Rust takes care of all of this for us. But it's important to know why we need to specify the type of a value throughout Rust, whereas other languages often don't care.

Types are value constraints

Rust's type system adds constraints based on size, but it also adds them based on capability/use. We'll learn more about that later, but it's important that the idea be introduced.

At the end of the day, an easy mental model to keep is that types are constraints. An untyped value could be literally any size, supporting any functionality.

It's akin to someone saying, "I have a thing." You don't know if that thing is a sandwich that can be eaten, if that thing is a feeling, or if that thing is a surprise party being thrown for your birthday. As you can imagine, writing a program where any idea could be any type can be tricky. We'd need to keep those types in our minds so that we don't inadvertently try to do something with a "thing" that can't be done to or with it.

When I think about types, I think about them as a list of possible values.

If the type is a bool, its possible values are true and false. I can deal with that! If the type is i8, then I know that the value will be a number between -128 and 127.

And that really is the important thing about types as constraints for Rust. Rust wants all types to be known (or to be inferable/figure-out-able) so that there are no surprises. Rust's compiler will actually highlight spots where it sees that you've accounted for some of the possible values, but not all of them. It doesn't want you to be surprised. The Rust compiler is so nice.
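
A tiny example of the compiler keeping us honest. If we match on a bool and forget one of its possible values, Rust points out the missing case instead of letting us be surprised at runtime (illustrative snippet, not part of our app):

fn main() {
    let lucky: bool = true;
    match lucky {
        true => println!("Lucky!"),
        // Remove this arm and the compiler will refuse to build,
        // telling us the `false` case isn't covered.
        false => println!("Not this time."),
    }
}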

Leptos Component Properties

What we know

  • Components are created with specific function definitions and a #[component] function annotation.
  • Variables can be injected into view! macro templates

What we'll learn

  • How to pass values to components

The Lesson

In the previous lesson we created a component with a number, but that number is hard coded. It is static and can not change.

#[component]
fn LuckyNumber(cx: Scope) -> Element {
    let the_lucky_number: i32 = 42;
    view!{
        cx,
        <p>"Today's lucky number is " {the_lucky_number}</p>
    }
}

If we've played around with HTML, or recall from earlier lessons, we might remember that HTML elements have properties, or key-value pairs of data. For example, in <h1 class="fancy">Lah dee dah</h1> we have a heading 1 element which has a class with the string value "fancy". Input elements provide a more data driven example in that <input type="number" value="42" /> is an element that has a value property, with a value of 42.

What we're going to focus on is being able to write something like <LuckyNumber the_lucky_number=42 />, actually providing it a number! We'll pass the value into the component as a property, the same way we would pass a value (argument) to a function.

Step 1: Updating the component function to accept an external value as a property

We need to move our noun the_lucky_number "up and out" of our component function. It needs to be a requirement of the component. We'll need someone else to provide its value for the component to work. To do this, we'll list it as a function parameter and remove the let statement where we define its value.

The following:

#[component]
fn LuckyNumber(cx: Scope) -> Element {
    let the_lucky_number: i32 = 42;
    view!{
        cx,
        <p>"Today's lucky number is " {the_lucky_number}</p>
    }
}

Turns into this:

#[component]
fn LuckyNumber(cx: Scope, the_lucky_number: i32) -> Element {
    view!{
        cx,
        <p>"Today's lucky number is " {the_lucky_number}</p>
    }
}

Note how we've extracted the middle bits of our let line, moving the_lucky_number: i32 into the function's parameter list. The name of the parameter is listed, followed by a colon and the type of value that it's allowed to be.

It's worth the reminder that variable names are written in snake_case by convention.

Step 2: Update component props to pass a value to a component

Our main function had a view! macro template with <LuckyNumber /> in it. We've introduced the idea of a property called the_lucky_number in our component's definition, so we can make use of it here. We can add the property, with the same parameter name we used in the component, and assign a value to it.

<LuckyNumber the_lucky_number=32/>

The updated main function now looks like this:

use leptos::*;  
  
fn main() {  
    mount_to_body(|cx| {  
        view! {  
            cx,  
            <NiceAffirmation />  
	        <LuckyNumber the_lucky_number=32 />  
        }  
    })  
}

Leptos component dynamic content separation

What we know

  • Leptos components can accept properties and use them in their view! templates.

What we'll learn

  • How Leptos' components are able to differentiate between dynamic and static content in their templates

The Lesson

In the previous lesson we were able to pass a value as an argument to a Leptos component's property. The Leptos component's signature specifies this property as a function parameter. The view! macros are expanded at compile time, and when the application starts, the resulting code creates templates. This happens once on startup. Leptos then updates the web page's document object model (DOM) through the mount_to_body function call.

The properties passed to the Leptos component have the ability to impact how the component is rendered. In the following example, the variability is visible as text inside the Leptos component's template paragraph tags.

This is all well and good, but you might notice something interesting when we look at the HTML.

Observe the following Rust code, creating and using our Leptos component.

use leptos::*;  
  
fn main() {  
    mount_to_body(|cx| {  
        view! {  
            cx,  
	        <LuckyNumber the_lucky_number=12 />  
        }  
    })  
}  
  
#[component]  
fn LuckyNumber(cx: Scope, the_lucky_number: i32) -> Element {  
    view!{  
        cx,  
        <p>"Today's lucky number is " {the_lucky_number}</p>  
    }  
}

If we run trunk serve from our Rust project's directory, we'll get some prompts about our web server running. Opening the page up reveals the following HTML.

<p>Today's lucky number is <!---->12</p>

Surprisingly, the <LuckyNumber the_lucky_number=12 /> has completely dissolved away. This might seem shocking, given that our main function says we're mounting the LuckyNumber Leptos component to the body with a call to the mount_to_body function. The reason is that a view! template is not HTML.

There are a few things we'll need to go over to give you a really solid explanation of how this works and how Leptos handles dynamic content.

Leptos components and templates

Leptos components are really interesting. Their view! templates all distill down to HTML. We previously talked about HTML elements which come to life as an HTML tag, and when parsed into the document object model (DOM), become DOM nodes. This is a fancy way of saying that when we write HTML, the browser reads it, tries to make sense of it, and creates a nested, tree-like structure that reflects the hierarchy of the elements.

Text, even though it's not an HTML element, can also be interpreted and added to the DOM. To do this, a browser creates a text node.

Leptos adds HTML comments, encapsulated by <!-- and -->, to force the web browser to break what seems like contiguous text into multiple text nodes.

With this in mind, the HTML output that we saw before:

<p>Today's lucky number is <!---->12</p>

Creates the following paragraph node with two child text nodes.

	<p>
		Today's lucky number is <-- this is a text node
		12                      <-- this is a text node
	</p>

And we can see how it directly matches up with the view! template if we think about the static text string as being one text node, and the dynamic text, which will come from the_lucky_number's value, as another.

	<p>
		"Today's lucky number is " 
		{the_lucky_number}
	</p> 

Leptos' ability to retain congruency of structure between the view! template and the HTML it yields allows Leptos to know exactly which text nodes or areas of the page are dynamic, or subject to change.

Aside: How Leptos components deviate from expected web behaviour and custom elements

Section pending ^.^

Loops and the <For /> view! macro tag

This article is in notes status and has not been reviewed or proofed.

What we know

What we'll learn

  • How to store data in collections
  • Intro to the array data type
  • The difference between &str and String
  • Intro to the vector data type
  • How to loop over a collection
  • How to loop over a collection with the <For /> view! macro tag

The Lesson

Setup our leptos-loops application

Follow the quick reference for setting up a client side leptos application.

Adding detail to our example with data

Let's make our example a bit more fun by listing off some great cat names.

use leptos::*;  
  
fn main() {  
    mount_to_body(|cx| view! { cx,  
        <h1>"Great cat names"</h1>  
        <ul>  
            <li>"Beans"</li>  
            <li>"Basil"</li>  
            <li>"Oliver"</li>  
        </ul>  
    })
}

Collections

We previously listed off our cat names and hard coded them into a template. It would be better to pull that data out into a data structure which we can then do work over (for each item in the structure) to generate the list items. I say better because we're assuming that these items are going to change. In this lesson we're imagining that the list of great cat names will grow over time. It is completely reasonable to write out a list and be literal if you don't expect things to change.

Arrays

Rust standard library documentation

One of the most basic data structures that exists in Rust is the array. It is a group of values of the same type, with a fixed quantity and therefore a fixed size in memory. We can assign an array to a variable by wrapping a set of values of the same type in square brackets, separating each item with a comma.

#![allow(unused)]
fn main() {
let cat_names = ["Beans", "Basil", "Oliver"];
}

Rust will infer (figure out) that the data type of cat_names is [&str; 3] (the items in the array are &str and there are 3 of them). The type signature for arrays uses square brackets, encapsulating the type of the items in the array, followed by a semicolon, and the size of the array (the quantity of items in it).

We could have written the following as well:

#![allow(unused)]
fn main() {
let cat_names: [&str;3] = ["Beans", "Basil", "Oliver"];
}

Fixed Length

An important thing to note is that arrays have a fixed length. You can not add an item to an array because Rust has blocked off an exact amount of space in memory for it. There is no extra space to store another item. This is part of how Rust maintains memory safety: it won't allow you to just dump your data into the spot after an array to make a "bigger array". See the sketch below.
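
For example, the following sketch will not compile if you uncomment the second line, because arrays simply have no push method:

#![allow(unused)]
fn main() {
let cat_names = ["Beans", "Basil", "Oliver"];
// cat_names.push("Olive"); // error: no method named `push` found for array
}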

Forced Type

The type of items needs to be the same for each item in an array. This is so that Rust knows how much memory to allocate to store the number of items. You can not do the following:

#![allow(unused)]
fn main() {
let cat_names: [&str;4] = ["Beans", "Basil", "Oliver"];
}

Rust's compiler will complain that the value you're assigning is [&str;3], because there are three items, while let cat_names: [&str;4] = tries to allocate space for four. At that point Rust won't know what is actually in that fourth slot. If it can't guarantee what's there, Rust's compiler won't let the application compile. As disciplined as we all think we are, it's easy to miss checking the data at that location in memory when we use it. You might think there's a &str there, but who knows! There are better ways to handle variable-length sequential groups of same-typed data. But if you did want to use an array, you could specify an Option type.

#![allow(unused)]
fn main() {
let cat_names: [Option<&str>;4] = [
	Some("Beans"), 
	Some("Basil"), 
	Some("Oliver"),
	None
];
}

It's important to note that there are still 4 items, and each item is of the same type.

&str

We wrote "Beans", "Basil", and "Oliver." All three of these would often be referred to as "string values" in other programming languages. They're a series of "characters." Oh! That sounds familiar. They're a series... they're a collection of the same type. It sounds a lot like an array doesn't it?! That's because they are! But they're not characters in a way that you might think.

Let's look at what these individual values actually are to get a complete understanding of what's going on here.

We know that at the end of the day, everything in a program has to be turned into a numerical value. If you think back to being a kid (or maybe you still are ^.^) you may have written coded messages, replacing letters with numbers, A becomes 1, B becomes 2, C becomes 3, and so on. Perhaps "Beans" is a series of characters that become a series of numerical values. There are 5 characters, so maybe "Beans" is actually an array of 5 x 8bit integers, with the type signature [u8;5].

Well, it turns out that there is an older system called the ASCII character set that works like this. If we were using ASCII we could represent Beans as an array of 7-bit integers. The ASCII character set contains 128 characters, because seven binary bits allow us to represent values from 0 (0000000) to 127 (1111111).

B,  E,  A,  N,  S
66, 69, 65, 78, 83

ASCII is pretty limited though. What if I wanted to use a beans emoji as the name! 🫘 How do I represent that in ASCII? I can't. This makes me sad. Thankfully, there's a system called Unicode that allows us to extend the available "character set".

In Rust, all text is required to be valid UTF-8, an encoding of Unicode. Rust uses str as a data structure to hold these UTF-8 "characters." I used quotes there because Unicode is more like a virtual structure of characters floating in space, and characters are "code points" in that cloud of expressive units.

You can think of str as a string. The str type hides the complexity of Unicode so that you can just write your program, while still being able to use all 1,112,064 valid code points.

In summary, str is a fixed-length data structure of code points. It is immutable. When we write &str we're referring to a string slice in Rust, which is a reference to a set of "characters," but again, the characters are code points. ^.^
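
If you'd like to see the difference between bytes and code points for yourself, here's a small sketch you can run (the byte counts come from how UTF-8 encodes each code point):

#![allow(unused)]
fn main() {
let name = "Beans";
println!("{}", name.len());           // 5 bytes, one per ASCII character
println!("{}", name.chars().count()); // 5 code points

let emoji = "🫘";
println!("{}", emoji.len());           // 4 bytes in UTF-8
println!("{}", emoji.chars().count()); // 1 code point
}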

Vectors

Rust standard library documentation

Vectors are like arrays in that they are a sequential group of the same data type, but differ in that they are of variable length. You can create a new vector with the Vec struct's new static method.

#![allow(unused)]
fn main() {
let cat_names = Vec::new();
}

At this point there is no data type assigned to the Vector. We can specify the type in advance.

#![allow(unused)]
fn main() {
let cat_names: Vec<&str> = Vec::new();
}

Rust will infer the type if we use the from static method using an array.

#![allow(unused)]
fn main() {
let cat_names = Vec::from(["Beans", "Basil", "Oliver"]);
}

Or we can let Rust infer the type. It will pick up the type of the first item stored in the vector and use that as the requirement for all future additions. If you want to edit a vector you'll need to make it mutable though.

#![allow(unused)]
fn main() {
let mut cat_names= Vec::new();  // no internal type
cat_names.push("Beans");        // now the internal type is &str
cat_names.push("Basil");  
cat_names.push("Oliver");
}

Vectors can also be made using the handy vec! macro:

#![allow(unused)]
fn main() {
let cat_names= vec!["Beans","Basil", "Oliver"];
}

Updating our HTML to use collections

Let's update our example application to store our cat names in a Vec using the vec! macro.

use leptos::*;  
  
fn main() {  
    
    let cat_names = vec!["Beans","Basil", "Oliver"];
    
    mount_to_body(|cx| view! { cx,  
        <h1>"Great cat names"</h1>  
        <ul>  
            
        </ul>  
    })
    
}

We need to fill in that gap in the unordered list tags <ul>...</ul> with our list items.

Let's imagine that there's a function called list_names which will give us a Vector of Views. To make views, we need a context, and to list names we actually need the names. This tells us the two arguments to the function. We need to wrap the call in curly braces in the view! template so that it gets evaluated.

fn main() {  
  
    let cat_names= vec![  
        "Beans",  
        "Basil",  
        "Oliver"  
    ];  
  
    mount_to_body(|cx| view! { cx,  
        <h1>"Great cat names"</h1>  
        <ul>  
            {list_names(cx,cat_names)}  
        </ul>  
    })}

Now we need to write the function. I like to start with the signature. Let's make sure we know what it's accepting and what we want to get out of it. The body of the function is where we connect the dots.

#![allow(unused)]
fn main() {
fn list_names(cx: Scope, cat_names: Vec<&str>) -> Vec<View> {
	// STUFF HERE
}
}

If we want to do something to a Vector in Rust we'll need to turn it into an iterator. We can call the method into_iter() to turn our current evaluated value into that special iterator.

#![allow(unused)]
fn main() {
	cat_names
		.into_iter()
}

Next we want to do something to each item. We want to change each &str into a view. map is a method that can be called on an iterator; it accepts a closure which it applies to each item. It's a function in a mathematical sense!

Our first step is turning each one of these into a view.

#![allow(unused)]
fn main() {
cat_names  
    .into_iter()  
    .map(  
        move |name| view!{cx, <li>{name}</li>}
	)
}

Note that if we have a single line of code, we don't need to add curly braces to define the body of a closure. The move keyword tells the closure to take ownership of (or copy) the values it captures, like cx, rather than borrowing them. name itself is the closure's parameter; using it inside the view! macro moves it out of the iterator item and into the view.

The second step is turning these into an actual View.

#![allow(unused)]
fn main() {
cat_names  
    .into_iter()  
    .map(  
        move |name| view!{cx, <li>{name}</li>}  
	)
	.map(  
	    move |li| li.into_view(cx)  
	)
}

We now have an iterator with Views that we need to convert into a Vector of Views. We can call collect on the iterator to turn it back into a "collection" which is our Vector.

#![allow(unused)]
fn main() {
fn list_names(cx: Scope, cat_names: Vec<&str>) -> Vec<View> {  
    cat_names  
        .into_iter()  
        .map(  
            move |name| view!{cx, <li>{name}</li>}  
        )
		.map(  
            move |li| li.into_view(cx)  
        )        
        .collect()  
}
}

We don't have a semicolon at the end of collect(). That makes the result of collect() the final statement/expression and the returned value of the function.

At this point we're going to get some issues with lifetimes. Rust will complain because we're passing a Vector of references into a function, and getting back a Vector of Views that used the reference. Rust can't guarantee that the references to those string slices will live as long as the Vector of Views that they generate through this function.

We can specify a static lifetime for the string slices which is the same lifetime as the Views, solving our problem.

#![allow(unused)]
fn main() {
fn list_names(cx: Scope, cat_names: Vec<&'static str>) -> Vec<View> {  
    cat_names  
        .into_iter()  
        .map(  
            move |name| view!{cx, <li>{name.clone()}</li>}  
        )        
        .map(  
            move|li| li.into_view(cx)  
        )        
        .collect()  
}
}

Here's what our final application looks like:

use leptos::*;  
  
fn main() {  
  
    let cat_names= vec![  
        "Beans",  
        "Basil",  
        "Oliver"
    ];  
  
    mount_to_body(|cx| view! { cx,  
        <h1>"Great cat names"</h1>  
        <ul>  
            {list_names(cx,cat_names)}  
        </ul>  
    })}  
  
fn list_names(cx: Scope, cat_names: Vec<&'static str>) -> Vec<View> {  
    cat_names  
        .into_iter()  
        .map(  
            move |name| view!{cx, <li>{name.clone()}</li>}  
        )        
        .map(  
            move|li| li.into_view(cx)  
        )       
         .collect()  
}

What if static lifetimes aren't an option?

If you can't use static lifetimes, you can convert the &str (string slices) into owned strings, of type String.

In the example below I've used three different approaches to converting a string slice &str into a String: you can call to_string() on the slice's value, you can call the String::from static method, or you can call into().

Then we need to update the list_names parameter type from Vec<&'static str> to Vec<String>.

use leptos::*;  
  
fn main() {  
  
    let cat_names = vec![  
        "Beans".to_string(),  
        String::from("Basil"),  
        "Oliver".into()  
    ];  
  
    mount_to_body(|cx| view! { cx,  
        <h1>"Great cat names"</h1>  
        <ul>  
            {list_names(cx,cat_names)}  
        </ul>  
    })}  
  
fn list_names(cx: Scope, cat_names: Vec<String>) -> Vec<View> {  
    cat_names  
        .into_iter()  
        .map(  
            move |name| view!{cx, <li>{name.clone()}</li>}  
        )        
        .map(  
            move|li| li.into_view(cx)  
        )        
        .collect()  
}

Inline closures

In the above example we used a function to generate the Vector of Views. Keep in mind that this is happening in a closure that is being passed to the mount_to_body function. The scope variable cx does not exist outside of it! As a result, we can not build views outside of the mount_to_body closure. If we created an app component we would be in a state where the scope exists, which opens up cleaner syntax.

use leptos::*;  
  
fn main() {  
    let cat_names = vec![  
        "Beans",  
        "Basil",  
        "Oliver"  
    ];  
  
    mount_to_body(|cx| view! { cx,  
        <ListNames cat_names/>  
    })}  
  
#[component]  
fn ListNames(cx: Scope, cat_names: Vec<&'static str>) -> impl IntoView {  
  
    let list_items: Vec<_> = cat_names  
        .into_iter()  
        .map( move |name| view!{cx, <li>{name}</li>} )  
        .collect();  
  
    view! {cx,  
        <h1>"Great cat names"</h1>  
        <ul>  
            {list_items}  
        </ul>  
    }
}

Note, you can not inline the cat_names.into_iter()... chain in place of {list_items} in the view! macro because of how macros are processed. The variables need to be evaluated in advance in this case.

A subtle but important difference in the above example is that we have a Vector of Leptos HtmlElements. We don't have to concretely specify their type, which allows us to set the type of list_items to Vec<_>, meaning a Vector of... it doesn't matter, because we never need to name the concrete type. When we pass the Vec<HtmlElement<_>> into the view! macro via {list_items}, the macro only checks that HtmlElements are valid values. We can skip declaring the concrete types in this situation.

By contrast, we needed a concrete type for a function definition's return type, which is why we used Vec<View> earlier. It wouldn't be possible for us to write Vec<HtmlElement<_>> as the return type.

Tradeoffs with using iterators without keys

This is a fine approach for simple applications, but it starts to break down when you're using signals and are concerned about performance. If cat_names were a signal (a read signal which updates views when its value changes), we'd end up re-rendering the whole list of names every time. This isn't ideal.

Using the <For /> view! macro tag

The view! macro has a helper called <For /> that will do a lot of this legwork for us. The benefit of using <For /> is that Leptos will associate a key with each item's output. The key allows Leptos to target granular updates and avoid re-rendering the whole list.

There are three properties that we must assign for the <For /> tag.

  1. each: A closure that returns the collection. The collection must support the ability to be converted into an iterable. Vectors and arrays work perfectly fine here.
  2. key: A closure that will return a value that can be used as an identifier for a given item
  3. view: A closure that will return the view for a given item.

Ownership can be tricky with <For />. We'll need to clone the names for the iteration so that we can guarantee that the references don't go away. We need to do something similar for the key closure. We need to clone the returned value. If we didn't, the value would get dropped.

use leptos::*;  
  
fn main() {  
    let cat_names = vec![  
        "Beans",  
        "Basil",  
        "Oliver"  
    ];  
  
    mount_to_body(|cx| view! { cx,  
        <ListNames cat_names/>  
    })
}  
    
#[component]  
fn ListNames(cx: Scope, cat_names: Vec<&'static str>) -> impl IntoView {  
  
    view! {cx,  
        <h1>"Great cat names"</h1>  
        <For  
            each={ move || cat_names.clone()}  
            key={ |name| name.clone()}  
            view={ 
	            move |name| {  
	                view! {  
	                    cx,  
	                    <li>{name}</li>  
	                }            
				}     
			}   
		/>    
	}
}

With more complex data structures

use leptos::*;  
  
#[derive(Clone)]  
struct CatName {  
    name: String,  
    rating: u8,  
}  
  
fn main() {  
    let cat_names = vec![  
        CatName { name: "Beans".to_string(), rating: 1 },  
        CatName { name: "Basil".to_string(), rating: 2 },  
        CatName { name: "Oliver".to_string(), rating: 3 },  
    ];  
  
    mount_to_body(|cx| view! { cx,  
        <ListNames cat_names/>  
    })
}  
  
#[component]  
fn ListNames(cx: Scope, cat_names: Vec<CatName>) -> impl IntoView {  
    view! {cx,  
        <h1>"Great cat names"</h1>  
        <For  
            each={ move || cat_names.clone()}  
            key={ |cat_name| cat_name.rating}  
            view={  
               move |cat_name| {  
                   view! {  
                       cx,  
                       <li id={cat_name.rating}>
	                       {cat_name.name}
					   </li>  
                   }    
			   }
			} 
		 />   
	 }
 }

Conditional display and the <Show> view! macro tag

This article is in notes status and has not been reviewed or proofed.

What we know

What we'll learn

  • Techniques for conditionally displaying data
  • Using the <Show> view! macro tag

The Lesson

Setup our leptos-loops application

Follow the quick reference for setting up a client side leptos application.

The idea of conditionals and control flow

Programs take in data, process it or work over it, and yield a result. This result may be a change to the data which is returned, or some action impacting the world outside of the application like writing a file, sending an email, etc.

It's common to have branches in applications where the result of running the application may change depending on its input or the conditions in which it is run.

(Aside) Programs as Functions: I say application but this applies to functions as well. Applications are ultimately one large function with complex inputs and outputs. The idea of fn main() hints at the truth behind the fractal nature (repetition of a pattern at different scales as you look into something) of programs as functions.

To help solidify your understanding, you can think of these two situations in the context of these examples:

  1. Linear: Like a regular book or story based video game. You read it or play it once and the result is always the same. It is consistent.
  2. Branching: Like a choose your own adventure book or role playing video game where the actions you take dictate the outcome.

As you can imagine, having variability of output or result based on these conditions, with the ability to switch paths, can add complexity to your application. The term cyclomatic complexity refers specifically to this: the number of independent paths through your code. We try to keep the number of branches as low as possible, because functions with linear behaviours from input to output are easier to reason about, are more predictable, and as a result are less prone to bugs.

In this lesson we will be specifically looking at how to create branches in our code, and as a result, in our UI.

If statements

Rust provides a variety of control flow syntax to tell the compiler which parts of our code should be used, or which path to take through it as it runs. The most basic syntax for this uses the keyword if followed by an expression that evaluates to a boolean value, which is true or false.

The syntax is very simple:

#![allow(unused)]
fn main() {
let loves_cats = true;  

if loves_cats {  
    leptos::log!("Hooray");
}
}

The pattern is if followed by a condition which we call a predicate. In this example it is loves_cats and then a scope which will run "predicated" on (depending on) the condition being true.

Conditional values in the view! macro

Let's see if we can use this in our view! macro to conditionally display some text. We want our message to print out if a condition is true. You can imagine that your application's conditions would be more dynamic and interesting. We're hard-coding the condition here so that the example is consistent and easy to follow.

We want to do something like this, but we only want the message "Hooray. They love cats!" to print if they actually do.

use leptos::*;  
  
fn main() {  
    mount_to_body(|cx| {  
        view! {  
            cx,  
            "Hooray. They love cats!"  
        }    
	})
}

We'll refactor this a bit, moving the string out of there into a variable.

use leptos::*;  
  
fn main() {  
    mount_to_body(|cx| {  
        let message = "Hooray. They love cats!";
        view! {  
            cx,  
            {message}  
        }    
	})
}

But we want that to be changed on a condition. We want the message to be empty, but if they love cats, then it should contain our "Hooray. They love cats!" text.

use leptos::*;  
  
fn main() {  
    mount_to_body(|cx| {  
        
        let loves_cats = true;
        
        let message = "";
        
        if loves_cats {
	        let message = "Hooray. They love cats!";
        }
        
        view! {  
            cx,  
            {message}  
        }    
	})
}

This looks like it should work, but it doesn't.

Rust's compiler is smart. It's very smart. It wants to make sure that we don't leave memory allocated that isn't being used. To make sure that unused variables are freed up safely, it follows a rule.

Any variables defined in a scope (encapsulated with curly braces {...}) will have their memory freed (Rust's compiler calls this dropping) at the end of that scope. Because we wrote let again inside the if block, we didn't reassign the outer message; we declared a brand new message that shadows it, lives only inside the block, and is dropped when the block ends. The only value that escapes a scope is whatever is written as its last expression, which is the evaluated value of the whole scope/code block.

Here's what's happening in secret.

#![allow(unused)]
fn main() {
let message = "";
        
if loves_cats {
	// `let` declares a brand new `message`
	// that shadows the outer one
	let message = "Hooray. They love cats!";
	// We're at the end of the scope.
	// The inner `message` is dropped here,
	// and the outer `message` is still "".
}
}

We can solve this problem by making message mutable, adding the mut keyword after let.

We can then reassign the existing message inside the if statement's scope, without writing let again. Note that it's important that message be assigned an empty string value first so that it is always initialized.

#![allow(unused)]
fn main() {
// make message mutable
let mut message = "";
        
if loves_cats {
	// No `let` this time: we reassign the
	// existing `message` rather than declaring
	// a new one.
	message = "Hooray. They love cats!";
	// We're at the end of the scope.
	// `message` was defined outside the `if`,
	// so it is not dropped here and still
	// holds the new value afterwards
}
}

We now have a conditional that works, displaying a message if the predicate is true in our UI.

use leptos::*;  
  
fn main() {  
    mount_to_body(|cx| {  
  
        let loves_cats = true;  
        let mut message = "";  
        if loves_cats {  
            message = "Hooray. They love cats!";  
        }  
  
        view! {  
            cx,  
            {message}  
        }
	})
}

If in the view! macro

What if we wanted to bring this inline, in our view! macro?

Here's where things get interesting.

There is an interesting little trick here with Rust's if statements. If we recall, Rust is an expression language and a block evaluates to its last expression. You can think of it as the "final word."

In the example below, the block of code for the if statement evaluates to a unit type. It's almost like Rust's take on null, written as ().

#![allow(unused)]
fn main() {
let loves_cats = true;  

if loves_cats {  
    let message = "Hooray. They love cats!"; 
    //                                     ^
    // the semicolon means this isn't the
    // final expression; the statement has ended.
    // The block's final word is nothing,
    // which is represented as (), the unit type
}
}

So really, the whole if statement ends up evaluating to a unit type.

#![allow(unused)]
fn main() {
if loves_cats {  
    let message = "Hooray. They love cats!"; 
    // Invisible unit type gets added here 
    // ()
}
}

If we drop the semicolon, it will evaluate to a &str, the last expression in the scope!

#![allow(unused)]
fn main() {
if loves_cats {  
    "Hooray. They love cats!" 
}
}

Let's replace our message variable with the if conditional written inline:

use leptos::*;  
  
fn main() {  
    mount_to_body(|cx| {  
  
        let loves_cats = true;  
  
        view! {  
            cx,  
            {
				if loves_cats {  
                    "Hooray. They love cats!"  
                }  
            }
		} 
	})
}

Now this looks like it should work! But it doesn't, and there's a good reason for it. The if statement evaluates to a &str (string slice) value if it is true. But what about when it's false? In that case it would evaluate to the unit type, and that isn't valid input for the view! macro.

We can add an else block with an empty string slice to solve this problem.

use leptos::*;  
  
fn main() {  
    mount_to_body(|cx| {  
  
        let loves_cats = true;  
  
        view! {  
            cx,  
            {
				if loves_cats {  
                    "Hooray. They love cats!"  
                } else {
	                ""
                }
            }
		} 
	})
}
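
Because the whole if/else is an expression, another option is to evaluate it ahead of time and pass the result into the template. This is just a small variation on the same example:

use leptos::*;  
  
fn main() {  
    mount_to_body(|cx| {  
  
        let loves_cats = true;  
        let message = if loves_cats {  
            "Hooray. They love cats!"  
        } else {  
            ""  
        };  
  
        view! {  
            cx,  
            {message}  
        }  
	})
}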

You can also replace this with a match statement for a bit more clarity.

use leptos::*;  
  
fn main() {  
    mount_to_body(|cx| {  
  
        let loves_cats = true;  
        view! {  
            cx,  
            {               
	             match loves_cats {  
                    true  => "Hooray. They love cats!",  
                    false => ""  
                }  
            }   
		}    
	})
}

The <Show> tag in the view! macro

Leptos provides a conditional tag to make this a bit more straightforward. The <Show> tag also offers some optimizations: it will not re-process a branch that is already active if nothing has changed, whereas raw if statements will evaluate their predicate and their success scopes each time. More details can be found in the official documentation.

The <Show> tag requires two properties, both of which are closures.

  • when: A closure for the predicate. It will be run to see if it is true or not. If true, the children of the <Show> tag will be printed.
  • fallback: A closure that returns what to display if the predicate is false. This is like the else branch.

use leptos::*;  
  
fn main() {  
    mount_to_body(|cx| {  
  
        let loves_cats = true;  
        view! {  
            cx,  
            <Show  
                when=move || loves_cats  
                fallback=|_| "Give it time"  
            >  
                "Hooray. They love cats!"  
            </Show>  
        }    
	})
}

It's important to note that this can also be written with curly braces in the closures to make them more clear. Here's an example for the sake of familiarity.

#![allow(unused)]
fn main() {
<Show  
	when=move || { loves_cats }  
	fallback=|_| { "Give it time" }
> 
	"Hooray. They love cats!"  
</Show>  
}

The fallback closure is passed a scope (context) along with it, allowing you to return views.

use leptos::*;  
  
fn main() {  
    mount_to_body(|cx| {  
  
        let loves_cats = true;  
        view! {  
            cx,  
            <Show  
                when=move || loves_cats  
                fallback=|cx| view!{cx,"Give it time"}  
            >
				"Hooray. They love cats!"  
            </Show>  
        }    
	})
}

Tables and data sets

Reserved Tags

Some tags are used by Leptos for special functions.

  • <Show>
  • <Suspense>
  • <Transition>

leptos_router

  • <Router>
  • <Routes>
  • <Route>
  • <Outlet>

leptos_meta

  • <Html>
  • <Body>
  • <Link>
  • <Meta>
  • <Script>
  • <Style>
  • <Stylesheet>
  • <Title>

Introduction

Witnessing events

Leptos components updating from events

What we know

  • Leptos components are templates that can be added to a web page's document object model DOM as nodes, with separate text nodes for dynamic data used in text strings.
  • Leptos components can have functions run in response to events on a given component.
  • wasm-bindgen and other supporting crates work as bridges between our Rust code and the browser's JavaScript runtime.

What we'll learn

  • Updating the DOM in response to events.

The Lesson

In our previous example we created a silly Leptos component that displays some text and has a button that, when clicked, echoes a number to the console.

We'll take this example and, instead of echoing 42 to the console, we'll replace the lucky number with it.

use leptos::*;  
  
fn main() {  
    mount_to_body(|cx| {  
        view! {  
            cx,  
            <LuckyNumber the_lucky_number=12 />  
        }  
    })  
}

#[component]  
fn LuckyNumber(cx: Scope, the_lucky_number: i32) -> Element {
	let whisper_in_the_console = |_|{  
	    web_sys::console::log_1(&42.into());
	};  
	view!{  
	    cx,  
	    <div>  
	        <p>"Today's lucky number is " {the_lucky_number}</p>  
	        <button on:click=whisper_in_the_console >
		        "Your Secret Lucky Number"
			</button>  
	    </div>  
	}
}

First, let's adjust the name of the event callback and put a placeholder into the event handler (or callback) body.

#![allow(unused)]

fn main() {
#[component]  
fn LuckyNumber(cx: Scope, the_lucky_number: i32) -> Element {
	let update_the_number = |_|{  
	    // This functionality is unknown.
	};  
	view!{  
	    cx,  
	    <div>  
	        <p>"Today's lucky number is " {the_lucky_number}</p>  
	        <button on:click=update_the_number >
		        "I need a more lucky number"
			</button>  
	    </div>  
	}
}
}

We know from previous lessons that Leptos component templates are set up statically when Leptos starts up. Given that this is the case, we may ask ourselves: how can we have dynamic content if the template is static?

The secret to solving this problem involves combining two features of Leptos, one which we've seen before and one which is new.

  1. How Leptos separates static and dynamic content
  2. Signals which can be converted into data

1) Separation of static and dynamic template components

We saw from the previous example that Leptos differentiates the static parts of our template from the parts that are variable. By doing this, it can zip together the data that changes with the static template that does not.

#![allow(unused)]
fn main() {
<p>
	"Today's lucky number is " {the_lucky_number}
</p>  
}

The above could be interpreted as follows:

[STATIC DATA] {DYNAMIC DATA} [STATIC DATA]
[<p>"Today's lucky number is "]  {the_lucky_number} [</p>]

By doing this Leptos can reuse the template while leaving holes, like fields in a form, for variable data.

2) Signals

The problem with our Leptos component is that the_lucky_number is a variable whose value is defined outside of its component template. Its value is provided when the whole system starts up and the component is mounted to the body, as shown in the fn main() function.

Unfortunately our variable the_lucky_number doesn't have an opportunity to be updated or changed. It's been used in our template and it has been consumed. Rust has some very interesting rules about data.

The idea of movement and scope

In a lot of programming languages, you can pass data into a function and then also use it elsewhere. In Rust, if you pass data into a function, it's considered to have been moved into the function. It has left the scope, or the space, in which you were operating.

For example, consider a time where I gave my friend a sandwich and asked them to paint a picture of it for me—it was a beautiful sandwich. If I gave the sandwich to my friend, I no longer have it. It's been moved into their hands. They may give it back to me, but until then, I won't have it. If you're working in Rust and see statements like, "such and such has moved," this is what that means.

If Rust can make a copy of the data, it'll do so to get around the issue of moved values, but that only happens with simple data types like numbers. There are exceptions and a lot to explain with what is called the copy trait, but we won't get into that here. The idea of 'copy' is important to how Leptos allows dynamic content though.
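
Here is a tiny sketch of the difference, using made-up helper functions purely for illustration. Passing a String moves it, while passing a number copies it:

#![allow(unused)]
fn main() {
fn paint_a_picture_of(sandwich: String) { /* ... */ }
fn double(number: i32) -> i32 { number * 2 }

let sandwich = String::from("a beautiful sandwich");
paint_a_picture_of(sandwich);
// println!("{}", sandwich); // error: `sandwich` was moved into the function

let lucky_number = 12;
double(lucky_number);
println!("{}", lucky_number); // fine: i32 is Copy, so we kept our own copy
}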

Knowing what we are actually moving

Let's go back to this sandwich example. Perhaps I don't want to relinquish my sandwich. I could provide a reference to it and my friend could look at it to make the painting. Though, they wouldn't be allowed to touch it. They are only allowed to look at it. In this case, I'm not losing my sandwich, but Rust will prevent me from changing it while someone is referencing it. Rust won't allow me to take a bite of my sandwich while I've told my friend they can look at it to make the painting, no matter how hungry I am. I could actually set up a plinth, place my sandwich on top, and allow a whole class of artisans to paint my sandwich.

Rust also allows me to loan out my sandwich by providing a mutable reference, but if I do that, I can't touch it, and no one else is allowed to reference it. It would be as if I told my friend, "You can paint my sandwich and organize the lettuce and tomato so that it makes a nice composition." No one could safely paint that sandwich because my friend might still be moving parts of it around.

References in Rust are created by adding an ampersand before a value. 42 is a number; &42 is a reference to that number. We can dereference, or follow the reference back to the original, by placing an asterisk before a variable containing a reference. We'll expand on this later and explain how you use references, and when to use them.
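
A minimal sketch of that syntax:

#![allow(unused)]
fn main() {
let lucky_number = 42;            // an owned value
let reference = &lucky_number;    // a reference to it (look, don't touch)
let copied_back = *reference;     // dereference to get at the value behind it
println!("{} {} {}", lucky_number, reference, copied_back);
}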

To summarize, the rules are:

  • We can move an owned value (the sandwich)
  • We can create and move one or more references to the sandwich
  • We can create one single mutable reference, but we can not also have regular references if we do

The borrow checker

The above two concepts are key parts of what we call the Rust borrow checker. The purpose of the borrow checker is to make sure that our system (application) has predictable access to data. To do this, it tracks where we move things and how we reference them, to guarantee that we haven't inadvertently written something stupid that will break our program or create security vulnerabilities. And trust me, we will write things like that. The borrow checker is your friend and asks you to do your best work. You will learn to appreciate how amazing it is in time.

Now that we know this, we can see how what felt like a simple problem to solve is actually pretty complicated. If we move a value into a component, it's gone. We can't update a value that doesn't exist as a result of some event. It might take some time to wrap your mind around this idea. It'll feel uncomfortable at first.

Leptos' solution

What we really need is some sort of special variable. We need something that we can put in the template which can be notified when its value changes, and something that can transparently act as its value.

Imagine if we had a warehouse of data that we could call and ask for data. "Hey, I need the value of aisle 2, bin 4." If we had the location of the data, we could always ask the warehouse for whatever is stored there.

Or what if we could ask them to store something and they'd do so, responding with its location in the warehouse. "Can you store this gigantic novelty taco beanbag chair for me?" we'd ask. "Sure, and it's in aisle 2, bin 5," they'd respond.

This is what signals do. Signals are a formalized way of communicating with the warehouse (which is the context/scope in Leptos) to store and retrieve data. When data changes, Leptos can follow where it is being used and update those usages accordingly.

I introduced the idea of Copy earlier because signals are actually indexes, storage positions in the context, which are copied as you use them. This allows you to move a signal into a closure which will be handling an event, while still using it in the view template.

Reactivity in action

To create a signal, we need to call the function create_signal() and provide a scope (or context) as the first argument, and the default value as the second. It returns a tuple, a set of two values, which we can immediately give names to so that we can use them in the scope of our function. The first part of the signal allows us to retrieve a copy of the value from the warehouse. The second part of the signal allows us to set the value at the signal's location.

#![allow(unused)]
fn main() {
let (value, set_value) = create_signal(cx, the_lucky_number);  
}

The above is called destructuring. We could also have written it in the long form, but that is actually harder to read and requires additional temporary assignments like lucky_number_signal.

#![allow(unused)]
fn main() {
let lucky_number_signal = create_signal(cx, the_lucky_number);  
let value = lucky_number_signal.0;
let set_value = lucky_number_signal.1;
}

Note that .0 and .1 are properties on the lucky_number_signal. They're indexes for the first and second component in the tuple.

Now that we have a signal, we can update our callback and move the set_value signal into it. Note the addition of the move keyword before the closure's pipes which encapsulate its parameters, and the underscore which denotes that it will be provided an argument when called, but that we won't be using it.

#![allow(unused)]
fn main() {
let update_the_lucky_number = move|_|{  
        set_value(42);  
    };  
}

And in the view template we can replace the previous value with our signal, which can be used to derive the value.

use leptos::*;  
  
fn main() {  
    mount_to_body(|cx| {  
        view! {  
            cx,  
            <LuckyNumber the_lucky_number=12 />  
        }  
    })  
}  
  
#[component]  
fn LuckyNumber(cx: Scope, the_lucky_number: i32) -> Element {  
    let (value, set_value) = create_signal(cx, the_lucky_number);  
  
    let update_the_lucky_number = move|_|{  
        set_value(42);  
    };  
    view!{  
        cx,  
        <div>  
            <p>"Today's lucky number is " {value}</p>  
            <button on:click=update_the_lucky_number >"Pick a better number"</button>  
        </div>  
    }  
}

The coolest part about this is that the signal is responsible for updating itself on the web page if its value changes. LuckyNumber doesn't run again to create a new template. Leptos updates that special little text node, where {value} is used.

Event handlers as props

Event Bubbling and Signal Generics

What we know

  • We can monitor activity in the browser by responding to events
  • We can add functions that run when events happen to Leptos components. These functions are called event handlers. Event handlers added to Leptos components are also added to their DOM node counterparts and connected behind the scenes with Leptos' use of wasm_bindgen making their use transparent.
  • The syntax for an event handler is similar to adding a property to a component, but with a prefix of on: followed by the event name. e.g. <LeptosComponent on:click=my_event_handler />.
  • Event handlers (event callbacks) are closures (functions that capture, or encapsulate, the values used inside them) that are assigned to a variable. e.g. let my_event_handler = |event|{ ... }.
  • The move keyword can precede a closure's parameters, indicating that variables used in the closure's body will be moved into the closure itself and removed from the current scope, as if they were passed into a function. Variables whose types support the Copy trait will automatically be copied and will still be available in the current scope.
  • Signal read and write components support Copy.

What we'll learn

  • How events can be captured in parent components
  • What generics are in Rust's type system at an introductory level

The Lesson

Caveat: The following lesson is intended to show you an overview of a pattern to respond to events which are emitted by a component's children. This is not a complete pattern. A description of the tricky spots exists at the end of this lesson.

We've established that the document object model (DOM) is a tree-like representation of DOM nodes: a browser's data structure containing information about what's on a web page. When events happen in a browser, the event is triggered at the lowest, most specific DOM node. That event will bubble up until it's handled or prevented from continuing. Bubbling up means that the original event will be given the opportunity to be handled by the originating element's parents, one at a time, until it reaches the top of the DOM tree.

If we take the following HTML:

<html>
	<body>
		<div id="application">
			<div class="button-container">
				<button>Click me</button>
			</div>
		</div>
	</body>
</html>

Clicking on the "Click me" button would create the initial event. on:click event handlers on this element will run first. Then, on:click handlers for the ".button-container" would run, and so forth.

This means that you can place handler logic on a parent component that has multiple children who emit events. For example, you could use this way of thinking to run a validation script on a form any time any input field is changed. Or, imagine if you wanted to capture any click as some form of analytics. You could set up a click handler in your main app which will capture all events that bubble up to it.

Using bubbled events to update Leptos component properties

The following is an example of how we can move the lucky number value's handler out of the component and into a new Leptos component we're calling RadApp.

To start, we'll create a new component called RadApp, add it to the mount_to_body view!, and setup our LuckyNumber component as a child.

use leptos::*;  
  
fn main() {  
    mount_to_body(|cx| {  
        view! {  
            cx,  
            <RadApp />  
        }  
    })  
}  
  
#[component]  
fn RadApp(cx: Scope) -> Element {  
    view!{  
        cx,  
        <LuckyNumber the_lucky_number=12 />  
    }  
}  
  
#[component]  
fn LuckyNumber(cx: Scope, the_lucky_number: i32) -> Element {  
    view!{  
        cx,  
        <div>  
            <p>"Today's lucky number is " {the_lucky_number}</p>  
            <button>"Pick a better number"</button>  
        </div>  
    }  
}

LuckyNumber has a button that we want to activate. That event will bubble up, so we can put the on:click handler on the Leptos component instead of on the button, like we did previously.

Let's add the on:click and we'll use the leptos log macro to write a message to the browser console.

#![allow(unused)]

fn main() {
#[component]  
fn RadApp(cx: Scope) -> Element {  
	let update_the_lucky_number = |_|{  
	  leptos::log!("We should be updating the lucky number");  
	};
	  
    view!{  
        cx,  
        <LuckyNumber on:click=update_the_lucky_number the_lucky_number=12 />  
    }  
}  
}

Rust's compiler may complain saying cannot find type MouseEvent in this scope followed by consider importing this struct:

#![allow(unused)]
fn main() {
use crate::web_sys::MouseEvent;
}

You can literally copy and paste this into your main.rs file right after use leptos::*.

Now we need to create our signal so that we can read and update the data over time. We need to register it in our scope.

#![allow(unused)]
fn main() {
#[component]  
fn RadApp(cx: Scope) -> Element {  
	let (value, set_value) = create_signal(cx, 12);
	let update_the_lucky_number = |_|{  
	  leptos::log!("We should be updating the lucky number");  
	};
	  
    view!{  
        cx,  
        <LuckyNumber on:click=update_the_lucky_number the_lucky_number=12 />  
    }  
}  
}

We might intuitively think, "Hey, we can just put value where the number 12 previously was as a property of LuckyNumber," like this:

#![allow(unused)]
fn main() {
 <LuckyNumber on:click=update_the_lucky_number the_lucky_number=value />  
}

But this won't work. There's a problem. value is a ReadSignal, and our property is supposed to be a 32-bit integer. We can see this in the function definition of the LuckyNumber component.

#![allow(unused)]
fn main() {
fn LuckyNumber(cx: Scope, the_lucky_number: i32) -> Element {
    // ...
}
}

the_lucky_number is supposed to be any 32 bit integer, denoted by i32.

Rust's compiler will actually give you an error showcasing what was expected and what it received:

 note: expected type `i32`
       found struct `ReadSignal<{integer}>`

All of these errors will appear in the terminal that you typed trunk serve in.

The type ReadSignal<{integer}> probably looks a little bit weird to you. You might ask yourself, why is there a bunch of stuff after the type's name? What does <{integer}> mean?

Recall that functions have parameters which follow the function name and are encapsulated by parentheses.

#![allow(unused)]
fn main() {
// this is pseudo code to show you the structure of the signature
fn function_name(parameter_name: SomeType)
}

Types have parameters called generics which follow the type name and are encapsulated by angle brackets.

#![allow(unused)]
fn main() {
SomeType<SomeGenericType>
}

Generics allow us to configure a type with additional types.

For example, let's say that we have a bunch of containers and we're preparing our lunch for the day. We can store things in all of the containers and we can eat the contents from each container. All of the containers, though different sizes and colours, share the same type. They're containers. But some containers may contain liquids and others will contain solids. If we were to eat or drink from one of the containers, we could use the type system to guarantee that we wouldn't try to drink our sandwich or chew our milk, by using parameterized types like Container<Solid> and Container<Liquid> respectively.
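
In Rust, that lunch-container idea could be sketched with a generic struct like this (Solid and Liquid are made-up types, used only for illustration):

#![allow(unused)]
fn main() {
struct Solid;
struct Liquid;

// One container type, configured by a generic type parameter
struct Container<T> {
    contents: T,
}

let lunch: Container<Solid> = Container { contents: Solid };
let drink: Container<Liquid> = Container { contents: Liquid };
}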

If we pop back over to Leptos, we can see how the context (scope) is similar. If we think about the warehouse that we use to store and retrieve values, we need some way to express what type those values are.

If we create_signal with an integer like an i32, we're saying that the ReadSignal is actually ReadSignal<i32>. This tells Rust, "Hey, this ReadSignal works like any other read signal, but when you get the contents out of it, it'll absolutely be a valid i32".

#![allow(unused)]
fn main() {
#[component]  
fn LuckyNumber(cx: Scope, the_lucky_number: ReadSignal<i32>) -> Element {  
    view!{  
        cx,  
        <div>  
            <p>"Today's lucky number is " {the_lucky_number}</p>  
            <button>"Pick a better number"</button>  
        </div>  
    }  
}
}

We need to update our RadApp component to pass the signal to the component as well. Our whole working example looks like this.

use leptos::*;  
use web_sys::MouseEvent;  
  
fn main() {  
    mount_to_body(|cx| {  
        view! {  
            cx,  
            <RadApp />  
        }  
    })  
}  
  
#[component]  
fn RadApp(cx: Scope) -> Element {  
    let (value, set_value) = create_signal(cx, 12);  
    let update_the_lucky_number = move|_|{  
      set_value(42)  
    };  
    view!{  
        cx,  
        <LuckyNumber on:click=update_the_lucky_number the_lucky_number=value />  
    }  
}  
  
#[component]  
fn LuckyNumber(cx: Scope, the_lucky_number: ReadSignal<i32>) -> Element {  
    view!{  
        cx,  
        <div>  
            <p>"Today's lucky number is " {the_lucky_number}</p>  
            <button>"Pick a better number"</button>  
        </div>  
    }  
}

An important caveat when capturing child node events

One very important thing to note here is that ANY click events within the <LuckyNumber /> component will trigger the on:click event handler.

When we placed the event handler on the button itself, it was locked to that button.

When we place the handler on the parent, all clicks bubble up to it.

The distinction is important because, while clicking on the button (if there is only one button) yields the same behaviour, there are some subtle differences you should be aware of.

  1. Clicking anywhere within the component will trigger the on:click handler
  2. The event handler does not differentiate between buttons if more than one exists.

The above example and lesson is not suitable in most cases, but it is a good, simple example of capturing an event outside of its source. We will go into detail about how to filter child events, prevent further bubbling, and so forth in later lessons.

Event delegation and bubbling

Adding event listeners to DOM nodes has non-trivial overhead. Leptos solves this problem with a clever optimization: it registers one top-level handler for each event type and attaches this event handler to the Window DOM node.

Event handlers that we register in Leptos get added to a list of handlers for that event type. When an event fires in the browser, it bubbles up to the Window and is handled by the top level handler (created by Leptos as part of the aforementioned optimization). Events include a path component which Leptos can use to walk the DOM tree and fake bubble the event through its ancestors (the parts of its path).

Custom events created through web_sys do not bubble by default and will not be able to reach the Window from their origin. For this reason, you need to ensure that custom events bubble so that they reach the Window and Leptos can handle them, delegating to the handler that you wrote.

From the author of Leptos: Leptos uses event delegation to make the creation of DOM nodes faster. This means that rather than attaching event listeners to individual HTML elements, the framework adds a single event listener to the page per event type (like click or change), and calls the handlers you define by looking them up manually. This adds a small increase in Wasm binary size in exchange for faster rendering times.

Custom Events

What we know

  • One of the key ways applications change data over time is in response to stimulus. We can witness these changes through a browser's runtime through the browser's event system.

What we'll learn

  • Which events exist and/or are supported
  • Create custom events
  • Tips on how to read Rust documentation
  • The dangers of complexity and trying to think about simplicity
  • An introduction to structs, data, instance methods, and static methods
  • An introduction to match statements
  • An introduction to Result types
  • An introduction to Option types
  • Creating DOM node references for Leptos components

The Lesson

There are some common events that you can probably intuitively guess. Going from pure intuition is only going to get you so far.

We frame the way we solve problems through the lens of the tools we have at hand. For this reason, it's a good idea to familiarize yourself with the HTML tags that exist and the web events that exist. The web platform has a ton of features that a lot of people don't know about because they stopped learning HTML at <div> and <p> tags.

The Mozilla Foundation has a wonderful website called MDN which contains invaluable reference material to help expand your knowledge.

The List of web events will provide everything you need to know to respond to actions on elements which you can attach to your Leptos components.

Custom Events

You may wish to create your own custom events. Custom events can be useful when you want to differentiate a generic behaviour in the web, from a specific behaviour or event in your application.

For example:

#![allow(unused)]
fn main() {
#[component]  
fn MyLunchbox(cx: Scope) -> Element {  
    let consume_sandwich = |_|{  
      // do something in response to  
      // the sandwich being eaten.
    };  
    view!{  
        cx,  
        <Sandwich on:eat=consume_sandwich/>  
    }  
}

#[component]  
fn Sandwich(cx: Scope) -> Element {  
    let trigger_eating_event = |_|{  
      // Code that triggers the custom event,
      // which will bubble up from the
      // button to its parent
    };  
    view!{  
        cx,  
        <button on:click=trigger_eating_event>
            "I'm a snack"
        </button>
    }  
}
}

If we were to write this with standard events, we would need some more introspection, and features of the platform would start to leak up into higher levels of our application. This issue can be summarized as: knowledge of the application needs to span across the boundary of multiple components. More things to keep in your head make programs harder to reason about, more difficult to extend or modify, and less clear to newcomers wishing to contribute to the application. Or maybe you just came back from a vacation and forgot all about how something worked. Ideally you shouldn't need to know how (which is imperative), being able to focus on what (which is declarative).

Ultimately we probably don't care if a click event triggered the sandwich to be eaten, or if it was a key press that triggered it. Maybe the button was focused and they hit the enter key.

This is my opinion, but I would say that in some sense the custom event simplifies our application because we're handling the what of the event instead of the how (click, key press, etc.).

In the following standard event example, MyLunchbox needs to be aware of which events might be bubbling up to it as clicks. The event handler needs to filter out the appropriate event, introspect it (look inside of it), and then take the appropriate action. Imagine that the sandwich in our system has a special identifier.

We could shout out, "Eat #2", which happens to be the sandwich, emitting an eat event with a payload (data associated with the event) that is the food's identifier in your lunchbox.

The standard event equivalent would be, "I'm doing a thing with my lunchbox stuff," requiring someone to then ask, "OK, so, um... what are you doing? Are you trying to eat something? What are you trying to eat? Does it have an identifier? Can I get that identifier?"

#![allow(unused)]
fn main() {
#[component]  
fn MyLunchbox(cx: Scope) -> Element {  
    let maybe_consume_sandwich = |event|{  
      // Introspection may be required in more
      // complicated use cases to make sure the 
      // right event bubbled up to be handled
      // and that it has the correct data to be
      // able to follow through with the desired
      // application behaviour.
    };  
    view!{  
        cx,  
        <Sandwich on:click=maybe_consume_sandwich/>  
    }  
}

#[component]  
fn Sandwich(cx: Scope) -> Element {  
    view!{  
        cx,  
        // A plain click on this button will bubble up
        // to MyLunchbox's on:click handler
        <button>
            "I'm a snack"
        </button>
    }  
}

}

Caution: Beware complexity!

It can be tempting to cut your application up into a ton of domain-specific events (specific to the problem you're solving, with language appropriate to that problem), but that comes at a cost. You will lose some forms of flexibility as you add focus and specificity to your application.

In the standard event example, we do still have the ability to introspect the event when handling it in MyLunchbox. That might be really useful. If we needed some additional data with our custom event we'd need to go into the Sandwich component and include it.

And that's the thing with programming. It always depends.

I advocate for favouring simplicity and only cutting things apart when they get too big to keep together. Some problems are inherently complicated because of the kinds of problems they are. Ideally you should be able to walk away from your program, come back, and understand what's happening. We can not rely on being in the flow state or "zone" as the required mode to understand what we wrote; I would say that is actually a liability. Besides, we should create applications that allow us to be interrupted by life without causing frustration.

Creating a custom event

Creating a custom event normally happens in JavaScript, because it's part of the browser's runtime. The code looks like this:


const event = new Event('build');

// Listen for the event.
elem.addEventListener('build', (e) => { /* … */ }, false);

// Dispatch the event.
elem.dispatchEvent(event);

https://developer.mozilla.org/en-US/docs/Web/Events/Creating_and_triggering_events

We need to do something similar in our Rust code. To do this we'll use the web_sys crate.

There is a struct in web_sys called CustomEvent Docs
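
As a rough sketch of what the Rust side can look like with web_sys (this assumes the CustomEvent and CustomEventInit features are enabled for web_sys in Cargo.toml, and that element is a DOM element you already have a handle to):

#![allow(unused)]
fn main() {
use web_sys::{CustomEvent, CustomEventInit};

fn dispatch_eat_event(element: &web_sys::Element) {
    // Make sure the event bubbles so it can reach the Window,
    // where Leptos' delegated handlers listen.
    let mut init = CustomEventInit::new();
    init.bubbles(true);

    let event = CustomEvent::new_with_event_init_dict("eat", &init)
        .expect("failed to create the custom event");

    element
        .dispatch_event(&event)
        .expect("failed to dispatch the custom event");
}
}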

Let's go over some struct basics since we're going to be using them for this lesson and going forward.

Introduction to Structs

A struct is like a class in a lot of object oriented languages. It is a category of type that has the ability to group data, functionality related to that data, and functionality related to its general idea, all around a single name. Recall that a type is the name that describes a set of possible values.

Struct data

One of the key features of structs in Rust is that they specify a grouping of data types and values, which we call properties. I suspect this is why they're called structs—structured data or data structure. If we had a Bacon, Lettuce and Tomato sandwich struct, its definition would look like this:

#![allow(unused)]
fn main() {
struct BLTSandwich {
	bread: TypeOfBread,
	lettuce: TypeOfLettuce,
	tomato: TypeOfTomato,
	bacon: TypeOfBacon,
	mayo: bool
}
}

The above example expects that TypeOfBread, TypeOfLettuce, TypeOfTomato, and TypeOfBacon are all defined earlier. They are used here to illustrate that BLTSandwich has constrained which values its specific properties can have. You cannot have a BLTSandwich with rocks as a value for bacon, because rocks are not a type of bacon! This is why type systems are important. They help prevent us from eating rocks... or... making mistakes in our programs. :)
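As a rough sketch only (these variant names are invented for illustration), those ingredient types could be simple enums, which makes the "no rocks for bacon" guarantee concrete:

enum TypeOfBread { CanadianRye, Sourdough }
enum TypeOfLettuce { Romaine, Iceberg }
enum TypeOfTomato { BlackKrim, Roma }
enum TypeOfBacon { FarmSmokedApple, DoubleSmoked }

Any value that isn't one of these variants simply can't be stored in the struct, so the compiler rules out rocks for us.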

If you have keen eyes you'll recognize something here. There's a pattern that we've seen a few times before.

#![allow(unused)]
fn main() {
// a function definition
fn function_name( parameter: type ) {}

// a struct definition
struct StructName{ property: type }
}

This pattern can be abstracted to the following:

  • Rust keyword to define context/subject (fn, struct)
  • A name to be able to use the noun (function_name, StructName)
  • Some form of encapsulation with configuration

Make a new thing from an idea (a concretion)

A struct or structure is like an idea. And ideas aren't real, in the sense that we can't hold them. We have an idea of what a BLT Sandwich is, but we can't eat the idea. What we do have is a written specification of what a BLT is, in the definition of our struct.

If we were to take the idea of a BLT sandwich and make an actual BLT sandwich, we would say that we were making a concretion: a thing that is concrete or real. In object oriented programming (OOP) we would say that we are instantiating the idea (in OOP, ideas are classes). We are creating an instance of it.

The syntax to create a struct instance is the struct's name, followed by curly braces containing a list of the property names and their values.

#![allow(unused)]
fn main() {
// Assuming that the values for these 
// properties were already defined in scope
// with statements like
// let canadian_rye = get_the_best_sandwich_bread();
BLTSandwich{  
	bread: canadian_rye,
	lettuce: romaine,
	tomato: black_krim,
	bacon: farm_smoked_apple_bacon,
	mayo: true,
}
}

Most library (crate) authors write functions associated with a struct (with the idea of it) to make a concretion. It's convention for this function to be called 'new'. Calling the function follows this syntax:

#![allow(unused)]
fn main() {
	let my_thing = SomeStruct::new();
}
Functionality associated with the idea (static methods)

Structs can have functionality associated with the name of the struct. Some would describe these as namespaced functions, meaning that they are prefixed by, and expected to be understood in the context of, the struct's name.

The following showcases a new function in the BLTSandwich namespace which returns a new BLTSandwich (a concretion).

#![allow(unused)]
fn main() {
impl BLTSandwich {
	pub fn new() -> BLTSandwich {
		// Assume these ingredient values are defined elsewhere
		BLTSandwich{  
			bread: canadian_rye,
			lettuce: romaine,
			tomato: black_krim,
			bacon: farm_smoked_apple_bacon,
			mayo: true
		}
	}
	pub fn name() -> String {
		"Bacon, Lettuce and Tomato Sandwich".to_string()
	}
}
}

Normally the new function would have parameters to accept arguments that configure the thing being created. I skipped that for the sake of simplicity.
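For reference, if we rewrote new to accept configuration it might look like the following sketch (the parameter list here is hypothetical). Rust's field init shorthand lets us write bread instead of bread: bread when the parameter and field share a name:

impl BLTSandwich {
	// A hypothetical, configurable constructor
	pub fn new(
		bread: TypeOfBread,
		lettuce: TypeOfLettuce,
		tomato: TypeOfTomato,
		bacon: TypeOfBacon,
	) -> BLTSandwich {
		BLTSandwich { bread, lettuce, tomato, bacon, mayo: true }
	}
}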

We can see here that we also have a name function which returns a long form name of the sandwich as a string.

We could call this by writing:

#![allow(unused)]
fn main() {
let the_sandwich_name : String = BLTSandwich::name();
}

Again, we can see that these are functions associated with the idea and separated by two colons.

But again, look closely! A pattern emerges! We previously used a function called leptos::log!. But leptos isn't a struct, it's a crate!

Rust uses the same pattern of double colons to say, "We're setting the context to qualify which thing we're talking about". When we say BLTSandwich::name, we're telling Rust "Ok, think about BLTSandwich things... when I say name, you know what I'm talking about."

The Rust language designers have done a superb job at making these things easy to remember if you're aware that there is a pattern and design behind the decision. I can only assume that these design decisions were very deliberate.

Functionality associated with a concretion (methods)

We can associate functionality with a specific concretion (a struct made real) which we often call methods.

If we had a method called total_calories we could call it with the following Rust code:

#![allow(unused)]
fn main() {
let sandwich = BLTSandwich::new();
let calories = sandwich.total_calories();
}

Here we make a sandwich and call total_calories on it.

The context here is so tightly coupled that we use a single dot as a separator. I like to think of it as this.

  1. 4 dots — A namespace is a grouping of many things, so we use many dots (the two colons of :: are made of four dots).
  2. 1 dot — A value is a single thing, so we use one dot.

The neat thing about the above is that if you don't need to keep sandwich around, you can chain these calls together:

#![allow(unused)]
fn main() {
let calories = BLTSandwich::new().total_calories();
// 			   ^----------------^
// 				This will evaluate into a 'sandwich'
// 				which we can call total_calories() on.
}

Methods always have a special &self parameter in the first position to denote that they can refer to the value they're called on. This is how a method is able to do anything with its own data. Recall that we cannot use a piece of data unless it is in scope.

#![allow(unused)]

fn main() {
// Imagine that there is some function called calories, 
// which accepts things that can be turned into a calories
// value which is a 32 bit integer. Don't worry about how this
// would work. This is just a simple example.

impl BLTSandwich {
	// imagine the other static methods or namespace 
	// function from before were still here.

	pub fn total_calories(&self) -> i32 {
		calories(self.bread) + 
		calories(self.lettuce) + 
		calories(self.tomato) + 
		calories(self.bacon)
		// Recall that a function evaluates to the 
		// last expression in its body. That's why there's no
		// semicolon at the end of this last line.
	}
	pub fn name() -> String {
		"Bacon, Lettuce and Tomato Sandwich".to_string()
	}
}
}

Note that we're able to use the value of the struct's properties with .bread. If we look at the call to total_calories, it starts to look really similar to .bread, with the exception of us adding parentheses at the end to call the function. Yet another pattern emerges: methods on a value are like properties on the value that you can call!

#![allow(unused)]
fn main() {
let sandwich = BLTSandwich::new();
sandwich.bread;             // access a property
sandwich.total_calories();  // call a method (note the added parentheses)
}

Using web_sys::CustomEvent

We're well positioned to use the web_sys crate's CustomEvent struct.

If we zip over to the documentation for web_sys::CustomEvent, we can see that there is a new method on the struct:

#![allow(unused)]
fn main() {
web_sys::CustomEvent::new("myCustomEvent");
}

But there's a notice under the definition of the new method that states the following:

This API requires the following crate features to be activated: CustomEvent

I did a quick search for "web_sys enable feature" which led me to this support doc: "Enable the cargo features for the APIs you're using."

Leptos includes these web_sys features for you as part of its library.

If we go back to the new method's definition in the web_sys::CustomEvent docs we'll see the following definition:

#![allow(unused)]
fn main() {
pub fn new(type_: &str) -> Result<CustomEvent, JsValue>
}

Notice that after the -> it returns a Result type, which has some type arguments (generics). The first refers to what we get if new runs and the result is Ok; the second is what we get if new runs and the result is an error. We can handle these with some built-in pattern matching, which we'll go into more later.

Our component code now looks like this:

#![allow(unused)]
fn main() {
#[component]  
fn MyComponent(cx: Scope) -> Element {  
    let trigger_sending_of_custom_event = |_|{  
        match web_sys::CustomEvent::new("myCustomEvent") {  
            Ok(event) => {  
                // We have an event that we can send  
            },  
            Err(_) => {  
                // There was an error in creating the event.
                // We're not doing anything with it for now,
				// so we'll use an '_' to ignore its error
				// value.
			}  
        }  
    };  
    view!{  
        cx,  
        <div>  
            <button on:click=trigger_sending_of_custom_event>  
                "Trigger custom event"  
            </button>  
        </div>  
    }  
}
}

The match keyword requires that we create branches/arms for each possible option. Recall that we talked about types as restrictions that describe possible values. A Result is an enumeration (a strict set of possible options) which can be one of two values, Ok or Err. The options are called variants.

In both of those cases there is a value that we can destructure out of the variant. Their types are listed as the first and second type arguments in the returned type's signature. Result<CustomEvent, JsValue> means that we'll have an Ok(CustomEvent) or an Err(JsValue).
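If enums and variants are still new to you, here's a small standalone illustration (the names are invented for this example) of an enum whose variants can carry data, and a match that handles each one:

enum SandwichStatus {
	Ready,
	Burnt(String), // a variant can carry data, just like Ok(...) and Err(...)
}

fn report(status: SandwichStatus) {
	match status {
		SandwichStatus::Ready => println!("Order up!"),
		SandwichStatus::Burnt(reason) => println!("Back to the kitchen: {}", reason),
	}
}

fn main() {
	report(SandwichStatus::Ready);
	report(SandwichStatus::Burnt("left it on the grill".to_string()));
}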

I know from the JavaScript custom event documentation that it's not enough to create the event. We need to emit it. This is called dispatching. The JavaScript looks like this:

elem.dispatchEvent(event);

What we need is some way to refer to our <MyComponent /> so that we can dispatch the event on it. We need a reference to it.

Getting a reference to self as a DOM node with NodeRef

Leptos provides us with the ability to get a reference to the DOM node created by its view! template. Think of it like a direct line to its DOM counterpart.

The first step is to create the NodeRef and add it as a special _ref property on the parent/root element in the view! template. Recall that the Leptos component is a proxy for the view! template's root element. Putting the reference on this div is the same as putting the reference on <MyComponent />.

#![allow(unused)]
fn main() {
#[component]  
fn MyComponent(cx: Scope) -> Element {  
    let dom_node_ref = NodeRef::new(cx);
    // abbreviated/folded Rust code here for space saving
    view!{  
	    cx,  
	    <div _ref=dom_node_ref>  
	        <button on:click=trigger_sending_of_custom_event>  
	            "Trigger custom event"  
	        </button>  
	    </div>  
	}
}
}

The dom_node_ref uses signals under the hood so we can move it into our handler closure without stressing about move semantics. We'll add the move keyword to the closure and we'll add some more matching if we are able to make our custom event.

#![allow(unused)]
fn main() {
match dom_node_ref.get() {  
	None => {  
		// None will only happen if this component isn't  
		// mounted to the DOM, but it has to be in order                   
		// for the click event to fire, so we can ignore this                    
	}  
	Some(dom_element) => {  
		// Emit/dispatch our custom event  
	}  
}  
}

We call get on the dom_node_ref to get the actual DOM element in Rust form. There are cases when the DOM element/node might not exist, and Rust requires us to account for all possibilities, which is why the get method returns an Option type. Its return type is Option<web_sys::Element>. Option is an enum which can be None or Some, with the type argument provided in its signature, in this case web_sys::Element. We're destructuring it and giving the inner value the label dom_element.

#![allow(unused)]
fn main() {
#[component]  
fn MyComponent(cx: Scope) -> Element {  
    let dom_node_ref = NodeRef::new(cx);  
  
    let trigger_sending_of_custom_event = move |_|{  
        match web_sys::CustomEvent::new("myCustomEvent") {  
            Ok(event) => {  
                match dom_node_ref.get() {  
                    None => {  
                        // None will only happen if this component isn't  
		                // mounted to the DOM, but it has to be in order                   
			            // for the click event to fire, so we can ignore this                    
		            }  
                    Some(dom_element) => {  
                        // Emit/dispatch our custom event  
                    }  
                }  
            },  
            Err(_) => {}  
        }  
    };  
    view!{  
        cx,  
        <div _ref=dom_node_ref>  
            <button on:click=trigger_sending_of_custom_event>  
                "Trigger custom event"  
            </button>  
        </div>  
    }  
}
}

Intuitively, we'll probably want to try something like this for the actual event sending. This is a focused view of the happy path match arm:

#![allow(unused)]
fn main() {
match dom_node_ref.get() {  
    None => {}  
    Some(dom_element) => {  
        dom_element.dispatch_event(event);  
    }  
}
}

Unfortunately this doesn't work. Rust tells us that dispatch_event expects a &Event, a reference to an event. Let's add an ampersand before event, dom_element.dispatch_event(&event), to pass a reference.

Rust's compiler may complain about unhandled results from the event dispatch. We can add another match statement to handle those.

#![allow(unused)]
fn main() {
match dom_element.dispatch_event(&event) {  
    Ok(_) => { 
	    leptos::log!("Custom event sent") 
	},  
    Err(_) => { 
	    leptos::log!("Failed to send") 
	}  
}
}

We can now listen to our custom event from our Leptos component:

#![allow(unused)]
fn main() {
#[component]  
fn RadApp(cx: Scope) -> Element {  
    let log_response = |_| {  
        leptos::log!("Our custom event happened")  
    };  
    view! {  
        cx,  
        <MyComponent on:myCustomEvent=log_response/>  
    }  
}
}

Note that event names are camelCased

We're still not totally there yet though. We need to actually tell this new custom event to bubble.

#![allow(unused)]

fn main() {
//We need to create a config that is mutable (so we add 'mut' after let)
let mut event_config = web_sys::CustomEventInit::new();  

// We set the bubbles property to true
event_config.bubbles(true);  

// We create a new event with the special config using a different constructor method

let event = web_sys::CustomEvent::new_with_event_init_dict(
	"myCustomEvent", 
	&event_config
);

// previously this was:
// let event = web_sys::CustomEvent::new("myCustomEvent"); 
}

And just like that we have custom events on components with references!

The Complete Code

use leptos::*;  
  
fn main() {  
    mount_to_body(|cx| {  
        view! {  
            cx,  
            <RadApp />  
        }  
    })  
}  
  
#[component]  
fn RadApp(cx: Scope) -> Element {  
    let log_response = |_| {  
        leptos::log!("Our custom event happened")  
    };  
    view! {  
        cx,  
        <MyComponent on:myCustomEvent=log_response />  
    }  
}  
  
#[component]  
fn MyComponent(cx: Scope) -> Element {  
    let dom_node_ref = NodeRef::new(cx);  
  
    let trigger_sending_of_custom_event = move |_| {  
  
        let mut event_config = web_sys::CustomEventInit::new();  
        event_config.bubbles(true);  
        let event = web_sys::CustomEvent::new_with_event_init_dict("myCustomEvent", &event_config);  
  
        match event {  
             Ok(event) => {  
                 match dom_node_ref.get() {  
                    None => {}  
                    Some(dom_element) => {  
                        match dom_element.dispatch_event(&event) {  
                            Ok(_) => { leptos::log!("Custom event sent") },  
                            Err(_) => { leptos::log!("Failed to send") }  
                        }  
                    }  
                }  
            }  
            Err(_) => {}  
        }  
    };  
    view! {  
        cx,  
        <div _ref=dom_node_ref>  
            <button on:click=trigger_sending_of_custom_event>  
                "Trigger custom event"  
            </button>  
        </div>  
    }  
}

Custom Event Data

What we know

  • Custom events can be dispatched and bubbled up to handle events at different levels of your application.
  • Custom events allow us to convert imperative events based on DOM interaction into domain specific events that are more declarative.

What we'll learn

  • A deeper look in to declarative or domain specific code
  • Why and how to add data with custom events

Caveat

  • Custom events with data aren't the most efficient way to send data around Leptos. There is a performance toll to be paid any time data crosses the WASM boundary. This lesson is really about showing you how to do JavaScript-like things in Leptos/Rust. With that said, there are more efficient ways to send data around Leptos, at the cost of JavaScript interoperability, which we'll investigate in later lessons.

The Lesson

We introduced custom events in a previous lesson, with the following code:

use leptos::*;  
  
fn main() {  
    mount_to_body(|cx| {  
        view! {  
            cx,  
            <RadApp />  
        }  
    })  
}  
  
#[component]  
fn RadApp(cx: Scope) -> Element {  
    let log_response = |_| {  
        leptos::log!("Our custom event happened")  
    };  
    view! {  
        cx,  
        <MyComponent on:myCustomEvent=log_response />  
    }  
}  
  
#[component]  
fn MyComponent(cx: Scope) -> Element {  
    let dom_node_ref = NodeRef::new(cx);  
  
    let trigger_sending_of_custom_event = move |_| {  
  
        let mut event_config = web_sys::CustomEventInit::new();  
        event_config.bubbles(true);  
        let event = web_sys::CustomEvent::new_with_event_init_dict(
	        "myCustomEvent", 
	        &event_config
	    );  
  
        match event {  
             Ok(event) => {  
                 match dom_node_ref.get() {  
                    None => {}  
                    Some(dom_element) => {  
                        match dom_element.dispatch_event(&event) {  
                            Ok(_) => { leptos::log!("Custom event sent") },  
                            Err(_) => { leptos::log!("Failed to send") }  
                        }  
                    }  
                }  
            }  
            Err(_) => {}  
        }  
    };  
    view! {  
        cx,  
        <div _ref=dom_node_ref>  
            <button on:click=trigger_sending_of_custom_event>  
                "Trigger custom event"  
            </button>  
        </div>  
    }  
}

In this code we have a Leptos component which contains a view! template with a div and a button. The button has a handler, a closure passed to the click event via the on:click property (prop).

Leptos has a special private property for elements in view! templates. It's called _ref, and it allows Leptos to refer to HTML elements in the client side runtime (in the browser). We create a reference in a given scope/context and apply it as the value of the _ref property.

Recall that the root element of a view! template is interchangeable with its component tag. By placing a reference on the root <div> we're actually creating a reference to <MyComponent>.

The reference is used in the event handler so that our custom event appears to be dispatched from our Leptos component, allowing us to add a handler with <MyComponent on:myCustomEvent=... />.

Platform specific to domain specific

Systems and applications are full of complex mechanisms. They contain behaviours, described in code, that reveal how the platform was designed and implemented.

Systems and applications are also full of domain specific complexity. They contain behaviours relating to the "business logic", the description of how the application solves a problem. Descriptions of how these problems are solved are often about activities relating to the problem, not about the technology it runs on.

For example, let's think about building a sandwich shop ecommerce application, and say we're clicking a "buy sandwich" button. That button would have an on:click handler to add a sandwich to your cart. The idea of clicking, and dispatching an event when something is clicked, doesn't actually have anything to do with buying sandwiches. It has to do with the platform.

In our minds we look at the button, the intention behind it, and the text node that labels it, and infer that clicking on the button should order a sandwich. This is implied, and requires us to think about the intention of the application through its interaction with the platform.

If that event became an "order sandwich" event then we'd be in domain territory. It is specific to the language that describes activities and actions in the head space or domain of our problem—our sandwich shop.

Separating required knowledge of the platform from knowledge and interactions between business processes will allow you to focus on each area separately. This will allow you to change which things could trigger a sandwich order instead of introspecting and evaluating generic events.

This can make applications more flexible, robust, simple, and easier to understand.

The case for associated event data

We've outlined that it's useful to separate platform events from application events, but we haven't discussed the importance of data associated with those events yet.

Let's go back to our online sandwich shop as an example. If we have one sandwich, we're good. We can dispatch a custom event called "orderSandwich" and let that be that. But what do we do if we have more than one sandwich type? We'll need some way to know which sandwich we're ordering.

One solution could be to create one event per sandwich type. Perhaps we have orderReubenSandwich or orderBLTSandwich. This could be a completely valid solution if we only had a few sandwiches. Where things get tricky is when we start to think about configurations of sandwiches. We'll end up with a combinatorial explosion of event types to match each sandwich configuration.

Take our Bacon, Lettuce and Tomato sandwich: with the ability to select different breads, leafy greens, or tomato types, we'd quickly end up with too many event variants to manage.

This is a situation where we'd like our system to dispatch an event called orderSandwich, with associated data that configures what the sandwich is. It would be ideal if we could send an orderBLTSandwich event where we specify which bread, leafy greens, or type of tomato are requested by the customer.

Beware, you may feel the urge to continue abstracting. It's not uncommon to think, "Well, what if we want to sell different things at our sandwich shop? Why don't we just have orderItem as an event type, and the configuration of that item can include the item type, being a sandwich. If we ordered a drink, then drink would be the item type, and so forth." One could say that this is a premature generalization. The more general a system becomes, the less its components express the function of the application. Moving from specific to generic actually adds some complexity, in that you need a concrete case in mind to reason about the generalization. Try to start with the concrete, known, specific, and within the domain. Then refactor and generalize as needed as the application grows. There are no hard and fast rules for when to do this; just be aware that you do not need to hyper-generalize your solutions at the start. Write what you mean, be clear, and you'll thank yourself later when you have to go in and edit things a month or a year from now. :)

Getting the configuration

For the simplest example we can actually hard code the configuration into the event sender. We don't need to pull it out of any HTML data attributes or input fields.

Our template could look like this:

#![allow(unused)]
fn main() {
 view! {  
    cx,  
    <div _ref=dom_node_ref>  
     <h3>"BLT Sandwich"</h3>  
        <button  
         on:click=trigger_order_sandwich_event  
         >  
            "Order Sandwich"  
        </button>  
    </div>  
}
}

We need to update the event handler so that our new custom event is sent with this extra data. Web events have a property on their JavaScript object called detail, which we can use to store and carry arbitrary data.

After we initialize the event_config, we modify it so that it bubbles up (so that ancestors can respond to the event), and then we'll do another modification to add the detail data. Recall that if we're changing a piece of data we need to write mut before the name of it to specify that it can be changed, that it can be MUTated.

#![allow(unused)]
fn main() {
	let mut event_config = web_sys::CustomEventInit::new();  
    event_config.bubbles(true);  
    event_config.detail(&data);
}

But now you're probably wondering, what is &data. We're providing the detail method on event_config with a reference (denoted with the & ) to data. The detail method accepts any JsValue. In our simple example, we're only going to specify bread type.

#![allow(unused)]
fn main() {
let bread_type = JsValue::from("Canadian Rye");  
event_config.detail( &bread_type );
}

We are calling the from static method on the JsValue struct to create a new JsValue from our string slice "Canadian Rye". We're then using bread_type as an argument for the detail method, passing it as a reference, denoted with the &. If you forget the ampersand, the Rust compiler will recommend adding it so that your usage matches the detail() method's definition.
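For what it's worth, JsValue::from also works with several other common Rust types. A small standalone sketch (the values are made up, and it imports wasm_bindgen directly; in this project the import shown just below is used instead):

use wasm_bindgen::JsValue;

fn main() {
	let as_text = JsValue::from("Canadian Rye"); // from a string slice
	let as_number = JsValue::from(42.0);         // from an f64
	let as_flag = JsValue::from(true);           // from a bool
}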

To use JsValue we need to bring it into scope. At the top of your main.rs file, add

#![allow(unused)]
fn main() {
use crate::wasm_bindgen::JsValue;
}

Our whole BLT component looks like this:

#![allow(unused)]
fn main() {
#[component]  
fn BltSandwich(cx: Scope) -> Element {  
    let dom_node_ref = NodeRef::new(cx);  
  
    let trigger_order_sandwich_event = move |event| {  
  
        let mut event_config = web_sys::CustomEventInit::new();  
        event_config.bubbles(true);  
        let bread_type = JsValue::from("Canadian Rye");  
        event_config.detail( &bread_type );  
        let event = web_sys::CustomEvent::new_with_event_init_dict(  
            "orderSandwich",  
            &event_config  
        );  
  
        match event {  
            Ok(event) => {  
                match dom_node_ref.get() {  
                    None => {}  
                    Some(dom_element) => {  
                        match dom_element.dispatch_event(&event) {  
                            Ok(_) => { leptos::log!("Custom event sent") },  
                            Err(_) => { leptos::log!("Failed to send") }  
                        }  
                    }  
                }  
            }  
            Err(_) => {}  
        }  
    };  
    view! {  
        cx,  
        <div _ref=dom_node_ref>  
           <h3>"BLT Sandwich"</h3>  
            <button on:click=trigger_order_sandwich_event>  
                "Order Sandwich"  
            </button>  
        </div>  
    }  
}
}

Now let's look at the top part of our app with our mount_to_body and top level app component:

fn main() {  
    mount_to_body(|cx| {  
        view! {  
            cx,  
            <SandwichShopApp />  
        }  
    })  
}  
  
#[component]  
fn SandwichShopApp(cx: Scope) -> Element {  
    let log_order = |_| {  
        leptos::log!("Our custom event happened");  
    };  
    view! {  
        cx,  
        <BltSandwich on:orderSandwich=log_order />  
    }  
}

Let's focus in on the orderSandwich event handler:

#![allow(unused)]
fn main() {
let log_order = |_| {  
	leptos::log!("Our custom event happened");  
};  
}

Note that before, we had an underscore for the event parameter of the handler. We had no need of the event in the context of our closure's body (between the curly braces), so we wrote an underscore to tell Rust that we're not using it. This is a Rust convention.

Now we need the event, but we don't know what type it is. We can let Rust do the work for us. Put any type in there and run trunk serve if you're not already running it.

#![allow(unused)]
fn main() {
let log_order = |event: i32| {  
	leptos::log!("Our custom event happened");  
};  
}

The compiler will check for you and tell you about the mismatch.

expected closure signature `fn(Event) -> _`
   found closure signature `fn(i32) -> _`

This tells us that it should be an Event type, not i32. :D The compiler is so helpful.

#![allow(unused)]
fn main() {
let log_order = |event: Event| {  
	leptos::log!("Our custom event happened");  
};  
}

If we just write the above the compiler will also tell us that "Event" doesn't exist in our scope. It's telling us we need to be more specific about what we mean. Then it outlines ways that we can bring the definition of events into our scope.

help: consider importing one of these items
   |
1  | use crate::web_sys::Event;
   |
1  | use web_sys::Event;
   |

Writing use web_sys::Event would let us refer to web_sys::Event simply as Event in this file. Think of it like importing the type. We can also just manually write the type with its namespace in our closure. I prefer to include the crate or module as context, for clarity.

#![allow(unused)]
fn main() {
let log_order = |event : web_sys::Event| {
}

...feels more clear than...

#![allow(unused)]
fn main() {
let log_order = |event : Event| {
}

Shorter code isn't always better code. Aim to be clear and to avoid ambiguity.

Now, unfortunately there is no detail() method on a web_sys::Event. But a web_sys::Event is, at its core, a JsValue, and we can turn it into a custom event:

#![allow(unused)]
fn main() {
let custom_event = event.unchecked_into::<web_sys::CustomEvent>();
}

Here we're calling the unchecked_into method on the event and using the turbofish ::<> syntax to provide the destination type argument, which is web_sys::CustomEvent.
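The turbofish shows up anywhere Rust can't infer a generic type on its own. A common everyday example, unrelated to events, is parsing a string:

fn main() {
	// parse is generic over its output type; the turbofish pins it to i32
	let answer = "42".parse::<i32>().unwrap();
	println!("{}", answer);
}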

It should be noted that this is behaviour unique to working with things that are JsValue types at their core. Rust doesn't normally work this way; you cannot just smash one type into another with such ease. JavaScript is dynamically typed. When we work with JsValues we're often taking raw data from JavaScript and pushing it into a Rust context, where we enforce type safety from that point forward. This is why we can call unchecked_into to convert the regular event into a custom event, granting us access to the .detail() method.

Our app event handler now looks like this:

#![allow(unused)]
fn main() {
let log_order = |event : web_sys::Event| {  
    let custom_event = event.unchecked_into::<web_sys::CustomEvent>();  
    let sandwich_type = custom_event.detail();  
    leptos::log!("Our custom event happened");  
    leptos::log!("{:?}", sandwich_type );  
};
}

You'll note that when we log the value of sandwich_type, the console in your browser will say JsValue("Canadian Rye"). The value we pulled out of detail() is a JsValue and needs to be converted into a Rust type to be used elsewhere in your system.

We can use a special method on JsValue values called as_string(), but it returns an Option type which we can handle with our match statements.

#![allow(unused)]
fn main() {
let log_order = |event : web_sys::Event| {  
  
	leptos::log!("Our custom event happened");  
    
    let custom_event = event.unchecked_into::<web_sys::CustomEvent>();  
  
    let bread_type_js = custom_event.detail();  
    let opt_bread_type_rs = bread_type_js.as_string();  
    
    match opt_bread_type_rs {  
        Some(bread_type) => { leptos::log!("{:?}", bread_type ) },  
        None => {}  
    }  
};
}

We can reduce assignments here by chaining all of these together.

#![allow(unused)]
fn main() {
let log_order = |event : web_sys::Event| {  
    leptos::log!("Our custom event happened");  
    let bread_type = event  
        .unchecked_into::<web_sys::CustomEvent>()  
        .detail()  
        .as_string()  
        .unwrap_or(String::new());  
  
    leptos::log!("{:?}", bread_type );  
};
}

The new method here is unwrap_or, which takes the Some value, or uses a default value (provided as an argument) if the Option is None.
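A tiny standalone illustration of unwrap_or, using made-up values:

fn main() {
	let some_bread: Option<String> = Some("Canadian Rye".to_string());
	let no_bread: Option<String> = None;

	// With Some(...), unwrap_or hands back the inner value
	assert_eq!(some_bread.unwrap_or(String::new()), "Canadian Rye");
	// With None, it falls back to the default we provided
	assert_eq!(no_bread.unwrap_or(String::new()), "");
}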

The whole thing together looks like this:

use leptos::*;  
use crate::wasm_bindgen::JsValue;  
  
  
fn main() {  
    mount_to_body(|cx| {  
        view! {  
            cx,  
            <SandwichShopApp />  
        }  
    })  
}  
  
#[component]  
fn SandwichShopApp(cx: Scope) -> Element {  
    let log_order = |event : web_sys::Event| {  
        leptos::log!("Our custom event happened");  
  
        let bread_type = event  
            .unchecked_into::<web_sys::CustomEvent>()  
            .detail()  
            .as_string()  
            .unwrap_or(String::new());  
  
        leptos::log!("{:?}", bread_type );  
    };  
    view! {  
        cx,  
        <BltSandwich on:orderSandwich=log_order />  
    }  
}  
  
#[component]  
fn BltSandwich(cx: Scope) -> Element {  
    let dom_node_ref = NodeRef::new(cx);  
  
    let trigger_order_sandwich_event = move |event| {  
  
        let mut event_config = web_sys::CustomEventInit::new();  
        event_config.bubbles(true);  
        let bread_type = JsValue::from("Canadian Rye");  
        event_config.detail( &bread_type );  
        let event = web_sys::CustomEvent::new_with_event_init_dict(  
            "orderSandwich",  
            &event_config  
        );  
  
        match event {  
            Ok(event) => {  
                match dom_node_ref.get() {  
                    None => {}  
                    Some(dom_element) => {  
                        match dom_element.dispatch_event(&event) {  
                            Ok(_) => { leptos::log!("Custom event sent") },  
                            Err(_) => { leptos::log!("Failed to send") }  
                        }  
                    }  
                }  
            }  
            Err(_) => {}  
        }  
    };  
    view! {  
        cx,  
        <div _ref=dom_node_ref>  
           <h3>"BLT Sandwich"</h3>  
            <button on:click=trigger_order_sandwich_event>  
                "Order Sandwich"  
            </button>  
        </div>  
    }  
}

Custom Event Module

What we know

  • Data can be associated with custom events
  • There's a lot of boilerplate involved in creating custom events

What we'll learn

  • How to turn our code into a module that we can reuse

The lesson

Module basics

Rust has the ability to create modules to encapsulate code. It allows you to expose parts of the code to the outside world, while keeping other parts of the code private to the module.

A module is defined by using the keyword mod, followed by the name of the module and curly braces which encapsulate the code in the module.

#![allow(unused)]
fn main() {
mod my_module {  
	pub fn hello_world() {
		println!("Hi");
	}
	fn you_cant_call_me() {
		println!("Seeecrets");
	}
}
}

We've seen this pattern before: a keyword to set the context, then a name, then the content/definition. It repeats all over the language.

The module can be used in the scope in which it is defined without any extra work. We must prefix functions in a module with pub to specify that they are public. Functions in a module have access to the module's private functions because they're all in the same module scope. Calling a module's function from outside requires you to write the module's name followed by two colons and the function name.

mod my_module {  
	pub fn hello_world() {
		println!("Hi");
	}
	fn you_cant_call_me() {
		println!("Seeecrets");
	}
}

fn main() {  
    // ✅ We can call this public function
    my_module::hello_world();
    
    // ❌ We can't call this private function
    my_module::you_cant_call_me();
}

Module files

Modules can be moved to their own files as well.

  1. Create a my_module.rs file in the ./src folder of your application
#![allow(unused)]
fn main() {
pub fn hello_world() {
	println!("Hi");
}
fn you_cant_call_me() {
	println!("Seeecrets");
}
}
  2. Bring it into scope in your ./src/main.rs file with mod my_module which will automatically hook up the file my_module.rs
mod my_module;

fn main() {  
    my_module::hello_world();
}

Now, while you can do this, it's not ideal.

lib.rs

The preferred organization is to create a lib.rs file, which is the entry point to your crate's functionality. It is called lib because it is a library of functionality and isn't intended to be directly executed. We'll deal with the details of this later. For this lesson we're going to create the module in the same main.rs file as your example application.
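Just to give a rough sense of the shape that organization takes (this is only a sketch; the crate name sandwich_shop is hypothetical and the details vary by project):

// src/lib.rs: the library half of the crate; it declares and exposes modules
pub mod component_custom_event;

// src/main.rs: the thin executable half; it uses the library through the crate name
use sandwich_shop::component_custom_event;

fn main() {
	// ...call into the library here
}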

The refactor

We started off with the following from a previous lesson:

use leptos::*;  
use crate::wasm_bindgen::JsValue;  
  
fn main() {  
    mount_to_body(|cx| {  
        view! {  
            cx,  
            <SandwichShopApp />  
        }  
    })  
}  
  
#[component]  
fn SandwichShopApp(cx: Scope) -> Element {  
    let log_order = |event : web_sys::Event| {  
        leptos::log!("Our custom event happened");  
  
        let bread_type = event  
            .unchecked_into::<web_sys::CustomEvent>()  
            .detail()  
            .as_string()  
            .unwrap_or(String::new());  
  
        leptos::log!("{:?}", bread_type );  
    };  
    view! {  
        cx,  
        <BltSandwich on:orderSandwich=log_order />  
    }  
}  
  
#[component]  
fn BltSandwich(cx: Scope) -> Element {  
    let dom_node_ref = NodeRef::new(cx);  
  
    let trigger_order_sandwich_event = move |event| {  
  
        let mut event_config = web_sys::CustomEventInit::new();  
        event_config.bubbles(true);  
        let bread_type = JsValue::from("Canadian Rye");  
        event_config.detail( &bread_type );  
        
        let event = web_sys::CustomEvent::new_with_event_init_dict(  
            "orderSandwich",  
            &event_config  
        );  
  
        match event {  
            Ok(event) => {  
                match dom_node_ref.get() {  
                    None => {}  
                    Some(dom_element) => {  
                        match dom_element.dispatch_event(&event) {  
                            Ok(_) => { leptos::log!("Custom event sent") },  
                            Err(_) => { leptos::log!("Failed to send") }  
                        }  
                    }  
                }  
            }  
            Err(_) => {}  
        }  
        
    };  
    view! {  
        cx,  
        <div _ref=dom_node_ref>  
           <h3>"BLT Sandwich"</h3>  
            <button on:click=trigger_order_sandwich_event>  
                "Order Sandwich"  
            </button>  
        </div>  
    }  
}

We'll start by making a module.

#![allow(unused)]
fn main() {
mod component_custom_event {
	
}
}

For the time being we'll remove the complexity of dealing with the event data, omitting the following lines:

#![allow(unused)]
fn main() {
let bread_type = JsValue::from("Canadian Rye");  
event_config.detail( &bread_type );  
}

Now it's time to start moving things into the module and generalizing them or making them configurable.

I could see myself annotating the following code with a comment like //first create a custom event.

#![allow(unused)]
fn main() {
	let mut event_config = web_sys::CustomEventInit::new();  
	event_config.bubbles(true);  
	
	let event = web_sys::CustomEvent::new_with_event_init_dict(  
		"orderSandwich",  
		&event_config  
	);  
  
}

This immediately tells me that there's a name I can give to these lines that summarizes them. We're creating a new custom event. We do need to give this new event a name, which instantly makes me think, "Name is a parameter!" We also know that new_with_event_init_dict() returns a Result type, which we handled before with our match statement.

Let's start by stubbing out the definition:

#![allow(unused)]
fn main() {
mod component_custom_event {
	fn new(name: &str) -> Result<web_sys::CustomEvent, JsValue> {  
		// do stuff
	}
}
}

If this worked, we could call component_custom_event::new("orderSandwich") and get back what we expect, letting us continue in our event handler.

A module is a separate scope. It acts in a similar way to main.rs, which has its own scope. Rust is very good at being congruent like that. web_sys and JsValue aren't defined in the module. To fix this we'll add some use statements.

#![allow(unused)]
fn main() {
use leptos::*;  //web_sys is imported as part of leptos's prelude
use leptos::wasm_bindgen::JsValue;
}

Let's copy the code block in as the body:

#![allow(unused)]
fn main() {
mod component_custom_event {
	use leptos::*;  
	use leptos::wasm_bindgen::JsValue;

	fn new(name: &str) -> Result<web_sys::CustomEvent, JsValue> {  
	    let mut event_config = web_sys::CustomEventInit::new();  
	    event_config.bubbles(true);  
	    let event = web_sys::CustomEvent::new_with_event_init_dict(  
			"orderSandwich",  
			&event_config  
		);  
	}
}
}

And we need to hook up our parameter so that the argument passed in is used in the event's configuration. To do this we replace the literal "orderSandwich" with name. Now the value of the name parameter will be used as the event's name, passed as the first argument to new_with_event_init_dict().

#![allow(unused)]
fn main() {
mod component_custom_event {
	
	fn new(name: &str) -> Result<web_sys::CustomEvent, JsValue> {  
	    let mut event_config = web_sys::CustomEventInit::new();  
	    event_config.bubbles(true);  
	    let event = web_sys::CustomEvent::new_with_event_init_dict(  
			name, 
			&event_config  
		);  
	}
}
}

But we're not quite done here. We have an assignment for the last expression with let event =. And the last expression has a semicolon ; at the end. This would result in the new function returning a unit type, written as (). If we want to return the event we could write event at the end, without a semicolon, so that it would be the "last word" in the function. Recall that Rust is expression based and the last open expression is used as the return of functions and scope blocks (unless you write return and provide it something to explicitly return). Let's remove the assignment and semicolon, and we're done with this one.

#![allow(unused)]
fn main() {
mod component_custom_event {
	
	fn new(name: &str) -> Result<web_sys::CustomEvent, JsValue> {  
		// configuration
	    let mut event_config = web_sys::CustomEventInit::new();  
	    event_config.bubbles(true);  
		// generation
	    web_sys::CustomEvent::new_with_event_init_dict(  
			name, 
			&event_config  
		)
	}
}
}

You might be tempted to try to do some form of chaining or nesting to make this even smaller. But CustomEventInit::new() returns a value that we need to mutate. It's the most clear to separate out the configuration stage from the custom event generation stage.

So now, my custom event dispatcher/handler looks like this:

#![allow(unused)]
fn main() {
let trigger_order_sandwich_event = move |_| {  
		
		let event = component_custom_event::new("orderSandwich");
  
        match event {  
            Ok(event) => {  
                match dom_node_ref.get() {  
                    None => {}  
                    Some(dom_element) => {  
                        match dom_element.dispatch_event(&event) {  
                            Ok(_) => { leptos::log!("Custom event sent") },  
                            Err(_) => { leptos::log!("Failed to send") }  
                        }  
                    }  
                }  
            }  
            Err(_) => {}  
        }  
        
    };  
}

I'm looking at this, and that whole match event block really looks like it summarizes as "send event". In fact, it feels like that's what I'm doing with this whole thing. I'm just dispatching a custom event on a specific node, through its reference.

Maybe what I'm looking for is something like this:

#![allow(unused)]
fn main() {
let trigger_order_sandwich_event = move |_| {  
	component_custom_event::dispatch("orderSandwich", dom_node_ref);
}
}

Yeah, that's starting to look good! That says what I want to happen.

Now let's write the how. We'll start by defining a new function in the module. The name is going to pass right through, and we'll accept a NodeRef as a parameter. We know this because NodeRef::new(cx) returns a NodeRef type. If you got this wrong, Rust would actually inform you: "Oh, you tried to use a NodeRef where your dispatch method was expecting a (whatever type you used)." Our return type will be the same return type as EventTarget.dispatch_event(). We'll try not to deviate from the interface used in the standard methods. This will help us use these interchangeably in the future.

#![allow(unused)]
fn main() {
// in mod component_custom_event {
pub fn dispatch( name: &str, target_ref : NodeRef) -> Result<bool, JsValue>{

}
}

Now we need to create our new event from the name, and we need to send the event with our node ref. Our node ref needs to be converted into a target because we can only call dispatch_event methods on EventTarget type values.

And so we create a new event:

#![allow(unused)]
fn main() {
let event = new(name);  
}

We can just write new because new is defined in the local module scope! This is a great example of why modules are so convenient. They're like structs that have no data and only have class methods.

And we create our target from the reference:

#![allow(unused)]
fn main() {
let target = target_ref.get();  
}

Now, here's a really cool part. event is a Result and target is an Option. We know this because we defined the return type for the new() function, and we can look up the return type of NodeRef.get(). We only want to dispatch the event if our event is valid and we have a target to send it on. Rust allows you to create tuples (groups of values whose types are known at specific positions) which we can use in match statements. They're like super powered pattern matching if statements.

We can create a match for a tuple with event and target to do something if both are Ok() and Some() respectively!

Take a look at this.

#![allow(unused)]
fn main() {
match (event, target) {  
	// We are matching on the Result and Option enums
	// and we're destructuring, all in one step!
	( Ok(event), Some(target) ) => target.dispatch_event(&event),  
	// The underscore indicates any other option that didn't match.
	// You can think of it as any other possible value that is a 
	// valid value within the type (recall that types just define 
	// the bounds of valid values)
	(_,_) => Err(JsValue::null())  
}  
}

Our whole dispatch function is finished!

#![allow(unused)]
fn main() {
pub fn dispatch( name: &str, target_ref : NodeRef) -> Result<bool, JsValue>{  
  
    let event = new(name);  
    let target = target_ref.get();  
  
    match (event, target) {  
        ( Ok(event), Some(target) ) => target.dispatch_event(&event),
        (_,_) => Err(JsValue::null())  
    }  
}
}

A new thing to note here is that match is the last expression in the dispatch function, so its result is used as the return value. In our match arms we don't include semicolons, because we want those arms to become the evaluated value of the match, which in turn becomes the evaluated value of dispatch. It sounds complicated at first, but if you take your time reading it carefully it'll click, and the beauty of this will shine through.

There are also a few interesting Rust syntax things here that might have you scratching your head.

  1. We use event in match (event, target), but then we also use event in the match arm's destructuring pattern ( Ok(event), Some(target) ), and we use &event in the match arm's body. We can do this because we're actually rebinding event to a different value as we go along. This is called variable shadowing. We use the outer event to evaluate the match arms; when evaluating the body of an arm, Rust destructures and assigns the values stored in Ok() and Some() to the names event and target respectively.
  2. We've removed the curly braces from the match arms. Rust allows us to drop the braces for a match arm when the arm's body is a single expression. It just helps keep visual clutter down. (There's a small standalone illustration of both points right after this list.)
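Here's that standalone illustration of points 1 and 2 (the values are made up):

fn main() {
	// `result` starts life as a whole Result value
	let result: Result<i32, String> = Ok(42);

	match result {
		// Inside this arm, the name `result` is rebound (shadowed) to the
		// i32 stored inside Ok(...), so we can use the inner value directly.
		// No curly braces are needed because the arm's body is a single expression.
		Ok(result) => println!("The inner value is {}", result),
		Err(message) => println!("Something went wrong: {}", message),
	}
}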

And like that, we're done.

If we look at our whole component, it's very easy to see the behaviour. We're able to focus on what is happening and not how it's happening. This is the power of declarative code. It allows our mind to think at one level of detail.

use leptos::*;  
  
mod component_custom_event {  
    use leptos::web_sys;  
    use leptos::wasm_bindgen::JsValue;  
    use leptos::NodeRef;  
  
    fn new(name: &str) -> Result<web_sys::CustomEvent, JsValue> {  
        // configuration  
        let mut event_config = web_sys::CustomEventInit::new();  
        event_config.bubbles(true);  
        // generation  
        web_sys::CustomEvent::new_with_event_init_dict(  
            name,  
            &event_config,  
        )  
    }  
  
    pub fn dispatch( name: &str, target_ref : NodeRef) -> Result<bool, JsValue>{  
  
        let event = new(name);  
        let target = target_ref.get();  
  
        match (event, target) {  
            (Ok(event), Some(target)) => {  
                target.dispatch_event(&event)  
            },  
            (_,_) => Err(JsValue::null())  
        }  
    }  
}  
  
fn main() {  
    mount_to_body(|cx| {  
        view! {  
            cx,  
            <SandwichShopApp />  
        }  
    })  
}  
  
#[component]  
fn SandwichShopApp(cx: Scope) -> Element {  
    let log_order = |event: web_sys::Event| {  
        leptos::log!("Our custom event happened");  
    };  
    view! {  
        cx,  
        <BltSandwich on:orderSandwich=log_order />  
    }  
}  
  
#[component]  
fn BltSandwich(cx: Scope) -> Element {  
    let dom_node_ref = NodeRef::new(cx);  
  
    let trigger_order_sandwich_event = move |event| {  
        component_custom_event::dispatch("orderSandwich", dom_node_ref);  
    };  
    view! {  
        cx,  
        <div _ref=dom_node_ref>  
           <h3>"BLT Sandwich"</h3>  
            <button on:click=trigger_order_sandwich_event>  
                "Order Sandwich"  
            </button>  
        </div>  
    }  
}

Custom event module with data

What we know

  • Data can be associated with custom events
  • We can hide the complexity of creating custom events behind an easy to use function inside a module.

What we'll learn

  • How to add data to our module.

The lesson

In a previous lesson we created a module that made dispatching a custom event super easy. We discussed adding data to events before and how useful it is, but we can't yet add data to be dispatched with the events in our custom event module. We should fix that!

Our simple module looks like this:

#![allow(unused)]
fn main() {
mod component_custom_event {  
    use leptos::web_sys;  
    use leptos::wasm_bindgen::JsValue;  
    use leptos::NodeRef;  
  
    fn new(name: &str) -> Result<web_sys::CustomEvent, JsValue> {  
        // configuration  
        let mut event_config = web_sys::CustomEventInit::new();  
        event_config.bubbles(true);  
        // generation  
        web_sys::CustomEvent::new_with_event_init_dict(  
            name,  
            &event_config,  
        )  
    }  
  
    pub fn dispatch( name: &str, target_ref : NodeRef) 
    -> Result<bool, JsValue>{  
  
        let event = new(name);  
        let target = target_ref.get();  
  
        match (event, target) {  
            (Ok(event), Some(target)) => {  
                target.dispatch_event(&event)  
            },  
            (_,_) => Err(JsValue::null())  
        }  
    }  
}  
}

Adding JsValue data to the event

I have an example from a previous lesson where I added a payload to an event through the detail method on the event configuration. Let's add that back in.

The first step is adding a new parameter to the new function which allows us to get a JsValue in there.

We'll accept an Option, because sometimes we might not have a payload. Alternatively, we could create a separate function called new_with_payload, but this is fine.

#![allow(unused)]
fn main() {
fn new(name: &str, payload: Option<JsValue>) -> Result<web_sys::CustomEvent, JsValue> 
}

Then we'll conditionally add it to the event_config if we're given Some(data) for the payload.

#![allow(unused)]
fn main() {
if let Some(data) = payload {  
	event_config.detail(&data);  
}  
}
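If you haven't run into if let before, it's shorthand for a match where we only care about one variant. A small standalone illustration (the values are made up):

fn main() {
	let maybe_topping: Option<&str> = Some("mayo");

	// The block runs only when the Option is Some, binding the inner value.
	// When it's None, we simply do nothing; no other arm is required.
	if let Some(topping) = maybe_topping {
		println!("Add {}", topping);
	}
}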

All finished, we have this:

#![allow(unused)]
fn main() {
fn new(name: &str, payload: Option<JsValue>) -> Result<web_sys::CustomEvent, JsValue> {  
    let mut event_config = web_sys::CustomEventInit::new();  
    event_config.bubbles(true);  
    if let Some(data) = payload {  
        event_config.detail(&data);  
    }  
    web_sys::CustomEvent::new_with_event_init_dict(name, &event_config)  
}
}

Simplified sending methods (API)

There is no way to specify a default parameter value in Rust, and we want our API to be simple and declarative. We don't want to force people to always provide None when they're not including a payload. To fix this we can make a private function called real_dispatch, and then two public functions, dispatch and dispatch_with_data, to dispatch an event without and with a payload respectively.

#![allow(unused)]
fn main() {
fn real_dispatch( 
	name: &str, 
	target_ref : NodeRef, 
	payload: Option<JsValue>
	) -> Result<bool, JsValue>{  
  
    let event = new(name, payload);  
    let target = target_ref.get();  
  
    match (event, target) {  
        (Ok(event), Some(target)) => {  
            target.dispatch_event(&event)  
        },
        (_,_) => Err(JsValue::null())  
    }
}

pub fn dispatch( 
	name: &str, 
	target_ref : NodeRef
	) -> Result<bool, JsValue>{  
    real_dispatch( name, target_ref, None)  
}

pub fn dispatch_with_data( 
	name: &str, 
	target_ref : NodeRef, 
	data: JsValue
	) -> Result<bool, JsValue>{  
    real_dispatch( name, target_ref, Some(data))  
}
    
}

While we're at it, let's add a function to grab the value a bit more easily too:

#![allow(unused)]
fn main() {
pub fn extract_data( event: web_sys::Event) -> JsValue {  
    event
	    .unchecked_into::<web_sys::CustomEvent>()
	    .detail()  
}
}

We've seen this in a prior lesson. We're just packaging it as part of the module here.

Structured Data

Here's where things get interesting. We probably don't want to send just a single value. We might want to send a few values. If we went back to our BLT example, maybe we want to send a struct of the whole BLT Sandwich config.

To hand structured data to JavaScript we need a way to convert it into a representation that can faithfully carry it across the boundary. The process of producing this representation is called serialization. Converting data from a serialized representation back into its typed and structured form is called deserialization.

Currently what we have will allow us to send structured data, but it's on the application developer to serialize the data and convert it into a JsValue for use with dispatch_with_data(). I think it would be convenient to do this for them so that they don't have to think about serialization.

Let's start by adding a new function. I don't know what the type of data will be so I'm writing UNKNOWN for the sake of this example. This is not a Rust thing. It's just for you, the reader, to help you follow my thought process.

#![allow(unused)]
fn main() {
pub fn dispatch_with_data_serialized(  
    name: &str,  
    target_ref : NodeRef,  
    data: UNKNOWN
    ) -> Result<bool, JsValue>{  
	// ..
}
}

I did some searching and found a great crate called serde, which received its name from ser-ialize and de-serialize. It turns out that there is a version of serde specifically designed to work with WASM, which is supposedly more efficient than converting structured data into JSON (JavaScript Object Notation), and it gives us a JsValue! How great is that!

I've added the dependency to Cargo.toml as such:

serde-wasm-bindgen = "0.4"

And now I can author the body of the function, which is actually relatively simple:

#![allow(unused)]
fn main() {
match serde_wasm_bindgen::to_value(data) {  
	Ok(data) => dispatch_with_data( name, target_ref, data),
	Err(_) => Err( JsValue::null() )  
}
}

We're matching on the result of converting a reference to our data into a JsValue. If it's Ok, we destructure it and return the result of dispatch_with_data; otherwise we return a null JsValue as an error.

Again, recall that we're keeping the return types the same as EventTarget.dispatch_event().

We're not quite done though. We don't know what type to put for the data. Rust requires that we specify the type so that it can verify that we're calling appropriate methods on it and correctly managing memory for it.

To do this we'll revisit generics, which are those type arguments I talked about before. They're like parameters/variables, but for types. People often use 'T' as the name for a generic 'Type', but you can actually use anything you want that isn't a reserved word. I'm going to use Data. Note that data is the parameter name and Data is the generic type name.
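
As a quick refresher, here's a hypothetical generic function (the names are made up for this example) showing that the type parameter is filled in by whatever the caller passes:

// `Item` is a generic type parameter; any name that isn't a reserved word will do.
fn first_of<Item>(pair: (Item, Item)) -> Item {
    pair.0
}

fn main() {
    let a = first_of((1, 2));             // Item is inferred as i32
    let b = first_of(("bread", "bacon")); // Item is inferred as &str
    println!("{} {}", a, b);
}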

#![allow(unused)]
fn main() {
pub fn dispatch_with_data_serialized<Data>(  
    name: &str,  
    target_ref : NodeRef,  
    data: &Data
    ) -> Result<bool, JsValue>{  
    //....
}
}

Here we're saying: a generic type called Data will be used, and the parameter data will be a reference to whatever type Data is. We're telling Rust that this type can change from call to call.

As is, Rust will complain because we're using the value of data as an argument for serde_wasm_bindgen::to_value(). Rust wants to confirm that whatever is being stored in data, accepted through the function call, can be safely passed to that serde_wasm_bindgen::to_value() function, meeting its type requirements.

Let's look at the definition of serde_wasm_bindgen::to_value() for a clue. It reads as:

#![allow(unused)]
fn main() {
pub fn to_value<T: serde::ser::Serialize + ?Sized>(value: &T) -> Result<JsValue>
}

Translation: "Whatever you pass as the value of value must be a reference to T. T is any value whoes type implements the the serde::ser::Serialize trait and isSized.

That's it. Our Data needs to fulfill the same type requirements: serde::ser::Serialize + ?Sized. The colon after 'T' introduces the bounds on 'T', and these bounds are traits. A trait is a name that refers to a specification of behaviour/capabilities. If you've ever written object oriented code, traits are similar to interfaces.
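
Here's a small, self-contained sketch of a trait bound with ?Sized, using std::fmt::Display instead of Serialize so it can run on its own:

use std::fmt::Display;

// `Display + ?Sized` reads as: any type that implements Display, and it does not
// have to have a size known at compile time (so unsized types like `str` are fine
// when passed behind a reference).
fn describe<T: Display + ?Sized>(value: &T) {
    println!("{}", value);
}

fn main() {
    describe("a string slice"); // T = str, an unsized type
    describe(&42);              // T = i32, a sized type
}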

How do we serialize data?

#![allow(unused)]
fn main() {
pub fn dispatch_with_data_serialized<Data: Serialize + ?Sized>(  
    name: &str,  
    target_ref : NodeRef,  
    data: &Data
    ) -> Result<bool, JsValue>{  

	match serde_wasm_bindgen::to_value(data) {  
        Ok(data) => dispatch_with_data( name, target_ref, data),
        Err(_) => Err( JsValue::null() )  
	}
}
}

Serde is included with Leptos and adds support for a bunch of types out of the box. We can also add serialization support for our own types with a macro. Writing #[derive(Serialize, Deserialize)] above a struct tells Rust to generate the functionality that enables these features for you. You do, however, need to import the Serialize and Deserialize traits with the following use statement:

#![allow(unused)]
fn main() {
use serde::{Serialize, Deserialize};
}

The curly braces in the use statement are a nested path: they bring both Serialize and Deserialize into scope from the serde crate (an external module) in a single statement.

#![allow(unused)]
fn main() {
#[derive(Serialize, Deserialize, Debug)]  
struct BLTSandwich {  
    bread: String,  
    lettuce: String,  
    tomato: String,  
    bacon: String,  
}
}

We're also adding Debug so that we can print this struct later in the lesson with the log macro.

Now let's take a look at our BLT Sandwich component:

#![allow(unused)]
fn main() {
#[component]  
fn BltSandwich(cx: Scope) -> Element {  
    let dom_node_ref = NodeRef::new(cx);  
  
    let trigger_order_sandwich_event = move |event| {  
        component_custom_event::dispatch_with_data_serialized(  
            "orderSandwich",  
            dom_node_ref,  
            &BLTSandwich {  
                bread: "canadian_rye".to_string(),  
                lettuce: "romaine".to_string(),  
                tomato: "black_krim".to_string(),  
                bacon: "farm_smoked_apple_bacon".to_string(),  
            }        
        );  
    };  
    view! {  
        cx,  
        <div _ref=dom_node_ref>  
           <h3>"BLT Sandwich"</h3>  
            <button on:click=trigger_order_sandwich_event>  
                "Order Sandwich"  
            </button>  
        </div>  
    }
}
}

It looks like there's one last missing piece of the puzzle. We need a function that will turn our serialized data back into our struct. We'll use a similar tactic to find the type requirements. The generic Data will end up in the return type as a type argument for Option. This means the function may return Some(Data), where Data must implement the serde::de::DeserializeOwned trait, or None. (Because the deserialized value is returned by value here, Data also has to be Sized, so we don't add ?Sized this time.)

Then we call from_value(), match it to handle a potential error and return our option types as the result of the match arm expressions.

#![allow(unused)]
fn main() {
pub fn extract_serialized_data<Data: serde::de::DeserializeOwned>(event: web_sys::Event) -> Option<Data> {  
    match serde_wasm_bindgen::from_value(extract_data(event)) {  
        Ok(data) => Some(data),  
        Err(_) => None  
    }  
}
}

When using this function we need to provide it the type for Data which we can do with our type argument syntax, for example:

#![allow(unused)]
fn main() {
component_custom_event::extract_serialized_data::<BLTSandwich>(event)
}

The ::<> is called the turbofish, and it's used to supply a concrete type as an argument for a generic.
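
A couple of everyday turbofish sightings, as a sketch:

fn main() {
    // Without context, the compiler can't tell what to parse into or collect into,
    // so we supply the type with the turbofish.
    let n = "42".parse::<i32>().unwrap();
    let squares = (1..=3).map(|x| x * x).collect::<Vec<i32>>();
    println!("{} {:?}", n, squares);
}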

Wrapping it up

Here we have a working example of the whole thing!

use leptos::*;  
use serde::{Serialize, Deserialize};  
  
mod component_custom_event {  
    use leptos::*;  
    use crate::wasm_bindgen::JsValue;  
  
    fn new(name: &str, payload: Option<JsValue>) -> Result<web_sys::CustomEvent, JsValue> {  
        let mut event_config = web_sys::CustomEventInit::new();  
        event_config.bubbles(true);  
        if let Some(data) = payload {  
            event_config.detail(&data);  
        }  
        web_sys::CustomEvent::new_with_event_init_dict(name, &event_config)  
    }  
    fn real_dispatch(name: &str, target_ref: NodeRef, payload: Option<JsValue>) -> Result<bool, JsValue> {  
        let event = new(name, payload);  
        let target = target_ref.get();  
  
        match (event, target) {  
            (Ok(event), Some(target)) => target.dispatch_event(&event),  
            (_, _) => Err(JsValue::null())
        }
    }
    pub fn dispatch(name: &str, target_ref: NodeRef) -> Result<bool, JsValue> {  
        real_dispatch(name, target_ref, None)  
    }  
    pub fn dispatch_with_data(name: &str, target_ref: NodeRef, data: JsValue) -> Result<bool, JsValue> {  
        real_dispatch(name, target_ref, Some(data))  
    }  
    pub fn dispatch_with_data_serialized<T: serde::ser::Serialize + ?Sized>(  
        name: &str,  
        target_ref: NodeRef,  
        data: &T) -> Result<bool, JsValue> {  
        match serde_wasm_bindgen::to_value(data) {  
            Ok(data) => dispatch_with_data(name, target_ref, data),  
            Err(_) => Err(JsValue::null())
        }
    }
    pub fn extract_data(event: web_sys::Event) -> JsValue {  
        let custom_event = event.unchecked_into::<web_sys::CustomEvent>();  
        custom_event.detail()  
    }  
    pub fn extract_serialized_data<Data: serde::de::DeserializeOwned>(event: web_sys::Event) -> Option<Data> {  
        match serde_wasm_bindgen::from_value(extract_data(event)) {  
            Ok(data) => Some(data),  
            Err(_) => None  
        }  
    }
}
  
#[derive(Serialize, Deserialize, Debug)]  
struct BLTSandwich {  
    bread: String,  
    lettuce: String,  
    tomato: String,  
    bacon: String,  
}  
  
fn main() {  
    mount_to_body(|cx| {  
        view! {  
            cx,  
            <SandwichShopApp />  
        }
    })
}
  
#[component]  
fn SandwichShopApp(cx: Scope) -> Element {  
    let log_order = |event: web_sys::Event| {  
        leptos::log!("Our custom event happened");  
        leptos::log!( "{:?}", component_custom_event::extract_serialized_data::<BLTSandwich>(event));  
    };  
    view! {  
        cx,  
        <BltSandwich on:orderSandwich=log_order />  
    }
}
  
#[component]  
fn BltSandwich(cx: Scope) -> Element {  
    let dom_node_ref = NodeRef::new(cx);  
  
    let trigger_order_sandwich_event = move |event| {  
        component_custom_event::dispatch_with_data_serialized(  
            "orderSandwich",  
            dom_node_ref,  
            &BLTSandwich {  
                bread: "canadian_rye".to_string(),  
                lettuce: "romaine".to_string(),  
                tomato: "black_krim".to_string(),  
                bacon: "farm_smoked_apple_bacon".to_string(),  
            },
        );
    };  
    view! {  
        cx,  
        <div _ref=dom_node_ref>  
           <h3>"BLT Sandwich"</h3>  
            <button on:click=trigger_order_sandwich_event>  
                "Order Sandwich"  
            </button>  
        </div>  
    }
}

Custom Event Data with Signals

What we know

  • Events allow us to signal changes on the client side (in the browser)
  • Attaching handlers (listeners) to events in the DOM has a non-trivial performance cost
  • Custom events can carry data with them in the form of details
  • Serializing and deserializing data to transport it between JavaScript and WASM has a non-trivial performance cost
  • Events can bubble up from the target (the dispatching node) to be handled by parent/ancestral DOM nodes' handlers
  • Event handlers can be passed down as properties to Leptos components

What we'll learn

  • How we can listen to signal value changes and respond with actions using effects

The lesson

Why bother?

It's possible to use events that exist inside your Rust application instead of relying heavily on the browser's event system. There are some major benefits that you receive by doing this.

First, you can use data in your events that isn't serializable. Recall that JavaScript events have details, but the data assigned to them has to be serializable. Functions and a few other data types cannot be serialized. Handling events in Rust solves this problem.

Secondly, serializing and deserializing data is costly. If we're handling an event in Rust, serializing the details (data/payload), and then dispatching it only to handle it in Rust a few milliseconds later, we're better off just keeping it all in Rust.

Thirdly, passing data across the WASM boundary isn't very efficient in browsers yet.

Fourthly, I'm sure there are other reasons.

Our objectives

We'll use a sandwich shop as our example for this lesson. We'll aim to create a simple application that accepts an order which the system/application can choose to react to. Think of it like a server taking the order and relaying it to the kitchen.

Boilerplate

Let's start off with some basic components for our sandwich shop.

use leptos::*;  
  
fn main() {  
    mount_to_body(|cx| {  
        view! {  
            cx,  
            <SandwichShop />  
        }    
	})
}  
  
#[component]  
fn SandwichShop(cx: Scope) -> Element {  
  
    view! {  
        cx,  
        <div>  
            <Sandwich/>  
        </div>  
    }
}  
  
#[component]  
fn Sandwich(cx: Scope) -> Element{  
    view! {  
        cx,  
        <div>  
           <button>  
                "Order Sandwich"  
            </button>  
        </div>  
    }
}

We'll need something to capture the initial client side event. To do this we can add an event handler for the click event.

#![allow(unused)]
fn main() {
#[component]  
fn Sandwich(cx: Scope) -> Element{  
    let place_order = |_|{  
        leptos::log!("Place order");  
    };  
    view! {  
        cx,  
        <div>  
           <button on:click=place_order>  
                "Order Sandwich"  
            </button>  
        </div>  
    }
}
}

We're just echoing "Place order" to the browser console on click to make sure the event handler is wired up correctly.

Shared data

Here's where things get interesting. We need some sort of shared space where we can write down that an order came in. We'll then make sure that a bell gets rung to say "order up" so the kitchen staff check the order.

We'll use Leptos' reactive system to create that shared bit of data. Using signals we can read and write to the space with orders as the buttons are clicked.

We're going to simplify and ignore some edge cases and assume that an order can be fulfilled the second it comes in. This lesson is more about message orchestration than anything else.

With that in mind, if we create a signal, the setter could be called "new order" because it adds an order to the shared/observed space. We can call the getter "last order" because the value of the shared space will always be the most recently placed order.

#![allow(unused)]
fn main() {
let (  
    last_order,  
    new_order  
) = create_signal(cx, None);
}

You'll note that I wrote None here because we're going to use an Option type for the order. In fact, it'll be Option<Sandwich>.

#![allow(unused)]
fn main() {
enum Sandwich{  
    BLT
}
}

If we tried to compile our application, Rust would complain. Rust can't infer the full type of the signal's value: None on its own doesn't say what type the Some variant would hold. We can spell it out on the None with our handy turbofish syntax.

#![allow(unused)]
fn main() {
let (  
    last_order,  
    new_order  
) = create_signal(cx, None::<Sandwich>);
}

Now we want to pass the new_order write signal to our Sandwich Leptos component. It's going to use this to place orders when the respective button is clicked.

#![allow(unused)]
fn main() {
#[component]  
fn SandwichShop(cx: Scope) -> Element {  
  
    let (  
        last_order,  
        new_order  
    ) = create_signal(cx, None::<Sandwich>);  
  
    view! {  
        cx,  
        <div>  
            <Sandwich new_order=new_order />  
        </div>  
    }
}
}

And we'll add the property to the Sandwich component's function definition so that it can accept the write signal.

#![allow(unused)]
fn main() {
fn Sandwich(
	cx: Scope, 
	new_order: WriteSignal<Option<Sandwich>> 
) -> Element{
	// ... 
}
}

Note that new_order is of type WriteSignal which has a type argument of Option<Sandwich>

This looks a bit odd because the value and the property have the same name on the Sandwich component. Leptos allows you to write the name just once when they're the same. We can write:

#![allow(unused)]
fn main() {
<Sandwich new_order /> 
}

Now we need to put that WriteSignal to use.

#![allow(unused)]
fn main() {
#[component]  
fn Sandwich(
	cx: Scope, 
	new_order: WriteSignal<Option<Sandwich>> 
) -> Element{  

	let place_order = move |_| {  
	    leptos::log!("Place order");  
	    new_order.set( Some(Sandwich::BLT) );  
	};
    
    view! {  
        cx,  
        <div>  
           <button on:click=place_order>  
                "Order Sandwich"  
            </button>  
        </div>  
    }
}
}

The new_order write signal enters the Sandwich component function and is moved into the place_order closure. This closure is run every time the click event is dispatched on the button. By doing so, it updates that shared order space with a sandwich!

Effects

Leptos has all sorts of tricks up its sleeve. One of them is create_effect. You can think of create_effect as an on-change handler for Leptos's reactive system. It accepts a scope and a closure (callback) as its two arguments. Any signal read inside the closure is observed by the effect, and the closure will re-run whenever one of those signals' values changes.
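
Here's a minimal sketch of that idea on its own, written against the same Scope-based API this book uses and assuming a Scope cx is in hand:

#![allow(unused)]
fn main() {
let (count, set_count) = create_signal(cx, 0);

// Reading `count` inside the closure is what subscribes the effect to it.
create_effect(cx, move |_| {
    leptos::log!("count is now {}", count.get());
});

// Updating the signal causes the effect's closure to run again.
set_count.set(1);
}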

We can create an effect to observe the last and new order signals as follows.

#![allow(unused)]
fn main() {
#[component]  
fn SandwichShop(cx: Scope) -> Element {  
  
    let (  
        last_order,  
        new_order  
    ) = create_signal(cx, None::<Sandwich>);
	
	create_effect(cx, move |_| {  
	    if let Some(sandwich) = last_order.get() {  
	        leptos::log!("A sandwich was ordered");  
	    }  
	});
	
	//...
}
}

The Rust compiler will complain here because the enum doesn't implement Clone. The statement if let Some(sandwich) = last_order.get() uses the signal's get() method, which returns a value of Option<Sandwich>. It needs to clone the stored data to give you your own copy of it.

To solve this problem we can have Rust derive the Clone trait for the Sandwich enum:

#![allow(unused)]
fn main() {
#[derive(Clone)]
enum Sandwich{  
    BLT
}
}

We also want to be able to print this with debug formatting, so we'll derive the Debug trait too.

#![allow(unused)]
fn main() {
#[derive(Clone, Debug)]
enum Sandwich{  
    BLT
}
}

And just like that, we've got a working system:

use leptos::*;  
  
#[derive(Clone, Debug)]  
enum Sandwich{  
    BLT  
}  
  
fn main() {  
    mount_to_body(|cx| {  
        view! {  
            cx,  
            <SandwichShop />  
        }
	})
}  
  
#[component]  
fn SandwichShop(cx: Scope) -> Element {  
  
    let (  
        last_order,  
        new_order  
    ) = create_signal(cx, None::<Sandwich>);  
  
    create_effect(cx, move |_| {  
        match last_order.get() {  
            Some(sandwich) => leptos::log!(
	            "A sandwich was ordered: {:?}", 
	            sandwich),  
            None => {}  
        }    
	});  
  
    view! {  
        cx,  
        <div>  
            <Sandwich new_order />  
        </div>  
    }
}
  
#[component]  
fn Sandwich(
	cx: Scope, 
	new_order: WriteSignal<Option<Sandwich>> 
) -> Element{  
    let place_order = move |_|{  
        leptos::log!("Place order");  
        new_order.set( Some(Sandwich::BLT) );  
    };  
    view! {  
        cx,  
        <div>  
           <button on:click=place_order>  
                "Order Sandwich"  
            </button>  
        </div>  
    }
}

Adding other sandwiches

If we want to add additional sandwiches, we can create additional enum variants.

#![allow(unused)]
fn main() {
#[derive(Clone, Debug)]  
enum Sandwich{  
    BLT,  
    Rubin,  
    PBandJ  
}
}

We'll add some additional sandwiches to our order menu:

#![allow(unused)]
fn main() {
view! {  
    cx,  
    <div>  
        <Sandwich new_order sandwich=Sandwich::BLT/>  
        <Sandwich new_order sandwich=Sandwich::Rubin/>  
        <Sandwich new_order sandwich=Sandwich::PBandJ/>  
    </div>  
}
}

And we'll add that new property 'sandwich' that we're using to configure the component.

#![allow(unused)]

fn main() {
#[component]  
fn Sandwich(
	cx: Scope, 
	new_order: WriteSignal<Option<Sandwich>>, 
	sandwich: Sandwich 
) -> Element{
	// ...
}
}

And, we'll use the new argument in our on click handler.

#![allow(unused)]
fn main() {
let place_order = move |_|{  
    leptos::log!("Place order");
    new_order.set( Some(sandwich) );  
};
}

The above won't work just yet though. The closure takes ownership of sandwich when we move it in; a closure is really a struct behind the scenes, with fields for the values captured into it. If we then pass sandwich by value into Some(...), the closure gives its captured value away on the first call, but a click handler has to be callable again and again. We solve this by calling clone on the sandwich: the expression inside the parentheses is evaluated first, so each click hands a fresh copy to the signal while the closure keeps its own.

#![allow(unused)]
fn main() {
let place_order = move |_|{  
    leptos::log!("Place order");
    new_order.set( Some(sandwich.clone()) );  
};
}

Adding labels

Let's add some new properties for sandwiches for the labels.

#![allow(unused)]
fn main() {
view! {  
    cx,  
    <div>  
        <Sandwich new_order sandwich=Sandwich::BLT label="Bacon, Lettuce, and Tomato"/>  
        <Sandwich new_order sandwich=Sandwich::Rubin label="Rubin"/>  
        <Sandwich new_order sandwich=Sandwich::PBandJ label="Peanutbutter and Jelly"/>  
    </div>  
}
}

And then we'll add the label to the function's parameters and use it in the view! template:

#![allow(unused)]
fn main() {
#[component]  
fn Sandwich(
	cx: Scope, 
	new_order: WriteSignal<Option<Sandwich>>, 
	sandwich: Sandwich, 
	label: &'static str 
) -> Element{  

	let place_order = move |_|{  
        leptos::log!("Place order");  
        new_order.set( Some(sandwich.clone()) );  
    };  
    
    view! {  
        cx,  
        <div>  
           <button on:click=place_order>  
                "Order " {label}  
            </button>  
        </div>  
    }
    
}
}

It's worth noting that I added the 'static lifetime to the label so that Rust knows the string will live, unchanged, for the entire run of the application. This matters because these component functions are more like setup functions and template builders. They are not render functions.
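
As a small illustration of 'static: a string literal lives in the compiled binary for the whole run of the program, so a &'static str reference to it is always valid.

// `label` must be a reference that is valid for the entire program,
// which a string literal always is.
fn shout(label: &'static str) -> String {
    label.to_uppercase()
}

fn main() {
    println!("{}", shout("Bacon, Lettuce, and Tomato"));
}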

And like that, we have a pretty cool system that allows us to transmit a message up the chain! Pretty neat!

In the next lesson we'll build on this with a more robust pattern.

Custom Event Data with Signals and Effects - Part 2

What we know

  • We can use signals as message busses and effects as message bus watchers to react to changes in our application

What we'll learn

  • How to create a state struct to hold application state and accept events to update the data in an application.

The lesson

In our previous lesson, our code looked like this:

use leptos::*;  
  
#[derive(Clone, Debug)]  
enum Sandwich{  
    BLT,  
    Rubin,  
    PBandJ  
}  
  
fn main() {  
    mount_to_body(|cx| {  
        view! {  
            cx,  
            <SandwichShop />  
        }
    })
}
  
#[component]  
fn SandwichShop(cx: Scope) -> Element {  
  
    let (  
        last_order,  
        new_order  
    ) = create_signal(cx, None::<Sandwich>);  
  
    create_effect(cx, move |_| {  
        match last_order.get() {  
            Some(sandwich) => {  
                leptos::log!("A sandwich was ordered: {:?}", sandwich);  
            },  
            None => {}  
        }
    });
  
    view! {  
        cx,  
        <div>  
            <Sandwich new_order sandwich=Sandwich::BLT label="Bacon, Lettuce, and Tomato"/>  
            <Sandwich new_order sandwich=Sandwich::Rubin label="Rubin"/>  
            <Sandwich new_order sandwich=Sandwich::PBandJ label="Peanutbutter and Jelly"/>  
        </div>  
    }
}
  
#[component]  
fn Sandwich(
	cx: Scope, 
	new_order: WriteSignal<Option<Sandwich>>, 
	sandwich: Sandwich, label: &'static str 
) -> Element{  

	let place_order = move |_|{  
        leptos::log!("Place order");  
        new_order.set( Some(sandwich.clone()) );  
    };  

	view! {  
        cx,  
        <div>  
           <button on:click=place_order>  
                "Order " {label}  
            </button>  
        </div>  
    }
}

There are some problems with this approach. The application component has a lot of functionality rolled into it. It would be preferable to split this out so that the application logic is easier to see apart from its user interface.

First things first, let's split our application state out from the application Leptos component. The state of our application is a snapshot of the application's data.

We'll add a piece of data in the state to hold the last_event, and we'll go so far as to make an enum that stores the possible events as well.

#![allow(unused)]

fn main() {
// Things that can happen in a sandwich shop
#[derive(Debug, Clone, Copy)]
enum Event {
	OrderSandwich(Sandwich),
	None
}

#[derive(Debug, Clone, Copy)] 
struct State { 
	last_event: Event
}
}

We'll need an easy way to set up a default state. We can implement the Default trait for the State struct. Recall that traits are a specification of behaviours/capabilities of a type. We can also use traits as bounds on parameters: writing impl SomeTrait as a parameter's type says "accept any type that implements SomeTrait".

#![allow(unused)]
fn main() {
fn example_trait_requirement( my_parameter: impl SomeTrait) { 
	//...
}
}

Implementing the Default trait on a type looks like this:

#![allow(unused)]
fn main() {
impl Default for State {  
    fn default() -> Self {  
        Self {  
            last_event: Event::None  
        }  
    }
}
}

We now have the ability to call State::default() and we'll receive a State value with last_event set to the Event::None variant. This is a bit more streamlined than dealing with the Option type in our previous example.
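
As an aside, when the defaults are simple, Default can also be derived rather than written by hand; since Rust 1.62 this works for enums too, by marking a unit variant. This is just a sketch for comparison, not the code we'll end up with in this lesson:

#![allow(dead_code)]

// A hypothetical enum: the #[default] attribute marks which variant default() returns.
#[derive(Debug, Clone, Copy, Default)]
enum Event {
    OrderSandwich,
    #[default]
    None,
}

fn main() {
    let e = Event::default();
    println!("{:?}", e); // prints "None"
}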

We also want to be able to update our state. We are going to force updates to go through a State update method that requires an event. This pattern of adding constraints like "you MUST have THIS to do THAT" is how we make stable applications. You'll see it enforced all over the place in Rust.

To add functions associated with the State struct, we write impl (for implement) State (the name of the struct) and define a scope with curly braces to contain our method implementations. We are not implementing a trait here, so we don't need to write impl TraitName for StructName like we did with the Default trait. It's the same idea though.

Our update method will take a mutable reference to itself so that it can update its own data, and it takes an event which dictates how that data will be updated. We then match on the events and handle the updates accordingly. At the end, we'll store the event used for the update in last_event so that we know what happened.

#![allow(unused)]
fn main() {
impl State {  
    fn update(&mut self, event: Event) {  
        match event {  
            Event::OrderSandwich(sandwich) => {  
                leptos::log!("A sandwich was ordered: {:?}", sandwich);  
            },  
            Event::None => {}  
        }        
        self.last_event = event;  
    }  
}
}

As cool as all of this is, we're no further ahead. As developers we have to be careful of things that look like cool patterns but don't add any extra functionality. It's easy to get caught up in what feels satisfying to write because it's clever. Often things that are mentally taxing to write or figure out are the most stimulating. Try to avoid this siren song. Err on the side of simplicity.

We're now going to hook this into our reactive system so that it makes a meaningful change and we'll review the complexity to see if we've simplified our system or made it more complex.

Let's dig in...

We know that when we start our app up, we're going to need to initialize a state. We want the state to handle its own updates, and we do not want the state to be rewritten wholesale. For this reason, we'll create a signal to store a reactive value of type State, but we're only going to keep the read signal.

We need a context/scope to create the signal, and our default values.

#![allow(unused)]
fn main() {
#[component]  
fn SandwichShop(cx: Scope) -> Element {  
    let (state, _) = create_signal(cx, State::default() );
}
}

This feels overly complicated. There's a lot happening here when I really want to just write, "Give me a State struct."

Let's change this to something like...

#![allow(unused)]
fn main() {
let state = State::new(cx);
}

Notice how we're distilling down a previously complicated statement into one that expresses exactly what we want.

Now we need to refactor our state struct to represent this. The first step is, let's just delete the whole impl Default for State block. We're not allowing people to create a default state anymore.

We do need to update our State implementation to add a new method. (We're also renaming the Event::None variant to Event::Init, since it now marks the application's initial state; you'll see Init used in the code below.)

#![allow(unused)]
fn main() {
impl State {  
  
    pub fn new(cx: Scope) -> ReadSignal<State> {  
        let init_state = Self {  
            cx,  
            last_event: Event::Init  
        };  
        let (state, _) = create_signal( cx, init_state );  
        state  
    }
	// ...
}
}

You can see how we took some of the complexity that used to live in our application and pushed it into this method. It makes our SandwichShop Leptos component much simpler and clearer.

We also need to update our struct's properties so that we can store a Scope within the state as well.

#![allow(unused)]
fn main() {
#[derive(Debug, Clone, Copy)]  
struct State {  
    cx: Scope,  
    last_event: Event  
}
}

Let's turn our eyes to this last_event property. It also needs to become a signal so that we can update it and respond reactively. We'll update the struct literal with a create_signal call for last_event's value.

#![allow(unused)]
fn main() {
pub fn new(cx: Scope) -> ReadSignal<State> {  
    let init_state = Self {  
        cx,  
        last_event: create_signal( cx, Event::Init)  
    };  
    let (state, _) = create_signal( cx, init_state );  
    state  
}
}

We also need to update our struct to match this new value type.

#![allow(unused)]
fn main() {
#[derive(Debug, Clone, Copy)]  
struct State {  
    cx: Scope,  
    last_event: (ReadSignal<Event>, WriteSignal<Event>)
}
}

The last piece of this refactor is in the State update method. We were storing the event we were responding to as the value of last_event. The type of this property on the State struct has changed; it's not a plain Event anymore.

#![allow(unused)]
fn main() {
pub fn update(&mut self, event: Event) {  
        match event {  
            Event::OrderSandwich(sandwich) => leptos::log!("A sandwich was ordered: {:?}", sandwich),  
            _ => {}  
        }        
        self.last_event = event;  
    }  
}

We need to change:

#![allow(unused)]
fn main() {
self.last_event = event;  
}

To the following:

#![allow(unused)]
fn main() {
self.last_event.1.set( event );
}

last_event is a tuple with two values, at index 0 and 1. Index 1 contains the write signal, which has a set method. We're using that set method to update the reactive value of the signal.
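
If tuple indexing is new to you, here's a tiny sketch:

fn main() {
    // Tuple fields are accessed by position: .0 is the first value, .1 is the second.
    let signal_pair = ("read half", "write half");
    println!("{}", signal_pair.0);
    println!("{}", signal_pair.1);
}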

This feels unclear to me, so I'll rewrite it as:

#![allow(unused)]
fn main() {
	self.update_last_event( event );
}

And create a private method on State that hides the read/write implementation detail.

#![allow(unused)]
fn main() {
fn update_last_event( &mut self, event: Event ) {  
    self.last_event.1.set(event );  
}
}

You may have noticed that the previous methods had the pub keyword before the fn keyword. Omitting pub from update_last_event keeps it private: it can only be called from code in the same module, which in practice means the State impl itself.
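
Here's a small sketch of that visibility rule, with hypothetical names; "private" really means private to the enclosing module:

mod state {
    pub struct State;

    impl State {
        pub fn update(&mut self) {
            // Fine: we're inside the same module as update_last_event.
            self.update_last_event();
        }

        // No `pub`, so this method is private to the `state` module.
        fn update_last_event(&mut self) {
            println!("updated");
        }
    }
}

fn main() {
    let mut s = state::State;
    s.update();
    // s.update_last_event(); // error: `update_last_event` is private
}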

Updating the effect

Our previous example had an effect that would respond to changes to our application's last order.

#![allow(unused)]
fn main() {
create_effect(cx, move |_| {  
    match last_order.get() {  
        Some(sandwich) => leptos::log!(
	        "A sandwich was ordered: {:?}", 
	        sandwich
			),  
        None => {}  
    }
});
}

We actually don't need to use this anymore because we've got a state value that we can directly update and react to, all in one contained struct.

Updating the sandwich components

We no longer need to pass more complicated handlers on down. We can just pass state.

#![allow(unused)]
fn main() {
#[component]  
fn SandwichShop(cx: Scope) -> Element {  
    let state = State::new(cx);  
    view! {  
        cx,  
        <div>  
            <Sandwich state sandwich=Sandwich::BLT label="Bacon, Lettuce, and Tomato"/>  
            <Sandwich state sandwich=Sandwich::Rubin label="Rubin"/>  
            <Sandwich state sandwich=Sandwich::PBandJ label="Peanutbutter and Jelly"/>  
        </div>  
    }
}
}

Recall that our State::new() gives us a read signal so that we can easily pass it around our system. We need to update our Sandwich components to match with a new property type for state:

#![allow(unused)]
fn main() {
	state: ReadSignal<State>
}

And we'll update the place_order closure so that it calls the update method on the actual state value.

#![allow(unused)]
fn main() {
#[component]  
fn Sandwich(
	cx: Scope, 
	state: ReadSignal<State>, // <- here
	sandwich: Sandwich, 
	label: &'static str 
) -> Element{

	let place_order = move |_|{  
	    state.get().update(Event::OrderSandwich(sandwich))  
	};
	//...
}
}

What remains

When I set out to do this refactor I was thinking that we'd need last_event as a signal to respond to, so that we could build reactivity off of it with create_effect(). The reality is that in this example we don't actually need that. :)

My hope is that this lesson gives you some insight into the thought process of refactoring and adding constraint to changes.

Here's what the finished code looks like:

use leptos::*;  
  
#[derive(Debug, Clone, Copy)]  
enum Sandwich{  
    BLT,  
    Rubin,  
    PBandJ  
}  
  
#[derive(Debug, Clone, Copy)]  
enum Event {  
    OrderSandwich(Sandwich),  
    Init  
}  
  
#[derive(Debug, Clone, Copy)]  
struct State {  
    cx: Scope,  
    last_event: (ReadSignal<Event>, WriteSignal<Event>)  
}  
  
impl State {  
  
    pub fn new(cx: Scope) -> ReadSignal<State> {  
        let init_state = Self {  
            cx,  
            last_event: create_signal( cx, Event::Init)  
        };  
        let (state, _) = create_signal( cx, init_state );  
        state  
    }  
  
    pub fn update(&mut self, event: Event) {  
        match event {  
            Event::OrderSandwich(sandwich) => leptos::log!("Yay! A sandwich was ordered: {:?}", sandwich),  
            _ => {}  
        }        
        self.update_last_event(event );  
    }  
  
    fn update_last_event( &mut self, event: Event ) {  
        self.last_event.1.set(event );  
    }  
  
}  
  
fn main() {  
    mount_to_body(|cx| {  
        view! {  
            cx,  
            <SandwichShop />  
        }
	})
}  
  
#[component]  
fn SandwichShop(cx: Scope) -> Element {  
    let state = State::new(cx);  
    view! {  
        cx,  
        <div>  
            <Sandwich state sandwich=Sandwich::BLT label="Bacon, Lettuce, and Tomato"/>  
            <Sandwich state sandwich=Sandwich::Rubin label="Rubin"/>  
            <Sandwich state sandwich=Sandwich::PBandJ label="Peanutbutter and Jelly"/>  
        </div>  
    }
}
  
#[component]  
fn Sandwich(
	cx: Scope, 
	state: ReadSignal<State>, 
	sandwich: Sandwich, 
	label: &'static str 
) -> Element{  

	let place_order = move |_|{  
        state.get().update(Event::OrderSandwich(sandwich))  
    };  
    
    view! {  
        cx,  
        <div>  
           <button on:click=place_order>  
                "Order " {label}  
            </button>  
        </div>  
    }
    
}

Forms

What we know

  • We can capture events and respond to them
  • Signals allow us to persist data across events

What we'll learn

  • How to respond to multipart form data

The Lesson

Back in the day we used to interact with websites by submitting form data to a server along with a request for a resource (like a specific page). The server would process the form data that was sent, generate HTML, and provide us a response. This is how the majority of the web still works to this day!

We're going to replicate a similar data flow so that you can collect sets of data using forms, but process them all on the client (in the browser).

We'll start with a Rad app component and some boilerplate, mounting it to the body:

use leptos::*;  
  
fn main() {  
    mount_to_body(|cx| {  
        view! {  
            cx,  
            <RadApp />  
        }  
    })  
}  
  
#[component]  
fn RadApp(cx: Scope) -> Element {  
    view! {  
        cx,  
        <form>  
        </form>  
    }  
}

Our form has no submit button, so we'll add that in:

#![allow(unused)]
fn main() {
#[component]  
fn RadApp(cx: Scope) -> Element {  
    view! {  
        cx,  
        <form>  
	        <input type="submit" value="Submit"/>
        </form>  
    }  
}
}

We now have a basic form that we can submit. If we click submit, the form takes its data (of which there is none) and sends it to the form's action destination, which, if not set, defaults to the current page. This effectively looks like the page has reloaded, even though it's actually being re-requested with the submitted form data.

Let's add a text field so that we can submit some data.

#![allow(unused)]
fn main() {
<form>  
	<input type="text"  
	    name="fav_thing_to_paint"  
	    placeholder="Your fav thing to paint..."  
	    value=""  
	/>
	<input type="submit" value="Submit"/>
</form>  
}

I broke the attributes onto separate lines purely for formatting. HTML elements don't care about line breaks between attributes.

If we type something into the text field and hit submit, you'll see the page load and the field get reset. What's happening is that the form takes its form data and submits it to the form's action URL as part of a new request. The action attribute on the form element tells the form where to send its data, and it defaults to the current page if not set.

If we had <form action="https://www.rust-lang.org"> and we clicked submit, the form would send our data to rust-lang.org! And instead of looking like a page reload, we'd see the rust-lang.org home page.

We always have to remember that we're just making a more complicated request with some configuration (the form data) and our browser is rendering the response.

In old-school websites, a server would render a template and process form submissions for that template at the same time. If a request came in without form data, the fields would be blank. If a request came in with posted data (submitted via the form), whoever coded the form template could pluck out that posted data on the server and fill it back in as the values of the form's input fields. This way submitted data doesn't get erased if, for example, some form validation failed. The data just gets passed back and forth; it is not persisted anywhere.

Fun fact! Forms default to 'get' as their method of sending data, which turns your data into query string variables on the URL. You can change the method to 'post' to send the data in the request body instead.
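
As a sketch (not part of the app we're building, and /orders is a made-up URL), explicitly setting the method looks like this in a view! template:

#![allow(unused)]
fn main() {
view! {
    cx,
    // method="post" sends the fields in the request body;
    // the default, method="get", would append them to the URL as a query string.
    <form method="post" action="/orders">
        <input type="text" name="fav_thing_to_paint"/>
        <input type="submit" value="Submit"/>
    </form>
}
}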

Responding to the event

Let's add a form handler for the submit event. By this point, things should look pretty familiar.

#![allow(unused)]
fn main() {
#[component]  
fn RadApp(cx: Scope) -> Element {  
	
	// We create a form handler
    let form_handler = |_|{  
        leptos::log!("The form was submitted");  
    };  

	// And we added it with `on:submit` to the form element
    view! {  
        cx,  
        <form on:submit=form_handler>  
            <input type="text"  
                name="fav_thing_to_paint"  
                placeholder="Your fav thing to paint..."  
                value=""  
            />  
            <input type="submit" value="Submit" />  
        </form>  
    }  
}
}

Preventing the form from sending

When we click submit, the form submits so quickly that we can't even see the form_handler's message. Also, we're working on a client side application in this context, so we don't want this page to reload and rerender. We want to prevent the default behaviour.

To do this we need to actually do something with the event in our event handler that we've been ignoring this whole time. Let's change it from an underscore to something easy to understand, like submission_event.

#![allow(unused)]
fn main() {
	let form_handler = |submission_event|{  
        submission_event.prevent_default();
    };  
}

The above won't work though, because the closure doesn't know where it will be used. Rust doesn't know that this closure will be called from the event system and that the first argument will be an event. To fix this problem we'll give it a type web_sys::SubmitEvent.

#![allow(unused)]
fn main() {
	let form_handler = |submission_event: web_sys::SubmitEvent|{  
        submission_event.prevent_default();
    };  
}

Calling prevent_default() on the submit event will prevent the form from actually being submitted. We've short circuited the default behaviour!

Sometimes I find that I don't know exactly what to write for the type so I'll put in some form of type, try to compile the application, and then let Rust's compiler tell me what was supposed to be there. It's right most of the time.

Capturing form data

Events store their source in a property called target. We can grab the element that emitted the event by calling the target() method.

#![allow(unused)]
fn main() {
let form_handler = |submission_event: web_sys::SubmitEvent|{  
	submission_event.prevent_default();
	let form = submission_event.target();
};  
}

We don't know for sure if the target will actually be a proper element. The return type of the target method is Option<EventTarget>. As we learned in the previous lessons, we can match on the form's value to account for Some(form) or None.

#![allow(unused)]
fn main() {
let form_handler = |submission_event: web_sys::SubmitEvent|{  
    submission_event.prevent_default();  
    match submission_event.target() {  
        None => {},  
        Some(form_event_target) => {  
            // we need to do things here
		}  
    }  
};
}

form_event_target doesn't have a specific enough type yet, so we need to explicitly tell Rust, "Hey, this is an HtmlFormElement," which we need in order to build a form data object from it.

It should be noted that it took research to sort through this which is why I'm presenting it to you. This way you have one place to look it all up. :)

We're going to add the following line once we've destructured our form_event_target.

#![allow(unused)]
fn main() {
let form_element = form_event_target.unchecked_ref::<web_sys::HtmlFormElement>();  
}

Here we take our target, which is untyped, and call unchecked_ref() to cast it. We add a turbofish ::<SomeType> between the name of the method and the parentheses to specify the generic type; in this case, it's the type the reference will be treated as after we call unchecked_ref.

This will fail to work, and Rust's compiler will complain. If we look at the definition of web_sys::HtmlFormElement we'll see that it needs to be enabled as a feature in Cargo.toml.

We'll add the following to our Cargo.toml to ensure that web-sys enables the two features we'll need:

[dependencies.web-sys]  
features = [ "FormData", "HtmlFormElement"]

Next we'll set up a FormData object which will pull its data from the form element.

#![allow(unused)]
fn main() {
let form_data = web_sys::FormData::new_with_form(&form_element);
}

This returns a result, with type Result<FormData, JsValue>. As we've seen before, we'll need to destructure it to pull out the value of type FormData.

#![allow(unused)]
fn main() {
let form_data = web_sys::FormData::new_with_form(&form_element);  
match form_data{  
    Err(_) => {},  
    Ok(data) =>{  
        // the data here is a FormData thing.
    }  
}
}

FormData has some useful methods, one of which we can use to extract values from fields by name.

#![allow(unused)]
fn main() {
let fav_thing = data.get("fav_thing_to_paint").as_string();
}

Here we ask the form data to give us its value for "fav_thing_to_paint" as a string value. This is still an option, so we'll have to deal with Some(the_value) or None.

I'm specifically showing you pattern matching as the simplest way to deal with these Result and Option types. There are many shorter ways of doing this which you will learn later.

It is also possible to inline the match statement and avoid assigning the temporary variable. We could write either of the following:

#![allow(unused)]
fn main() {
let fav_thing = data.get("fav_thing_to_paint").as_string();  
match fav_thing {  
    Some(actual_fav_thing_value) => {},  
    None => {}
}
}

or

#![allow(unused)]
fn main() {
match data.get("fav_thing_to_paint").as_string() {  
    Some(fav_thing) => {},  
    None => {}
}
}

The whole thing all together looks like this:

use leptos::*;  
  
fn main() {  
    mount_to_body(|cx| {  
        view! {  
            cx,  
            <RadApp />  
        }  
    })  
}  
  
#[component]  
fn RadApp(cx: Scope) -> Element {  
    let form_handler = |submission_event: web_sys::SubmitEvent|{  
        submission_event.prevent_default();  
        match submission_event.target() {  
            None => {},  
            Some(form_event_target) => {  
                let form_element = form_event_target.unchecked_ref::<web_sys::HtmlFormElement>();  
                let form_data = web_sys::FormData::new_with_form(&form_element);  
                match form_data{  
                    Err(_) => {},  
                    Ok(data) =>{  
                        match data.get("fav_thing_to_paint").as_string() {  
                            Some(fav_thing) => {  
                                leptos::log!("{:?}", fav_thing);  
                            },  
                            None => {}  
                        }  
                    }  
                }  
            }  
        }  
    };  
    view! {  
        cx,  
        <form on:submit=form_handler>  
            <input type="text"  
                name="fav_thing_to_paint"  
                placeholder="Your fav thing to paint..."  
                value=""  
            />  
            <input type="submit" value="Submit" />  
        </form>  
    }  
}

Adding signals

We can now create a signal and use it to store the posted/submitted data.

#![allow(unused)]
fn main() {
let (last_fav_thing, set_last_fav_thing) = create_signal(cx, String::new());
}

We will add move to the handler, so that we can move the signal into it:

#![allow(unused)]
fn main() {
let form_handler = move |submission_event: web_sys::SubmitEvent| {
	// ...
};
}

And we'll store the value using the signal:

#![allow(unused)]
fn main() {
set_last_fav_thing(fav_thing);
}

The last piece is displaying the last submission in our view! template:

#![allow(unused)]
fn main() {
<p>"Your last fav thing was: " {last_fav_thing}</p>
}

All together, we have a nice example of how to collect form data so that we can work with it!

use leptos::*;  
  
fn main() {  
    mount_to_body(|cx| {  
        view! {  
            cx,  
            <RadApp />  
        }  
    })  
}  
  
#[component]  
fn RadApp(cx: Scope) -> Element {  
  
    let (last_fav_thing, set_last_fav_thing) = create_signal(cx, String::new());  
  
    let form_handler = move|submission_event: web_sys::SubmitEvent|{  
        submission_event.prevent_default();  
        match submission_event.target() {  
            None => {},  
            Some(form_event_target) => {  
                let form_element = form_event_target.unchecked_ref::<web_sys::HtmlFormElement>();  
                let form_data = web_sys::FormData::new_with_form(&form_element);  
                match form_data{  
                    Err(_) => {},  
                    Ok(data) =>{  
                        match data.get("fav_thing_to_paint").as_string() {  
                            Some(fav_thing) => {  
                                set_last_fav_thing(fav_thing);  
                            },  
                            None => {}  
                        }  
                    }  
                }  
            }  
        }  
    };  
    view! {  
        cx,  
        <form on:submit=form_handler>  
            <p>"Your last fav thing was: " {last_fav_thing}</p>  
            <input type="text"  
                name="fav_thing_to_paint"  
                placeholder="Your fav thing to paint..."  
                value=""  
            />  
            <input type="submit" value="Submit" />  
        </form>  
    }  
}

Storing data on the client

Web applications have the ability to save data to/in the browser.

Available tools and example usage

  • Web Storage
    • Non-sensitive information
    • Application settings
    • Application state changes for offline usage
  • Cookies
    • Session or user information that may change the server's response
  • IndexedDB
    • Store large amounts of data in object stores that can be queried, with optimizations for reading/writing

Comparison

                                              Session Storage   Local Storage   Cookies
Deleted when browser data is cleared          Yes               Yes             Yes
Can be modified outside of your application   Yes               Yes             Yes
Deleted when browser is closed                Yes               No              Session cookies only
Sent with every web request                   No                No              Yes

Caveats

Persistence: We cannot always guarantee that data stored on the client (in the browser) will persist. Users are in control of clearing browser caches and data stores.

Security: Local storage and cookie data can be easily read by anyone using a web browser's development tools. Scripts running on the same page, including third-party scripts you include, can also access local storage and cookies.

Guarantees: The lack of persistence and security means that we should not assume the integrity of data stored on the client.

Additional resources

Web Storage / Local Storage

For more information visit the MDN Web Storage API documentation

What we know

  • How to setup basic event handlers in Leptos

What we'll learn

  • How to store and retrieve data from a domain's local storage in the client

What's missing

  • Type safety guarantees for non-string types
  • Session storage and local persistent storage

Caveats

  • Local storage can be modified by users and other applications running on the same domain. As with pretty much everything happening on the client, you can't trust it.

The lesson

Web storage allows us to store data in the browser that will live for the duration that the browser is open (session storage) or will persist until the browser's data is cleared (local storage).

How these differ from cookies:

  • Larger amounts of data can be stored
  • The API to interact with them is easier to use. The Web Storage API is much newer than cookies.
  • They're not sent to the server when making new requests.

In this lesson we're going to initialize a local storage value, apply some modifications to it, and read the value. Our example will be a counter.

Let's start with a basic client side leptos app that has a button to initialize our local storage value. Currently we just have a log message that will print a message to the browser's console when the button is clicked. This way we can confirm the handler is working.

use leptos::*;  
  
fn main() {  
    mount_to_body(|cx| {  
        view! {  
            cx,  
            <App />  
        }
	})
}  
  
#[component]  
fn App(cx: Scope) -> Element {  
    let initialize_value = |_|{  
      leptos::log!("Initialize a value in local storage");  
    };  
    view! {  
        cx,  
        <div>  
            <button on:click=initialize_value>
	            "Initialize value"
			</button>  
        </div>  
    }
}

Accessing the Web Storage API

We will need a way to call out to the Web Storage API through web_sys. The first step is finding out where web storage lives in the browser's JavaScript environment. The MDN Web Storage API documentation states, "These mechanisms are available via the Window.sessionStorage and Window.localStorage properties."

Leptos provides us with a window() function which will efficiently return a web_sys::Window, allowing us to communicate to the browser's window.

If we look at the web_sys::Window documentation, we'll see that there is a local_storage method!

Calling local_storage returns a Result<Option<web_sys::Storage>, web_sys::JsValue>. We'll need to get the Ok(Some(storage)) from it (Ok because of the result which contains Some because of the option). Once we do, we'll get that web_sys::Storage, which we can work with. The web_sys::Storage struct's documentation enumerates all of the methods we can call, including get() and set()!

Chaining Unwraps

You can use unwraps to extract the Ok and Some like this:

#![allow(unused)]
fn main() {
let storage = window().local_storage().unwrap().unwrap();
}

The problem is that unwrap() will panic if it hits the Err or None variants of Result and Option respectively. We don't want our application to panic!

Nested Matches

We can use pattern matching as a potential solution:

#![allow(unused)]
fn main() {
match window().local_storage() {
	Ok( maybe_some_storage ) => match maybe_some_storage {
		Some( storage ) => {
			// Do your stuff with storage
		},
		None => {}
	},
	Err(_) => {}
}
}

This solution doesn't panic, which is great. But it does put our code in deeply nested scopes which makes it hard to read.

Assigning the value of a match

What we're looking to do is assign the value to storage if it can be retrieved from the nested Result/Option, otherwise we'll do nothing. Do keep in mind that most applications will want to do something in the event that expected behaviour can't be followed.

A match statement evaluates to the value of its matched arm's expression. We can assign that value to a variable!

Currently that value is wrapped in an Option, which is then wrapped in a Result. We can use the same nesting in our pattern to extract the value we're looking for. By combining these we can say, "If local_storage() returns an Ok that has Some storage, let the value of the match statement be storage." The other pattern, marked with an underscore (_), is a catch-all. Anything that doesn't match what we want will return early, breaking out of our closure!

#![allow(unused)]
fn main() {
let storage : web_sys::Storage = match window().local_storage() {  
    Ok(Some(storage)) => storage,  
    _ => return   
};
// We will only run code here if storage was able to be unwrapped by the match
}

I added the web_sys::Storage type to make this more clear, but Rust will infer the type. You do not need to write it.
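
As an aside, if you're on Rust 1.65 or newer, the let-else syntax expresses the same "bind it or bail out" idea without a match. This fragment assumes it lives inside an event handler (or other function) where returning early is fine:

#![allow(unused)]
fn main() {
// Bind `storage` if the pattern matches, otherwise return early.
let Ok(Some(storage)) = window().local_storage() else {
    return;
};
// From here on, `storage` is a web_sys::Storage.
}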

Working with the web storage API

Setting a value

We can now call the Storage API through our storage variable (note the lowercase 's': Storage is the struct/type, storage is our value).

Here we are assigning (setting) the key "my-counter" with a value of 0.

#![allow(unused)]
fn main() {
storage.set("my-counter", &0.to_string());
}

It's important to note that the web storage API stores strings. We can represent numbers and complex data in string form, as numeric characters or as serialized data respectively. Rust requires that we convert our integer 0 to a string with the to_string() method, which gives us an owned String. As per the documentation, set is looking for a string slice (&str), and we can satisfy that by prefixing the owned String with an ampersand (&) so it coerces to &str.
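
Here's that conversion chain on its own, as a tiny sketch:

fn main() {
    let n: i32 = 0;
    let owned: String = n.to_string(); // an owned String
    let slice: &str = &owned;          // &String coerces to &str
    println!("{:?}", slice);
}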

Retrieving a value

We can use the get method to retrieve a value from a given key.

#![allow(unused)]
fn main() {
let my_counter_value = storage.get("my-counter");
}

The return type of get is Result<Option<String>, JsValue>. We can use unwrap_or_default to safely unwrap or fall back to a default value.

#![allow(unused)]
fn main() {
let my_counter_value : String = storage  
    .get("my-counter")   // at this point we have a Result<Option<String>,JsValue>
    .unwrap_or_default() // Gives us Option<String> or the default value
    .unwrap_or_default(); // Gives us String or the default value, an empty string
}

In our case, we're using a number, so we'll need to parse it.

#![allow(unused)]
fn main() {
let my_counter_value = storage  
    .get("my-counter")  
    .unwrap_or_default()  
    .unwrap_or_default()  
    .parse::<i8>()  // attempt to parse the string as an 8 bit integer
    .unwrap_or_default(); 
    // ^ Return the Ok result, which is an 8 bit integer 
	//   or if there was a parse error, return the default value for an 8 bit integer
}

We can also write this as follows.

#![allow(unused)]
fn main() {
let my_counter_value : i8 = storage  
    .get("my-counter")  
    .unwrap_or_default()  
    .unwrap_or_default()  
    .parse()  
    .unwrap_or_default();
}

We could also rewrite this as a match statement. Here's an example with a little twist: we're specifying the fallback value in a more visible way:

#![allow(unused)]
fn main() {
let my_counter_value: i8 = match storage.get("my-counter") {  
    Ok(Some(value)) => value.parse().unwrap_or(0),  
    _ => 0  
};
}

Making a module

We can wrap these two bits of functionality into a nice little module for reuse:

#![allow(unused)]
fn main() {
mod local_storage {  
  
    use leptos::*;  
  
    pub fn set(key : &str , val : &str ) {  
        let storage = match window().local_storage() {  
            Ok(Some(storage)) => storage,  
            _ => return  
        };  
        storage.set(key, val);  
    }  
  
    pub fn get(key : &str ) -> String {  
        let storage = match window().local_storage() {  
            Ok(Some(storage)) => storage,  
            _ => return "".to_string()  
        };  
  
        match storage.get(key) {  
            Ok(Some(val)) => val,  
            _ => "".to_string()  
        }
	}
	
}
}

Again it is important to note that we are not handling any errors here.

Calls to our local storage module are all nicely cleaned up:

#![allow(unused)]
fn main() {
// set a value
local_storage::set( "my-counter", &22.to_string() );  

// get a value
let v: i8 = local_storage::get("my-counter").parse().unwrap_or_default();
}

It would be great to avoid this whole parse and unwrap business as well. Let's see if we can't clean that up even more.

We'll need a generic type here. I'm going to call it Val because it connects with val (the actual value); many people use T, and the letter doesn't matter. We'll declare it as a type parameter by adding <Val> after get and before the parameter list. Then we'll set the return type to Val as well, with -> Val after the parameter list. I'd like to be able to explicitly set the default value, so I'll add that as a parameter called default of type Val. The only thing left to do is use default where we had empty strings before.

#![allow(unused)]
fn main() {
pub fn get<Val>(key : &str, default: Val ) -> Val {  
    let storage: web_sys::Storage = match window().local_storage() {  
        Ok(Some(storage)) => storage,  
        _ => return default  
    };  
  
    match storage.get(key) {  
        Ok(Some(val)) => val.parse().unwrap_or( default ),  
        _ => default  
    }  
}
}

You might think that we'd need to provide Val in a turbofish for val.parse::<Val>() but we don't. Rust's compiler is smart enough (so darn smart). It knows that the final match statement doesn't end in a semicolon, so it must be the final expression. The result of the match will be our return value. This must be a Val type. It knows that if it's going to parse it has to parse to a Val value type!

This isn't quite there yet though. We need to add a trait bound for Val. We can't just accept anything: we only want to accept types that can actually be parsed from a string. We can do this by adding the trait bound to the generic: <Val: std::str::FromStr>. This means that whatever type Val is, it must implement the std::str::FromStr trait.

#![allow(unused)]
fn main() {
pub fn get<Val: std::str::FromStr>(key : &str, default: Val ) -> Val {  
    let storage: web_sys::Storage = match window().local_storage() {  
        Ok(Some(storage)) => storage,  
        _ => return default  
    };  
  
    match storage.get(key) {  
        Ok(Some(val)) => val.parse::<Val>().unwrap_or( default ),  
        _ => default  
    }  
}
}

Actually, while we're in here, let's make the local storage setter more flexible too. By adding a generic Val type, we can call to_string() on the value and pass a reference to the resulting String inside this function. We do need to add a constraint to the function: Val: std::fmt::Display guarantees that we can call to_string() on whatever type Val is.

#![allow(unused)]
fn main() {
// The function definition's type constraint can also be written with a 
// where keyword after the parameter list as follows.
// pub fn set<Val>(key : &str , val : Val ) where Val: std::fmt::Display {  
pub fn set<Val: std::fmt::Display>(key : &str , val : Val ) {  
    let storage: web_sys::Storage = match window().local_storage() {  
        Ok(Some(storage)) => storage,  
        _ => return  
    };  
    storage.set(key, &val.to_string());  
}
}

Now, as you look at this, you should be thinking, "But wouldn't I want to know if I failed to read a value? Won't this module make it look like local storage is working correctly even if it's not?!" Use the knowledge you've gained in this lesson to refactor the module so that your code expresses the behaviour of your application. Think critically about where failures are important to note and handle, and where they're not.
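
As a hedged sketch of one possible refactor (not the version used in the final code below), a getter that surfaces failures instead of swallowing them might look something like this. The module name, function name, and error type are illustrative only.

#![allow(unused)]
fn main() {
mod local_storage_checked {
    use leptos::*;

    // Illustrative only: Ok(None) means the key was missing, Err describes
    // what went wrong (storage unavailable, read failure, or parse failure).
    pub fn try_get<Val: std::str::FromStr>(key: &str) -> Result<Option<Val>, String> {
        let storage = window()
            .local_storage()
            .map_err(|_| "local storage unavailable".to_string())?
            .ok_or_else(|| "local storage unavailable".to_string())?;

        match storage.get(key) {
            Ok(Some(raw)) => raw
                .parse::<Val>()
                .map(Some)
                .map_err(|_| format!("could not parse the value stored at {}", key)),
            Ok(None) => Ok(None),
            Err(_) => Err("failed to read from local storage".to_string()),
        }
    }
}
}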

The final code

Here's the code all wrapped up and rolled together.

use leptos::*;  
  
fn main() {  
    mount_to_body(|cx| {  
        view! {  
            cx,  
            <App />  
        }
	})
}  
  
  
mod local_storage {  
    use leptos::*;  
  
    pub fn set<Val: std::fmt::Display>(key: &str, val: Val) {  
        let storage: web_sys::Storage = match window().local_storage() {  
            Ok(Some(storage)) => storage,  
            _ => return  
        };  
        storage.set(key, &val.to_string());  
    }  
  
    pub fn get<Val: std::str::FromStr>(key: &str, default: Val) -> Val {  
        let storage: web_sys::Storage = match window().local_storage() {  
            Ok(Some(storage)) => storage,  
            _ => return default  
        };  
  
        match storage.get(key) {  
            Ok(Some(val)) => val.parse().unwrap_or(default),  
            _ => default  
        }  
    }  
}  
  
#[component]  
fn App(cx: Scope) -> Element {  
    let initialize_value = |_| {  
        local_storage::set("my-counter", 0);  
        leptos::log!("Init counter to {}", local_storage::get("my-counter", 0));  
    };  
  
    let increment_value = |_| {  
        let value: i32 = local_storage::get("my-counter", 0);  
        local_storage::set("my-counter", value.saturating_add(1));  
        leptos::log!("Increment counter to {}", local_storage::get("my-counter", 0));  
    };  
  
    let decrement_value = |_| {  
        let value: i32 = local_storage::get("my-counter", 0);  
        local_storage::set("my-counter", value.saturating_sub(1));  
        leptos::log!("Decrement counter to {}", local_storage::get("my-counter", 0));  
    };  
  
    view! {  
        cx,  
        <div>  
            <button on:click=initialize_value>"Initialize value"</button>  
            <button on:click=increment_value>"+"</button>  
            <button on:click=decrement_value>"-"</button>  
        </div>  
    }
}

Cookies

What we'll learn

  • What cookies are
  • How we can set them
  • How we can detect cookie changes

What's missing

  • Cookie paths, lifetimes, and advanced configuration
  • Type safety guarantees for non-string types
  • Session and persistent cookie types

Caveat

There are many additional options available when setting cookies. This lesson is intended to give you a cursory understanding of how they're written and stored but it is not exhaustive.

The lesson

Cookies are a client storage tool. We can create a cookie with a specific name and assign it a string value. Recall that many numeric (int, float, bool) or complex (struct) values can be represented as strings, either through the use of their textual characters or through serialization.

One important thing to remember is that cookies are sent as part of each web request for your application's domain.

To exemplify how we can write to cookies we'll build a little text to cookie value input box. Typing in it will update a corresponding cookie.

Capturing Input

We'll start with a simple Leptos application component called "App" which contains a text field.

use leptos::*;  
  
fn main() {  
    mount_to_body(|cx| {  
        view! {  
            cx,  
            <App />  
        }    
	})
}  

#[component]  
fn App(cx: Scope) -> Element {  
    view! {  
        cx,  
        <div>  
            <input  
				name="my_input"  
	            type="text"  
	         />  
        </div>  
    }
}

Now let's update it with a keyup event handler.

use leptos::*;  
use web_sys::KeyboardEvent;  
  
fn main() {  
    mount_to_body(|cx| {  
        view! {  
            cx,  
            <App />  
        }    
	})
}  

#[component]  
fn App(cx: Scope) -> Element {  
    let write_value_to_cookie = |e:KeyboardEvent|{  
        leptos::log!("You pressed a key!");  
    };  
    view! {  
        cx,  
        <div>  
            <input  
	            name="cookie_input"  
		        type="text"  
	            placeholder="Type text and I'll update a cookie!"  
	            on:keyup=write_value_to_cookie  
	         />  
        </div>  
    }}

I used the keyup event because change events only fire when focus leaves the input (you click elsewhere, press tab, or esc), and keydown fires when a key is pressed but before the value of the input field is updated. If we used keydown we'd always be one keystroke behind.

Note that we imported the KeyboardEvent from web_sys with:

#![allow(unused)]
fn main() {
use web_sys::KeyboardEvent;
}

And we added that type to the event handler closure.

#![allow(unused)]
fn main() {
let write_value_to_cookie = |e: KeyboardEvent| {  
	// ...
};
}

Now we'll pull the value out of the keyboard event and log it to the console.

#![allow(unused)]
fn main() {
let write_value_to_cookie = |e: KeyboardEvent| {  
    let input: HtmlInputElement = e.target()
	    .unwrap()
	    .unchecked_into();  
    leptos::log!("{:?}", input.value());  
};
}

e.target() returns an Option that we know will contain an EventTarget. We call unwrap to get the EventTarget out of the Option. Then we call unchecked_into() on the EventTarget. Rust sees that we specified HtmlInputElement as the type of input, and uses that as the type parameter for unchecked_into(), casting the EventTarget into an HtmlInputElement. This is identical to omitting the type annotation on input and writing unchecked_into::<HtmlInputElement>() with the turbofish syntax (::<>).
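
For comparison, here's a hedged variant of the same handler using the turbofish instead of a type annotation on input (it assumes, as the finished code later in this lesson does, that use leptos::* brings the casting helpers into scope):

#![allow(unused)]
fn main() {
use leptos::*;
use web_sys::{KeyboardEvent, HtmlInputElement};

// Identical behaviour to the snippet above; the cast target is supplied
// directly to unchecked_into() instead of via the type of `input`.
let write_value_to_cookie = |e: KeyboardEvent| {
    let input = e.target().unwrap().unchecked_into::<HtmlInputElement>();
    leptos::log!("{:?}", input.value());
};
}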

For the above to work, we need to bring the HtmlInputElement struct into scope as well with an updated use statement. We can use the nested path (brace) syntax to extend our previous import.

#![allow(unused)]
fn main() {
use web_sys::{KeyboardEvent, HtmlInputElement};
}

Writing to cookies

The browser's cookie api is a property of the document object. If we want to call things on document, we'll need to use web_sys to get a reference to it. Thankfully Leptos provides a function which allows us to grab the document.

#![allow(unused)]
fn main() {
let document = document();
}

It's recommended to use this Leptos function over web_sys because Leptos will store the reference in WASM to improve performance.

We'll need to give Rust a bit of help here to identify the type.

#![allow(unused)]
fn main() {
let doc: HtmlDocument = document().unchecked_into();
}

The struct HtmlDocument is hidden behind a feature flag. To enable this feature we can add the following to our cargo.toml.

[dependencies.web-sys]  
features = [ "HtmlDocument" ]

We can now work with cookie data through the HtmlDocument. I had a quick look at the web_sys documentation to confirm which method on the HtmlDocument struct lets me set a cookie's value: it's set_cookie(). We can read the cookie value via cookie() too.

#![allow(unused)]
fn main() {
doc.set_cookie("some data");
doc.set_cookie("my-key=first-value");
doc.set_cookie("my-key=second-value");

let cookie = doc.cookie().unwrap();
leptos::log!("{:?}", cookie );
}

The above prints my-key=second-value; some data= to the console.

What's interesting about set_cookie and the cookie API is that it will parse the key name and value to make sure the correct cookie is updated.

Reading cookies

As we've seen above, we can read the complete text that makes up the cookie value by calling cookie() on an HtmlDocument.

#![allow(unused)]
fn main() {
let doc: HtmlDocument = document().unchecked_into();
let raw_cookie_data = doc.cookie().unwrap_or_default();
}

We'll need to parse that string into actual key=>value pairs.

We'll first need to split these into individual cookies. We saw that the delimiter was a semicolon and a space. We can call split() on the raw cookie data to break the string up.

Then we call collect() to turn it into a vector.

#![allow(unused)]
fn main() {
let kvp_strings: Vec<&str> = raw_cookie_data
	.split("; ")
	.collect();
}

This provides us with an array of key value pair strings.

#![allow(unused)]
fn main() {
["my-key=second-value", "some data="]
}

We need to add another step here: splitting those strings by =. We'll add a map method call after the split. The map method applies a function to each item. In this case, we're splitting the strings and collecting the result into a Vec<&str>, a vector of string slices.

#![allow(unused)]
fn main() {
let raw_cookie_data: String = doc
	.cookie()
	.unwrap_or_default();  
	
let key_value_pairs: Vec<Vec<&str>> = raw_cookie_data  
    .split("; ")  
    .map(|kvp_string|{  
        kvp_string.split('=').collect()  
    })
	.collect();  
	
leptos::log!("{:?}", key_value_pairs );
}

We now have a multidimensional array but it's not very usable. We can't check to see if a value is set. Let's turn this multidimensional vector into a hash map.

We need to add a use statement to import the HashMap type into scope.

We'll update the cookies type to HashMap with two type parameters for the key type and value type. In this case they're both string slices.

#![allow(unused)]
fn main() {
use std::collections::HashMap;

let raw_cookie_data: String = doc
	.cookie()
	.unwrap_or_default();  

let cookies: HashMap<&str, &str> = raw_cookie_data  
    .split("; ")  
    .map(|kvp_string|{  
        kvp_string.split('=').collect()  
    })    
    .collect();  

leptos::log!("{:?}", cookies );
}

As written, the map body doesn't yet produce the (key, value) tuples a HashMap needs, so now we'll turn our attention to the body of the map function, which processes kvp_string.

We're starting with this:

#![allow(unused)]
fn main() {
kvp_string.split('=').collect()  
}

This would split a string into a vector of strings, cut at each '=' character.

HashMaps can be built by collecting an iterator of (key, value) tuples (see the short sketch below). We can use split_at to produce such a tuple.
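
As a quick standalone illustration of that idea (with made-up data), collecting an iterator of (key, value) tuples gives us a HashMap directly:

#![allow(unused)]
fn main() {
use std::collections::HashMap;

// An iterator of (key, value) tuples collects straight into a HashMap.
let pairs = vec![("my-key", "second-value"), ("some data", "")];
let map: HashMap<&str, &str> = pairs.into_iter().collect();
assert_eq!(map.get("my-key"), Some(&"second-value"));
}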

#![allow(unused)]
fn main() {
.map(|kvp_string|{  
	kvp_string.split_at(  
	    kvp_string.find("=").unwrap_or_default()  
	)
})
}

Running the map with the above body produces the following results: ("my-key", "=second-value"), ("some data", "=").

Looks like we need another transformation on these rows. We'll add a second map which turns these tuples into tuples with the first character removed from the values.

#![allow(unused)]
fn main() {
.map(|kvp_tuple|{  
    (kvp_tuple.0, &kvp_tuple.1[1..])  
})
}

We're accessing the first and second elements of the tuple with .0 and .1. By slicing the second element (index 1) with the range 1.., we tell the compiler, "take from index 1 onward," which skips position 0 of the sequence, the "=" symbol.
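
Here's the slicing behaviour in isolation:

#![allow(unused)]
fn main() {
// The range 1.. keeps everything from index 1 onward,
// dropping the leading '=' character.
let raw = "=second-value";
assert_eq!(&raw[1..], "second-value");
}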

It is worth noting that these two values are wrapped in parentheses and will be returned as a (&str, &str) tuple.

We can now use HashMap methods to interact with the cookie data.

#![allow(unused)]
fn main() {
leptos::log!(
	"{:?}", 
	cookies.get("my-key").unwrap_or(&"") 
);
}

Here we log the value stored at "my-key" in the cookies HashMap. get returns an Option containing a reference to the stored string slice. With unwrap_or(&"") we end up with either the stored value or a reference to an empty string slice.

The finished code

use leptos::*;  
use web_sys::{KeyboardEvent, HtmlInputElement, HtmlDocument};  
use std::collections::HashMap;  
  
fn main() {  
    mount_to_body(|cx| {  
        view! {  
            cx,  
            <App />  
        }
    })
}  
  
#[component]  
fn App(cx: Scope) -> Element {  
    let write_value_to_cookie = |e: KeyboardEvent| {  
  
        let input: HtmlInputElement = e.target().unwrap().unchecked_into();  
        let doc: HtmlDocument = document().unchecked_into();  
        let cookie_key = "my-cookie";  
  
        let cookie_data = vec!(cookie_key, &input.value() ).join("=");  
        doc.set_cookie(&cookie_data);  
  
        // Parse cookie data and log it out  
        let cookie: String = doc.cookie().unwrap_or_default();  
        let key_value_pairs: HashMap<&str, &str> = cookie  
            .split("; ")  
            .map(|kvp_string|{  
                kvp_string.split_at(  
                    kvp_string.find("=").unwrap_or_default()  
                )            
			})            
			.map(|kvp_tuple|{  
                (kvp_tuple.0, &kvp_tuple.1[1..])  
            })
			.collect();  
  
        leptos::log!("{:?}", key_value_pairs.get(cookie_key).unwrap_or(&"") );  
    };  
    
    view! {  
        cx,  
        <div>  
            <input  
                name="cookie_input"  
                type="text"  
                placeholder="Type text and I'll update a cookie!"  
                on:keyup=write_value_to_cookie  
            />  
        </div>  
    }
}

IndexedDB

This lesson is in notes status and is an extremely rough draft. It is not yet a complete set of notes.

What we know

  • Data can be stored in the browser

What we'll learn

  • Basic interaction with IndexedDB

Caveat

The IndexedDB API is complicated. Like all other web storage APIs, its data can be modified by any other script running on the page, which makes it untrusted. Pushing data from WASM to JS and back is also slower than doing more of the work in WASM and letting Leptos update the resulting data. For these reasons, it's probably a better idea to think about your application and create purpose-built data structures that you can query and work with, instead of relying on the IndexedDB API and data store.

The Lesson

IndexedDB is a NoSQL-like database that lives in the browser. We grab data by creating a cursor that lets us jump around the field of data that is the database. IndexedDB does not use structured query language (SQL) to retrieve data the way MySQL, MariaDB, Postgres, etc. do.

The IndexedDB API is notoriously uncomfortable to use. This article will provide a cursory overview and exploration of it for those who are curious.

Let's get things started with our initial client side Leptos application.

use leptos::*;  
  
fn main() {  
    mount_to_body(|cx| {  
        view! {  
            cx,  
            <App />  
        }    
	})
}
  
#[component]  
fn App(cx: Scope) -> Element {  
    view! {  
        cx,  
        <div>  
            "My App"  
        </div>  
    }
}

We'll jump over to the documentation and scroll down to "Interfaces". This gives us some hints for where we need to go.

The documentation reads: "To get access to a database, call open() on the indexedDB attribute of a window object."

In my mind I think, "I should make a button that will connect to the database when I click it." I find it's easiest to build and learn if I can provide my own input and introspect the result.

Recurring Pattern Alert: It's interesting how this feedback loop is the same as a server's request/response cycle, and how applications are built in general. Programs are often the way they are because they're expressions of how we think.

#![allow(unused)]
fn main() {
#[component]  
fn App(cx: Scope) -> Element {  
    let connect_to_database = |_|{  
        leptos::log!("Connect to database")  
    };  
    view! {  
        cx,  
        <div>  
            <button on:click=connect_to_database>  
                "Connect to DB"  
            </button>  
        </div>  
    }
}
}

The documentation states that indexedDB is an attribute of the window object. We'll use Leptos' window() function to grab its cached reference to window as a web_sys::Window.

#![allow(unused)]
fn main() {
let connect_to_database = |_|{  
	leptos::log!("Connect to database");  
	let window = window();  
};
}

If we look at the web_sys::Window documentation we'll see that there is a web_sys version of the indexedDB JavaScript property, called indexed_db. Note the difference in case: Rust is prescriptive about its use of snake case for function/method names. We can see that the return type is Result<Option<IdbFactory>, JsValue>.

This IdbFactory looks interesting. If we check the IndexedDB documentation we'll see a good definition of what it is.

IDBFactory: Provides access to a database. This is the interface implemented by the global object indexedDB and is therefore the entry point for the API.

This is perfect. We want an entry point for the API!

We can use the following match pattern to get the IdbFactory (note that this refers to the struct with its PascalCase) out of our indexed_db() call, or return (prematurely terminate the closure/click handler) if the pattern doesn't match.

#![allow(unused)]
fn main() {
let idb = match window().indexed_db() {  
    Ok(Some(idb_factory)) => idb_factory,  
    _ => return  
};
}

The web_sys::indexed_db() documentation also states:

This API requires the following crate features to be activated: IdbFactory, Window

We can enable these features by adding the following to our cargo.toml file

[dependencies.web-sys]  
features = [ "Window",  "IdbFactory" ]

web_sys has a lot of features and compiling all of them into the Leptos WASM application would make it needlessly large. For this reason, features are hidden behind feature flags like this so that we can pick and choose what gets added to our final application on a needs basis. Rust is very considerate.

Let's zip on over to the web_sys::IdbFactory struct documentation to see what's available to us there. I see an open() method which looks like what we want, so we'll try that. Note that we need to add some extra features to the web-sys crate: open() requires IdbFactory and IdbOpenDbRequest, so we'll add the missing IdbOpenDbRequest.

[dependencies.web-sys]  
features = [ "Window",  "IdbFactory", "IdbOpenDbRequest" ]

I made a few changes to make the code more visible. This is where we're at:

#![allow(unused)]
fn main() {
let connect_to_database = |_|{  
  
    // Grabbed a reference to window  
    let window = window();  
  
    // Got a factory to be able to make an open connection  
    let idb = match window.indexed_db() {  
        Ok(Some(idb_factory)) => idb_factory,  
        _ => return  
    };  
  
    let idb_open_request = match idb.open("my-database") {  
         Ok(idb_open_request) => idb_open_request,  
         _ => return  
    };  
  
    // Do something with the connection  
};
}

The question that we now have is, how do we work with IndexedDB? We need some more information.

Store type

The MDN documentation states that IndexedDB is a key-value store. The values can be complex objects, and keys can be properties of those objects. If we're thinking in terms of Rust, what they're saying is that we can store structs in IndexedDB, where properties like 'id' could be a key used to look up the struct.
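
To make that concrete, here's a hypothetical (entirely made-up) shape of an object we might store, where the id property could act as the key:

#![allow(unused)]
fn main() {
// A made-up record type for illustration only.
struct Note {
    id: u32,
    title: String,
    body: String,
}
}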

Transactions

All interactions with IndexedDB are done in the form of transactions. The mental model is that we request a change to the database, and the database holds the request until it is safe to perform. This guarantees data integrity, so we don't have two sources potentially modifying the same data at the same time.

There are three transaction types. We'll only be looking at the first two:

  1. readwrite
  2. readonly
  3. versionchange

Data Retrieval

Data is not returned from the database as soon as you request it. Recall that we submit requests to the database as transactions for it to perform, and the database decides when it is safe to perform them. For this reason we'll need to provide IndexedDB with callbacks to run when the transactions are performed. We'll be reacting to the retrieval of data. In fact, IndexedDB uses DOM events to notify us when results are available. It's not dissimilar to how we worked with buttons and click events.

MDN to the rescue

The wonderful Mozilla Developer Network (MDN) has a reference on Using IndexedDB. In it, they outline the basic usage steps as follows:

  1. Open a database.
  2. Create an object store in the database.
  3. Start a transaction and make a request to do some database operation, like adding or retrieving data.
  4. Wait for the operation to complete by listening to the right kind of DOM event.
  5. Do something with the results (which can be found on the request object).

We'll follow along, but in Rust and Leptos.

Where we last left off, we created a web_sys::IdbOpenDbRequest

#![allow(unused)]
fn main() {
let idb_open_request = match idb.open("my-database") {  
     Ok(idb_open_request) => idb_open_request,  
     _ => return  
};  
}

This isn't a connection per se. It is part of the chain of processes required to set up a connection.

In the MDN documentation, the author establishes the same type of object in JavaScript and then attaches event handlers to onerror and onsuccess.

We'll add callback functions/handlers to our Rust version.

If we look at the definitions of set_onsuccess and set_onerror we'll see that these are the functions that allow us to set the values of the onsuccess and onerror properties of the JavaScript object. Exactly what we're looking for. Their definitions also tell us that we need to add the IdbRequest feature to our cargo.toml file.

[dependencies.web-sys]  
features = [ "IdbFactory", "IdbOpenDbRequest", "IdbRequest"]

Intuitively I think, "JavaScript accepts functions as the values for these properties, so I should use closures for mine. Though it'll need to be wrapped in a Some because the parameter is an Option."

#![allow(unused)]
fn main() {
idb_open_request.set_onsuccess(
	Some(
		||{  
			// on success stuff here 
		}
	)
);  
}

This is all spaced out so that you can easily see the syntax.

Unfortunately, my intuition is off. Even though the word Function looks familiar, we have to remember that in Rust we have Fn, FnMut, and FnOnce as function traits. Function isn't a native Rust function type. Looking closer, I can see that Function is a special struct that is callable by WASM: it is a js_sys::Function.

Creating JavaScript closures in Rust

What we need to do is create a wasm_bindgen::closure::Closure and then cast the closure to a js_sys::Function. The MDN documentation uses a struct with a _closure property. We'll do things slightly differently, stepping through each line of code and what it does.

The first step is making a closure. We'll use the Closure::wrap method. The documentation defines it as:

A more direct version of Closure::new which creates a Closure from a Box<dyn Fn>/Box<dyn FnMut>, which is how it’s kept internally.

This sounds like exactly what we want.

#![allow(unused)]
fn main() {
// cb stands for callback
let cb = wasm_bindgen::closure::Closure::wrap(
	// We need something here.
);
}

We need to provide an argument which is a Box that contains a dyn FnMut. There's a lot here to unpack.

Box

In Rust, there are two regions of memory for allocation: the stack and the heap. The stack is fast but requires the size of what's being stored to be known and consistent. The heap lets us store things whose size may change, but heap data is reached through a pointer kept on the stack, so it's two steps instead of one. The heap also isn't as organized as the stack, so its lookups are slower.

A Box is a way for us to store data in the heap. The actual size of a Box is known, because it's a pointer to memory in the heap.
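
A minimal example of that idea:

#![allow(unused)]
fn main() {
// The Box itself (a pointer with a known size) lives on the stack;
// the value it points to lives on the heap.
let boxed_number: Box<i32> = Box::new(42);
println!("{}", boxed_number);
}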

dyn (dynamic)

Rust wants to know everything in advance so it can optimize code and make its safety guarantees. If we're going to skip across contiguous zeros and ones in memory and interpret them as data, we need to know the size of the data we're reading.

Unfortunately, sometimes this isn't possible at compile time. When we use traits as types in Rust, we're telling Rust that anything that implements the specified trait is fair game for use as an argument. This is a powerful technique because we're allowing any future value to be used, provided someone writes an impl (implementation) of the given trait for it.

For example:

#![allow(unused)]
fn main() {
struct RobotDuck{}  

impl RobotDuck {  
    fn assert_duckitude() {  
        println!(
	        "I'm totally not a robot. 
	        Look at me click on these 
	        images of bread floating 
	        in a pond."
		)  
    }
}  
    
struct RealDuck{}  
  
trait Quack {  
    fn quack(&self){  
        println!("QUACK");  
    }  
}  
  
impl Quack for RobotDuck{}  
impl Quack for RealDuck{}  
}

In the above example, we have two structs with different inherent functions. As a result, they'll look different in memory. Both of these ducks implement the Quack trait and can call the default trait implementation quack().

Let's say we have this function:

#![allow(unused)]
fn main() {
fn this_thing_quacks<T>(quackable: T) where T: Quack {
	println!("This thing quacks!");  
	quackable.quack();  
}
}

We have a trait bound on the generic type T that requires an implementation of Quack. When the compiler runs, it will actually create a version of this function for each type that implements Quack.

#![allow(unused)]
fn main() {
fn this_thing_quacks(quackable: RobotDuck){
	//...
}

fn this_thing_quacks(quackable: RealDuck){
	//...
}
}

Recall that functions are also data! Rust needs to have guarantees about the sizes of data as arguments for the function with the generic T.

We'll get an error if we try to change the signature to this though.

#![allow(unused)]
fn main() {
fn this_thing_quacks(quackable: Quack) {
	println!("This thing quacks!");  
	quackable.quack();  
}
}

The reason is that we don't know what size Quack is when the function is called. There are multiple things that implement Quack, and they aren't all the same size! Rust doesn't stamp out the different versions because the concrete types haven't been spelled out. When we use a trait bound (with the where clause or with <T: Quack>), the compiler can prepare those versions for you at compile time. When we use a trait object as a type, we defer to runtime lookups instead.

#![allow(unused)]
fn main() {
// Trait objects are unsized, so they live behind a pointer such as & or Box.
fn this_thing_quacks(quackable: &dyn Quack) {
	println!("This thing quacks!");  
	quackable.quack();  
}
}

By adding dyn (behind a reference or a Box, since the concrete type's size isn't known), we tell the compiler that it will need to look up the data, along with an associated table of its functions (a vtable). If we knew the type at compile time, we wouldn't need to look up associated functions because they would already be known.
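
Here's a condensed, self-contained version of the duck example using a trait object behind a reference, just to show the call site:

#![allow(unused)]
fn main() {
trait Quack {
    fn quack(&self) {
        println!("QUACK");
    }
}

struct RealDuck {}
impl Quack for RealDuck {}

// Trait objects are unsized, so they live behind a pointer such as & or Box.
fn this_thing_quacks(quackable: &dyn Quack) {
    println!("This thing quacks!");
    quackable.quack();
}

this_thing_quacks(&RealDuck {});
}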

Fn / FnMut (Function Trait Objects)

Closures and other callable things implement one (or more) of three function traits in Rust: FnOnce, FnMut, and Fn. Closures that capture data are effectively structs, with fields for their captured values. A closure that consumes (moves out of) those captured values when it runs can only be called once, so it only implements FnOnce.

The function traits cascade:

  • Fn can be used anywhere an FnMut and FnOnce can
  • FnMut can be used anywhere an FnOnce can
  • FnOnce can only be used where an FnOnce is specified

The rules for which traits a closure implements depend on how it uses its captured values (see the sketch after this list):

  • Fn - only reads its captured values (or captures nothing at all), so it can be called freely
  • FnMut - mutates its captured values, so it needs exclusive access while it runs
  • FnOnce - consumes (moves out of) its captured values, so it can only be called once
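
Here's a small sketch of how a closure's use of its captured values determines which traits it gets:

#![allow(unused)]
fn main() {
let name = String::from("Ferris");

// Only reads `name`, so it implements Fn (and FnMut and FnOnce).
let greet = || println!("hi {}", name);
greet();
greet();

// Mutates `count`, so it implements FnMut (and FnOnce), but not Fn.
let mut count = 0;
let mut bump = || count += 1;
bump();
bump();

// Moves `name` out when called, so it only implements FnOnce.
let consume = move || name;
let owned_again: String = consume();
}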

This should not be confused with fn (lowercase), which is a function pointer type. Function pointers refer to plain functions and are passed around as data, much like a Box<> gives us a pointer when the size of its contents can't be known up front.
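
And a quick sketch of a plain fn pointer for contrast:

#![allow(unused)]
fn main() {
fn say_quack() {
    println!("QUACK");
}

// `fn()` is a function pointer type: a plain function passed around as data.
fn call_twice(f: fn()) {
    f();
    f();
}

call_twice(say_quack);
}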

With all that known, let's go back to the closure we're trying to make.

The parameter type of Closure::wrap is outlined as follows:

A more direct version of Closure::new which creates a Closure from a Box<dyn Fn>/Box<dyn FnMut>, which is how it’s kept internally.

To satisfy the specification of Closure::wrap, let's first add that Box. And in that Box we'll put a closure.

#![allow(unused)]
fn main() {
let cb = wasm_bindgen::closure::Closure::wrap(
	Box::new(
		|| {  
		    leptos::log!("Connected ok");  
		}
	)
);
}

The Rust compiler will throw an error here, asking for us to be more specific:

the trait WasmClosure is not implemented for closure [closure@src/main.rs]

We can tell the Rust compiler to treat our Box as a specific type (which it will check against) with an as Type cast after the Box's initialization.

#![allow(unused)]
fn main() {
let cb = wasm_bindgen::closure::Closure::wrap(
	Box::new(
		|| {  
		    leptos::log!("Connected ok");  
		}
	) as Box<dyn Fn()->()>
);
}

My hope is that now you'll look at this and read it as follows:

We're creating a closure, but it needs to be stored on the heap. The value of the Box (which specifies heap storage) is a closure which takes no arguments and doesn't close over any values. We can cast the Box's contents to dyn Fn()->() because all that matters to the caller is that it accepts no arguments and returns nothing (the unit type ()).

Note that in a lot of cases we can use the turbofish to specify type arguments, but here the concrete closure type is inferred by Box::new, so we cast the Box to the trait object type after the fact with as.

Let's keep climbing out of this hole back up to where we started, with trying to create a js_sys::Function that we can pass into the connection handler as a reference.

The result of this whole thing is that we have a closure, but we still don't have a js_sys::Function reference. In fact, set_onsuccess wants an Option containing a reference to a js_sys::Function.

Here we'll take our callback closure, get it as a reference, and then cast that reference to a js_sys::Function with the turbofish. And of course, we wrap it all in Some().

#![allow(unused)]
fn main() {
Some(cb.as_ref().unchecked_ref::<Function>())
}

One additional thing that we need to do here, for the sake of Leptos, is to add the following:

#![allow(unused)]
fn main() {
on_cleanup(cx, move || {  
    drop(cb);  
});
}

Leptos has a clean up routine that it runs when a context is closed. We need to move our callback closure, which is actually a handle, to the cleanup function's callback closure.

on_cleanup is being told, "Hey, when cx is cleaned up, run this closure!" In that closure we've moved our callback and passed it into drop(). This means that it'll be cleaned up in WASM's memory, and JavaScript land will prune the closure on its side as well.

Our whole connection callback looks like this:

#![allow(unused)]
fn main() {
let connect_to_database = move |_|{  
  
    // Grabbed a reference to window  
    let window = window();  
  
    // Got a factory to be able to make an open connection  
    let idb = match window.indexed_db() {  
        Ok(Some(idb_factory)) => idb_factory,  
        _ => return  
    };  
  
    let idb_open_request = match idb.open("my-database") {  
         Ok(idb_open_request) => idb_open_request,  
         _ => return  
    };  
  
    let ok_cb = wasm_bindgen::closure::Closure::wrap(  
        Box::new(|| {  
            leptos::log!("Connected ok");  
        }) as Box::<dyn Fn()->() >  
    );  
  
    idb_open_request.set_onsuccess(  
        Some(ok_cb.as_ref().unchecked_ref::<js_sys::Function>())  
    );  
  
    on_cleanup(cx, move || {  
        drop(ok_cb);  
    });  
  
    let error_cb = wasm_bindgen::closure::Closure::wrap(  
        Box::new(|| {  
            leptos::log!("Connected error");  
        }) as Box::<dyn Fn()->() >  
    );  
  
    idb_open_request.set_onerror(  
        Some(error_cb.as_ref().unchecked_ref::<js_sys::Function>())  
    );  
  
    on_cleanup(cx, move || {  
        drop(error_cb);  
    });  
  
    // You are here.  
  
};
}

So, here's where things get interesting. Our database connection is stored in the result property of our IdbOpenDbRequest if our connection was successful. What we want to do is create a signal so that we can store the IDBDatabase on success. It looks like our open database request may need to be used in a few scopes too. We can use Leptos signals to store this data.

#![allow(unused)]
fn main() {
let ( 
	idb_open_db_request, 
	set_idb_open_db_request 
) = create_signal::<Option<web_sys::IdbOpenDbRequest>>( cx, None );  

let ( 
	idb, 
	set_idb 
) = create_signal::<Option<web_sys::IdbDatabase>>( cx, None );
}

It's important that we use Option types here so that we have the ability to set a default value of None.

We'll update our click handler closure with the move keyword so that these signals are moved into it when they're used. Keep in mind that signals implement the Copy trait, so they'll be copied into the closures without being moved out of the scope they were defined in.

We'll also update the names of some of these variables so that they reflect their types and disambiguate from the new signals. There is a lot of idb this and idb that.

I present, the start of our on click connect to db handler closure/callback:

#![allow(unused)]
fn main() {
let connect_to_database = move |_|{  
  
    let window = window();  

	// Guard assignment
	// idb was renamed to idb_factory
    let idb_factory = match window.indexed_db() {  
        Ok(Some(idb_factory)) => idb_factory,  
        _ => return  
    };  

	// Guard assignment
	// updated to now set the reactive value
    match idb_factory.open("my-database") {  
         Ok(new_idb_open_request) => set_idb_open_db_request
	         .set(Some(new_idb_open_request)),  
         _ => return  
    };  

    // ... callback setup continues below ...  
};
}

We need to update our callbacks for the database connection lifecycle to use our signals as well. In the onsuccess callback, we'll also need to pull the database connection out of the open request's result.

#![allow(unused)]
fn main() {
let ok_cb = wasm_bindgen::closure::Closure::wrap(  
    Box::new(move|| {  
        
        leptos::log!("Connected ok");  

		// We'll get the request's value from the reactive system
        match idb_open_db_request.get() {  

			// If it is set we'll use it, referring to it herein
			// as ok_idb_open_request
            Some(ok_idb_open_request) => {  

				// We'll grab the result which in this context will
				// be an idb database. 
                match ok_idb_open_request.result() {  
		            
		            // If the result() was accessible
		            // it'll be a new_idb_connection
                    Ok(new_idb_connection) => {  

						// But this is from JavaScript so we have 
						// to unchecked_into with the Rust type.
                        let new_idb = new_idb_connection
	                        .unchecked_into::<web_sys::IdbDatabase>();  
                        
                        // We'll store this new connection in
                        // Leptos' reactive system.
                        set_idb.set(Some(new_idb));  
                    
	                    // We'll log the result from Leptos'
	                    // reactive system to confirm that it
	                    // worked as planned.
                        leptos::log!("{:?}", idb.get());  
                    },  
                    Err(_) => {}  
                }  
            },  
            None => {}  
        };  
  
    }) as Box::<dyn Fn()->() >  
);
}

The above code is spaced wide and in a verbose syntax so that it is clear.

The rest of the callback contains our onerror handler and a new onupgradeneeded handler.

#![allow(unused)]
fn main() {
	let error_cb = wasm_bindgen::closure::Closure::wrap(  
	    Box::new(move || {  
	        leptos::log!("Connected error");  
	    }) as Box::<dyn Fn()->() >  
	);  
	  
	let upgrade_cb = wasm_bindgen::closure::Closure::wrap(  
	    Box::new(move || {  
	        leptos::log!("Doing database upgrade or setup");  
	    }) as Box::<dyn Fn()->() >  
	);  
	  
	match idb_open_db_request.get() {  
	    Some(idb_odbr) => {  
	        idb_odbr.set_onsuccess(  
	            Some(ok_cb.as_ref().unchecked_ref::<Function>())  
	        );  
	        idb_odbr.set_onerror(  
	            Some(error_cb.as_ref().unchecked_ref::<Function>())  
	        );  
	        idb_odbr.set_onupgradeneeded(  
	            Some(upgrade_cb.as_ref().unchecked_ref::<Function>())  
	        );  
	    },  
	    None => {}  
	}  
	  
	on_cleanup(cx, move || {  
	    drop(ok_cb);  
	    drop(error_cb);  
	    drop(upgrade_cb);  
	});
}
}

The onupgradeneeded callback will fire when the database needs to be initialized or when the format of the database changes. This is where we'll add our initialization code for the type of data stored in the database.

The interesting thing is that onsuccess fires after onupgradeneeded. As per the MDN documentation, onupgradeneeded gets passed an event from which we can extract event.target.result.

// This event handles the event whereby a new version of
// the database needs to be created Either one has not
// been created before, or a new version number has been
// submitted via the window.indexedDB.open line above
// it is only implemented in recent browsers
DBOpenRequest.onupgradeneeded = (event) => {
  const db = event.target.result;

From the IDBOpenDBRequest MDN documentation

We can update our closure with a parameter called event. We've changed the signature of the closure, which requires us to update the as Box::<dyn Fn()->()> cast to match.

#![allow(unused)]
fn main() {
let upgrade_cb = wasm_bindgen::closure::Closure::wrap(  
    Box::new(move |event| {  
        leptos::log!("Doing database upgrade or setup");  
    }) as Box::<dyn Fn(Event)->() >  
);
}

We can't just write Fn(Event) -> (); we need the actual type of the parameter to work with it. So how do we go about finding that type? We can go to the documentation page for the method and look at the event type listed. It is stated as IDBVersionChangeEvent. If I look that up in Rust's required PascalCase, IdbVersionChangeEvent, I'll find web_sys::IdbVersionChangeEvent. We do need to add the feature to our cargo.toml as per the documentation as well; the feature is IdbVersionChangeEvent. By now you should be seeing a pattern in how we're working through this problem: we search for JavaScript examples (as Rust examples are few and far between), and then look up the web_sys equivalents.
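
Following that pattern, the accumulated web-sys feature list in our cargo.toml might now look something like this (the exact set depends on which APIs your code has touched so far):

[dependencies.web-sys]  
features = [ "Window", "IdbFactory", "IdbOpenDbRequest", "IdbRequest", "IdbVersionChangeEvent" ]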

Let's update the event with our expected type and the type in the as Box::<> part.

#![allow(unused)]
fn main() {
let upgrade_cb = wasm_bindgen::closure::Closure::wrap(  
    Box::new(move |event: web_sys::IdbVersionChangeEvent| {  
        leptos::log!("Doing database upgrade or setup");  
    }) as Box::<dyn Fn(web_sys::IdbVersionChangeEvent) -> ()>  
);
}

Important: The upgrade callback will only run if the database has not been initialized, or on a version change where the new integer version number is greater than the existing one. We haven't discussed version changes, so for the time being you can change the name of the database to create a new one, which always triggers the upgrade callback.

You'll frequently run into issues where you won't know what the type is on the JavaScript side of things. In this case, we want to work with web_sys::IdbVersionChangeEvent.target(), but we don't know what the return type of target() is. What I'll often do is log the value to the browser's console and look for hints about the type that I should cast the value into.

#![allow(unused)]
fn main() {
let upgrade_cb = wasm_bindgen::closure::Closure::wrap(  
    Box::new(move |event: web_sys::IdbVersionChangeEvent| {  
  
        match event.target() {  
            Some(event_target) => {  
                leptos::log!("{:?}", event_target);
            },
			None => {}  
        }    
	}) as Box::<dyn Fn(web_sys::IdbVersionChangeEvent) -> ()>  
);
}

The above logs EventTarget { obj: Object { obj: JsValue(IDBOpenDBRequest) } } to the console. This tells me that I can cast the value by calling unchecked_into::<IdbOpenDbRequest>(). It's important to note two things here: 1) the Rust type has different casing than the JavaScript object type listed in the JsValue; 2) you will likely need to enable the feature for that type in web-sys.

We'll continue this same pattern to get the result, cast the result, and we'll be left with our database which we can initialize as a store for some form of data:

#![allow(unused)]
fn main() {
let upgrade_cb = wasm_bindgen::closure::Closure::wrap(  
    Box::new(move |event: web_sys::IdbVersionChangeEvent| {  
  
        leptos::log!("Doing database upgrade or setup");  
        
        match event.target() {  
            Some(event_target) => {  
                let open_request = event_target
	                .unchecked_into::<web_sys::IdbOpenDbRequest>();  
                match open_request.result() {  
                    Ok(newly_opened_idb) => {  
                        let newly_opened_idb = newly_opened_idb
	                        .unchecked_into::<web_sys::IdbDatabase>();  
                        // Do things here  
                    }  
                    Err(_) => {}  
                }
			}, 
		   None => {}  
        }    
	}) as Box::<dyn Fn(web_sys::IdbVersionChangeEvent) -> ()>  
);
}

Recall that we can use web_sys::IdbDatabase because web_sys is brought into scope via use leptos::*.

Structuring the database

We now have a newly opened database in newly_opened_idb. We need to set up some tables in the database. IndexedDB doesn't use tables, though; it uses object stores. Each object in an object store is associated with a key. An object store can use a key path (you tell it how to source the key from the object being stored, key_path) or a key generator (auto_increment). Object stores can contain objects and primitive data. If they contain objects, they may also have indexes, which can enforce specific rules and make queries faster.
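
As a hedged sketch of the very next step (assuming the web-sys IdbDatabase and IdbObjectStore features are enabled in cargo.toml, and using a made-up helper name and store name), creating a simple object store from inside the upgrade callback could look roughly like this. Key paths and auto-increment are configured through an optional-parameters variant of this call, which we don't cover here.

#![allow(unused)]
fn main() {
// Illustrative helper we might call from the upgrade callback with the
// freshly opened database. Object stores can only be created during an
// upgrade (versionchange) transaction.
fn create_notes_store(db: &web_sys::IdbDatabase) {
    match db.create_object_store("notes") {
        Ok(store) => leptos::log!("created object store {:?}", store.name()),
        Err(err) => leptos::log!("could not create object store: {:?}", err),
    }
}
}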

Writing data

Querying data

Creating stores and indexes as storage bounds

Server

What are a server and a client? Why does the divide exist?

An analogy to understand the server and client

  • Warehouse of tools that we load up into our trucks before going to a job site.
  • We have two job sites.
  • There are some tools that we use at one and not the other
  • There are some tools that we bring to both
  • There are some tools that we call the same thing at both, but the way they function might differ. We can give instructions that talk about using the same named tool at both job sites. This is what we're referring to when we say #isomorphic.

Isomorphic: corresponding or similar in form and relations. #isomorphic

The analogy applied to web technology

Our Rust project's src folder is the warehouse in our analogy. The server and client are two job sites where work (computation) will be done. The trucks being sent to the job sites, full of purpose-selected tools and instructions, are distributions. In the case of Rust, we compile a binary for distribution to the server, and we compile WASM as part of the distribution for the web client/browser. I say part of because we include a few other static files to help set up our distribution at that job site (load our WASM in a browser).

Where things get interesting is that the server is like a closed job site: there's a fence around it. The client is like an open job site. It's less open thanks to WASM being compiled, but it's still a place where we invite the public (users) to participate. The client job site might need some prefabricated constructions that only the closed job site can create. It's able to make those requests, but the closed job site will vet them first before passing them to its workers, and the client job site will have to wait for what it needs. This exemplifies the request/response pattern of server and client. In the case of our application, the public is able to request

Our server is built in Rust. It runs as a binary executable on a computer in a warehouse (a data center), or as an executable in a network of computers like Cloudflare Workers.

Cargo Leptos

Full stack development with Cargo Leptos

Official documentation for Cargo Leptos

Introduction

Cargo Leptos is a build tool for Leptos. When you set up a project with Cargo Leptos you'll receive a web server that is set up to handle web requests, including responding to certain requests with your Leptos client side UI. It makes building applications where a front end communicates with a back end (full stack) a piece of cake.

A full set of features can be found in the official documentation. You'll find a lot of the conveniences you may have come to expect, or which have become industry standard for UI framework build tools.

Installing Cargo Leptos

Cargo Leptos is a standalone application which you will need to install on your computer. You will need to have cargo preinstalled. If you do, enter the following command in your shell/terminal.

cargo install cargo-leptos

You can confirm the installation by checking the active version.

cargo leptos -V

Dependencies

You may need to use nightly Rust. You can set the default with:

rustup default nightly

You may also be required to install the rustup wasm target:

rustup target add wasm32-unknown-unknown

You should switch to nightly before installing the wasm target.

Setting up a new application

You can initialize a new axum project with:

cargo leptos new --git https://github.com/leptos-rs/start-axum

You'll be asked to enter a project name.

You can then change directory (cd) into the project folder and run your app.

cd my-project-name
cargo leptos watch

The above bash script should print something like the following to your terminal:

 Finished dev [unoptimized + debuginfo] target(s) in 0.62s
       Cargo finished cargo build --package=start-axum --lib --target-dir=target/front --target=wasm32-unknown-unknown --no-default-features --features=hydrate
    Finished dev [unoptimized + debuginfo] target(s) in 0.67s
       Cargo finished cargo build --package=start-axum --bin=start-axum --target-dir=target/server --no-default-features --features=ssr
      Notify watching folders public, style, src
listening on http://127.0.0.1:3000

This means that we can now visit the url 127.0.0.1:3000, and see our placeholder Leptos app. Leptos will use port 3001 by default to watch for updates.

Adding HTTPS for local development

Axum, inside Cargo Leptos' setup, will attempt to stream content from the server to the client, especially in situations where suspense components are used. Streaming is only supported with HTTP/2, which is only available through TLS (via https).

We can use a program called Caddy to create a reverse proxy. This will allow you to visit a site on your computer at https://leptos.localhost, and have requests/responses forwarded through that boundary to the Leptos server.

Setting up a reverse proxy is pretty easy.

Step 1 is to install Caddy from https://caddyserver.com, or with brew install caddy if you are on OS X and have Homebrew installed

Step 2 is to create a Caddyfile in your project folder. The file has no extension.

leptos.localhost { 
	reverse_proxy 127.0.0.1:3000 
}

Step 3 is to start the caddy server:

caddy run

File Structure

The following outlines the folder structure of our Cargo Leptos project and the purposes of each folder to provide an overview of what goes where and why.

  • /public: Contains static assets that will be served. Will be copied to path set in the config for site-root.
  • /style: Contains scss files that will be processed and written to the location of two concatenated configs ( site-root + site-pkg-dir ). Recall that site-pkg-dir is relative to the site-root.
  • /src: The rust source of your application.
    • main.rs: The file with main functions as entry points for the server and client applications. This file configures and starts our server with our Leptos app as a service to handle requests.
      • #[cfg(feature=ssr)]: A macro that prefixes code that will only be compiled for the server side application binary (the app).
      • #[cfg(not(feature=ssr))]: A macro that prefixes code that will only be compiled for the client side application. The client side application and its javascript will be written to the site-root/site-pkg-dir.
    • app.rs: This file is where we setup our main app component which contains routes and specifies what ends up in responses to client requests. From here it's just a matter of creating Leptos components as views for paths and building out your app.
    • lib.rs: Cargo Leptos supports hydration. This allows us to serve (via SSR) minimal content to the client. It's usually a skeleton or shell of a ui. Hydration is the act of adding the substance to the shell. We "hydrate" it and bring it to life. The purpose of this is to reduce the initial response time. The hydration function in lib.rs is used as WASM on the client to request a hydrated version of a response once it's loaded. Leptos knows when it is or is not in hydration mode, allowing it to serve the shell ui and populated/hydrated ui separately.
    • error_templates.rs: Contains Leptos Views that will serve as responses for axum errors like 404 for missing endpoints/urls.
    • fileserv.rs: Serves static files if ssr is enabled
  • /end2end: Contains end-to-end tests and the Playwright config

Configuring Cargo Leptos

Cargo.toml

Configuration for Cargo Leptos can be done in the cargo.toml. We do this by adding package metadata under the leptos key.

[package.metadata.leptos]
# The name used by wasm-bindgen/cargo-leptos for the JS/WASM bundle. Defaults to the crate name
output-name = "start-axum"

# The site root folder is where cargo-leptos generate all output. WARNING: all content of this folder will be erased on a rebuild. Use it in your server setup.
site-root = "target/site"

# The site-root relative folder where all compiled output (JS, WASM and CSS) is written
# Defaults to pkg
site-pkg-dir = "pkg"

# [Optional] The source CSS file. If it ends with .sass or .scss then it will be compiled by dart-sass into CSS. The CSS is optimized by Lightning CSS before being written to <site-root>/<site-pkg>/app.css
style-file = "style/main.scss"

# Assets source dir. All files found here will be copied and synchronized to site-root.
# The assets-dir cannot have a sub directory with the same name/path as site-pkg-dir.
#
# Optional. Env: LEPTOS_ASSETS_DIR.
assets-dir = "public"

# The IP and port (ex: 127.0.0.1:3000) where the server serves the content. Use it in your server setup.
site-address = "127.0.0.1:3000"

# The port to use for automatic reload monitoring. Make sure this port is not the same as the port used in site-address.
reload-port = 3001

# [Optional] Command to use when running end2end tests. It will run in the end2end dir.
# [Windows] for non-WSL use "npx.cmd playwright test"
# This binary name can be checked in Powershell with Get-Command npx
end2end-cmd = "npx playwright test"
end2end-dir = "end2end"

# The browserlist query used for optimizing the CSS.
browserquery = "defaults"

# Set by cargo-leptos watch when building with that tool. Controls whether autoreload JS will be included in the head
watch = false

# The environment Leptos will run in, usually either "DEV" or "PROD"
env = "DEV"

# The features to use when compiling the bin target
#
# Optional. Can be over-ridden with the command line parameter --bin-features
bin-features = ["ssr"]

# If the --no-default-features flag should be used when compiling the bin target
#
# Optional. Defaults to false.
bin-default-features = false

# The features to use when compiling the lib target
#
# Optional. Can be over-ridden with the command line parameter --lib-features
lib-features = ["hydrate"]

# If the --no-default-features flag should be used when compiling the lib target
#
# Optional. Defaults to false.
lib-default-features = false

Error: Address already in use

Interrupting Cargo Leptos may result in the port being bound after Cargo Leptos terminates. Using cargo leptos watch again will yield an error message stating:

thread 'main' panicked at 'error binding to 127.0.0.1:3000: error creating server listener: Address already in use (os error 48)', 

You will receive the same error if your site-address is using the same port as the reload-port.

Cargo Leptos main.rs

The server is run from src/main.rs . In this file we'll see macros like #[cfg(feature = "ssr")]. These macros tell Rust to only include the following function if the "ssr" feature is enabled. SSR stands for "Server Side Rendering". This is what our web server is doing. You may have noticed that the ready message for Cargo Leptos has "--features=ssr" in it.

Server only use statements

At the very top we'll see a set of use statements wrapped in a macro that roughly reads as "if ssr is enabled, do the following."

#![allow(unused)]
fn main() {
cfg_if::cfg_if! { if #[cfg(feature = "ssr")] {
	//...
}}
}

Here we're including the external crates required for the Axum web server.

Server only main function

Next we'll see our main function. Above it we have a feature macro which only allows the compiler to see this function if the ssr feature flag is set.

#![allow(unused)]
fn main() {
#[cfg(feature = "ssr")]
}

Async server only main function

Then we have another curious macro:

#![allow(unused)]
fn main() {
#[tokio::main]
}

Tokio is an asynchronous (async) runtime for Rust. This macro initializes Tokio as the software that will handle the execution of functions using the async keyword.

To make sense of this we'll need to go over the difference between synchronous and asynchronous code. Code that is synchronous runs sequentially: one piece of code runs after the next, in order. This is very predictable and easy to reason about because there is a linear timeline. There are problems with synchronous code though. It can only do one thing at a time. If we had a web server, we would only be able to handle one request at a time. What we would like to do is handle requests as they come in, whenever they come in. They should be handled out of sync, or async. ^.^

Thinking about synchronous and asynchronous code involves a bit of a paradigm shift: a change in the way you think about code.

Here's an example of a synchronous process written as simple instructions:

Count to five and make a sandwich
Eat the sandwich when it's ready

In synchronous code the result of these instructions would look like this:

1,2,3,4,5
(started making a sandwich)
(finished making a sandwich)
(consumed the sandwich)

Asynchronous code would yield something more like this:

1,
(started making a sandwich)
2,
3,
4,
(finished making a sandwich)
(consumed the sandwich)
5,

This result occurs because these tasks are running concurrently. We're counting AND making the sandwich, asynchronously.

In many languages we would call these promises. We don't have a sandwich, but we have a stand-in: a promissory value that it will become a sandwich. In Rust, we call these Futures. The result of calling an async function is a value that implements the Future trait, which is to say, a value with a set of methods we are guaranteed to be able to call.

Two approaches are commonly used when dealing with asynchronous code.

  1. Callbacks: We don't know when an async function will conclude. In order to use the data that results from it, or to do something upon completion, we provide a callback: a function to be run on completion/failure/resolution of the async function. In Rust this is done through combinators like the then() method (provided by the futures crate's FutureExt trait).
#![allow(unused)]
fn main() {
// Get a copy of me
let hungry_me = get_the_author();
// imagine that make_sandwich was an async function
make_sandwich() 
	.then(      
		move |sandwich| hungry_me.eat(sandwich) 
	);	
// This line will run while the sandwich is being made
}
  2. Awaiting: We can wait for the async function to complete, pausing this part of the program and letting it resume when it's complete. This allows async code to read as synchronous code. Awaiting happens inside other async tasks. Now om_nom_nom can run and take the time it needs without holding up the rest of the application.
#![allow(unused)]
fn main() {
async fn om_nom_nom() {
	let hungry_me = get_the_author();
	let sandwich = make_sandwich().await;
	hungry_me.eat(sandwich);
}
}

Why do we need tokio?

There is an async runtime built into the browser which can be taken for granted when working on the front end in JavaScript. For a systems language, the needs around how async tasks are handled are varied, and if the Rust language developers baked in one runtime and got it wrong, we'd be stuck with it. To avoid this they've said, "Hey, we're going to tell you which syntax to use when you write async code. We'll tell you which methods you can call on Futures with the trait specification, but we're not going to tell you how it'll work. You'll need to bring your own implementation." This gives us huge freedom because we can decide how async code actually runs! Tokio is handling the implementation of that runtime for us. Very rad.
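As a minimal sketch of what that attribute gives us (independent of Leptos, and assuming tokio is in your Cargo.toml with its macro and runtime features enabled), #[tokio::main] turns an async main into a regular main that starts the Tokio runtime and drives the future to completion:

// #[tokio::main] expands into a synchronous main() that starts the Tokio
// runtime and blocks on this async function until it finishes
#[tokio::main]
async fn main() {
    // any future can be awaited here because the runtime is driving it
    let answer = async { 40 + 2 }.await;
    println!("the answer is {answer}");
}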

Inside the server main function

We'll see a few main components inside the main function:

  1. Initialization of a logger that gives nicely formatted and timestamped console log output.
  2. Setup of the server's config
  3. Creation of our application
  4. Injection of our application into the axum server so that our app will be used to handle the processing of requests into responses. Axum handles the nitty-gritty of preparing the requests and formatting the final parts of the response so that we can focus on our application-specific concerns, such as updating state and rendering UI.

1) Simple Logger

Simple logger allows us to write nice log messages like this:

#![allow(unused)]
fn main() {
log!("listening on http://{}", &addr);
}

2) Server config

We're calling get_configuration() with an argument of None, which will load the configuration from the settings Cargo Leptos provides. We could also replace None with Some("./Cargo.toml") to load the settings we've established in our Cargo.toml. In the example you'll see them listed under the [package.metadata.leptos] heading.

leptos_options is a property on the conf that was set up using the Cargo.toml settings. We need to pass the options to a few places, so we've assigned it to its own variable. We've also cloned the site address so that we can pass it around without it being a reference into a struct. If we didn't clone it, Rust would likely complain that leptos_options has moved while we still hold a reference into it that is used elsewhere (like in the log message and to bind the server to the address).

We generate the routes list from a view which contains our App Leptos component. If you peek into /src/app.rs you'll see the declarative way routes are listed. We'll get into that later. Just know that this function is reading our declarative routes and turning them into routes that the web server (Axum) can use.

3) Setup our app as a service for axum

Our app will be a service which axum calls upon to handle requests and provide responses. Axum's Router is the starting block for building this out. We initialize a new router, then we append a route to it which is specially designed to work with Leptos. We say that axum will have routes under /api/*fn_name which use the function name to connect with our Leptos server functions. This is achieved through the leptos_axum integration module's handle_server_fns.

The leptos_axum integration adds a method to the axum::Router struct which allows us to configure the axum::Router with leptos config. That's what's happening here:

#![allow(unused)]
fn main() {
.leptos_routes(leptos_options.clone(), routes, |cx| view! { cx, <App/> })
}

The fallback allows us to serve the error_template component from /src/error_template.rs if a route couldn't be found to serve. The internals of fallback are also responsible for serving static files like WASM and JS, because those aren't routes either.

And finally, layer allows us to add an extension containing an Arc (Atomically Reference Counted) value. The extension allows us to create something that all request handlers can see. The Arc makes it safe to share the data across threads (required for axum). In the Arc we place the leptos_options.

4) Start the web server

In this last step we bind the address to the axum server and tell it to serve our app, which we convert into a service for axum to use. We await it so that it continues to run until the process shuts down. This prevents the main function from hitting the end of its scope and ending the application's run.
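As a rough sketch, using the axum 0.6-style API that the template was generated with (your main.rs may differ slightly), that last step looks something like this:

// bind the address from our Leptos config and hand every request to our app;
// awaiting keeps main alive until the server shuts down
axum::Server::bind(&addr)
    .serve(app.into_make_service())
    .await
    .expect("server error");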

Client side main function

At the bottom of main.rs you'll see the following:

#[cfg(not(feature = "ssr"))]  
pub fn main() {  
    // no client-side main function    
    // unless we want this to work with 
    // e.g., Trunk for pure client-side testing    
	// see lib.rs for hydration function instead
}

The #[cfg(not(feature = "ssr"))] attribute above the function tells Rust to include it only if we're not in ssr (server) mode, i.e. when the ssr feature is disabled. That is the case when we're compiling for the client side. In this setup we will only be looking at server side handling/routing.
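For reference, the hydration function that the comment above points to lives in src/lib.rs. A rough sketch of what the template generates there (your generated lib.rs may differ slightly):

// lib.rs (sketch): the client-side entry point, compiled only for the
// "hydrate" (client) build and exported to JavaScript via wasm-bindgen
#[cfg(feature = "hydrate")]
#[wasm_bindgen::prelude::wasm_bindgen]
pub fn hydrate() {
    use crate::app::*;
    use leptos::*;

    // attach the reactive system to the HTML the server already rendered
    leptos::mount_to_body(move |cx| {
        view! { cx, <App/> }
    });
}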

Cargo Leptos app.rs

Overview

app.rs is the main entry point into the Leptos application.

We've brought Leptos, Leptos meta, and Leptos router into scope:

#![allow(unused)]

fn main() {
use leptos::*;  
use leptos_meta::*;  
use leptos_router::*;

}

And we define two components, App and HomePage

#![allow(unused)]
fn main() {
#[component]  
pub fn App(cx: Scope) -> impl IntoView {
	//...
}

#[component]  
pub fn HomePage(cx: Scope) -> impl IntoView {
	//...
}
}

App Leptos Component

The App component is like our fn main() but for Leptos. It's the top level component that we pass into our server setup.

Provide Meta Context

The first thing we do in our app is set up the meta context.

#![allow(unused)]
fn main() {
provide_meta_context(cx);
}

This allows us to embed metadata and attach resources to the page that should be included between the <head>...</head> tags of the server's response. Non-meta information will be included between the response's <body>...</body> tags.

App component view

Our App component returns impl IntoView. We satisfy this component return requirement by creating a view with our view! macro. You can think of this as the top level view. It's like a wrapper around your application.

#![allow(unused)]
fn main() {
view! {  
    cx,  
  
    // injects a stylesheet into the document <head>  
    // id=leptos means cargo-leptos will hot-reload this stylesheet    
    <Stylesheet id="leptos" href="/pkg/start-axum.css"/>  
  
    // sets the document title  
    <Title text="Welcome to Leptos"/>  
  
    // content for this welcome page  
    <Router>  
        <Routes>  
			<Route path="" view=|cx| view! { cx, <HomePage/> }/>  
		</Routes>  
    </Router>  
}
}

Global meta

The first two aspects of this view are tags that will be moved into the <head> tags thanks to leptos_meta. Leptos meta will always pluck out meta tags and put them in the <head> for us!

Global router

The second component we have in our App view is a Router component with a set of Routes. This is the format that we use to declaratively specify routes in Leptos. The server integration is able to pull these components out and use them to set up axum for us. The view for each route can be any valid view. You can use a page-style component like the HomePage component, or you can actually just add HTML into the view macro and write the whole thing inline. I wouldn't recommend it, but you could.

It's also possible to add meta tags which will override the global meta that we set above.

#![allow(unused)]
fn main() {
view! {  
    cx,  
  
    // injects a stylesheet into the document <head>  
    // id=leptos means cargo-leptos will hot-reload this stylesheet
    <Stylesheet id="leptos" href="/pkg/start-axum.css"/>  
  
    // sets the document title  
    <Title text="Welcome to Leptos"/>  
  
    // content for this welcome page  
    <Router>  
		<Routes>  
			<Route path="" view=|cx| view! { cx, <HomePage/> }/>  
			<Route path="/hi" view=|cx| view! { cx, 
				<Title text="Hi"/><h1>"Hello"</h1> }
			/>  
		</Routes>  
    </Router>  
}
}

Note how in our hi route, we set the response (the page) title to "Hi" and output plain HTML.

HomePage Component

#![allow(unused)]
fn main() {
/// Renders the home page of your application.  
#[component]  
fn HomePage(cx: Scope) -> impl IntoView {  
    // Creates a reactive value to update the button  
    let (count, set_count) = create_signal(cx, 0);  
    let on_click = move |_| set_count.update(|count| *count += 1);  
  
    view! { cx,  
        <h1>"Welcome to Leptos!"</h1>  
        <button on:click=on_click>"Click Me: " {count}</button>  
    }}
}

It's common to have a component per route. This keeps the router clean and easy to read. It also encapsulates all of that view into a single area. At this point you should feel at home. From here it's a matter of creating views and building out your application.

We'll explore server/client interaction as we progress through the lesson, but now you're up and running with a full-stack, fully typed web application! Hooray!

Leptos Meta

There are aspects of a server's HTML response that don't live between the <body> tags of the document. It is important that we're able to modify those parts of the response so that we can change a page's title, embed styles and scripts, and so forth.

leptos_meta is an external crate that Cargo Leptos uses to manage response metadata and associated page data. It looks at the UI generated from a page's views, pulls out special tags, and moves them to where they need to be.

For example, let's say you had a blog post page. You can use the <Title /> tag to set the page's title at the same point where you output the <h1>...</h1> with the actual title of the blog post that readers will see.

There are a few behaviours of metadata or associated resources that are managed by leptos_meta:

  1. Fixed document tags that leptos_meta will update outside of the <head> content.

    • <Html />
    • <Body />
  2. Content that will be hoisted from components and placed or updated in the <head>

    • <Link>
    • <Meta>
    • <Script>
    • <Style>
    • <Stylesheet>
    • <Title>

Official documentation for leptos_meta

Available tags

The following tags are available for use with leptos_meta. A full list of their properties (required and optional) can be found in their respective linked documentation.

Fixed document tags

<Body />

#![allow(unused)]
fn main() {
<Body class="cool-body-class"/>
}

<Html />

#![allow(unused)]
fn main() {
<Html lang="he" dir="rtl"/>
}

Head tags

Note: All of these examples are expected to be written inside the view! macro's template, with the exception of the formatter for <Title>, which is created outside of the view macro.

<Link>

#![allow(unused)]
fn main() {
<Link 
	rel="preload"
	href="myFont.woff2"
	as_="font"
	type_="font/woff2"
	crossorigin="anonymous"
/>
}

<Meta>

#![allow(unused)]
fn main() {
<Meta charset="utf-8"/>
<Meta name="description" content="A Leptos fan site."/>
<Meta http_equiv="refresh" content="3;url=https://github.com/leptos-rs/leptos"/>
}

<Script>

#![allow(unused)]
fn main() {
<Script>
	"console.log('Hello, world!');"
</Script>
}

<Style>

#![allow(unused)]
fn main() {
<Style>
	"body { font-weight: bold; }"
</Style>
}

<Stylesheet>

#![allow(unused)]
fn main() {
<Stylesheet href="/style.css"/>
}

<Title>

#![allow(unused)]
fn main() {
let formatter = |text| format!("{text} — Leptos Online");

<Title formatter/>

// .. or ..

<Title text="The title of my page"/>

}

Leptos Router

Official documentation for leptos_router

Location as state

Our web apps start in an initial state. They load and present us with a default user interface. In the context of these lessons, that default interface would come from the view! macro result returned by our App component. Recall that the App component is passed into our axum server integration and serves the same purpose for Leptos as fn main() does for a Rust binary (application).

The way we change the state of our application is by interacting with it. Intuitively this might seem obvious. We move our mouse, click on something, press some keys, and if our application has deemed those events "state changing interactions" then our state updates accordingly. What might seem less intuitive is thinking of navigating to a different page as a state change as well.

For example, clicking on a link to go to an about page changes the state of the application from "viewing the default" to "viewing about". It's almost as if the context of your future interactions has changed. It can be very tempting to think about paths like /about as a location or file. I encourage you to think about it as a location in a state graph. Your application is running the same main program to handle all requests. /about is not a different program. It is your program in an /about state.

This is a very powerful mental model that is seldom discussed, but which will serve you well: the user interface that you are seeing is a function of your application with events applied.

In Leptos you can think of routes as conditional statements that act over the current URI (the location you are at in the website, along with its query variables). My hope is that this mental model will serve you well as you use paths/locations as routes to slice up states of your application. If done well, your application will have discrete states with well-established guards between them, which will make developing your application's UI an absolute joy.
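To make this concrete, here's a tiny framework-free sketch of that mental model (purely illustrative, nothing Leptos-specific): the rendered output is just a function of the current location.

// illustrative only: one program whose output is a function of its location "state"
fn render(path: &str) -> &'static str {
    match path {
        "/" => "the default view",
        "/about" => "the about view",
        _ => "a not-found view",
    }
}

fn main() {
    // the same program, observed in two different states
    println!("{}", render("/"));
    println!("{}", render("/about"));
}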

Router

leptos_router is the external Leptos crate that handles switching which components are displayed to the user based on the location of a web request. This location may come from client side routing or server side routing, thanks to the router's flexibility.

  • Client side routing: A user clicks on a link and the browser's history receives a new entry for the new page. The WASM module will render a new UI for the given location, but this UI is generated purely on the client. A new page is not requested from the server because the client is able to route requests or conditionally show content based on the active history. Interestingly, the browser's JavaScript call to add a new location to its history is called pushState. Like I said, location is state!

  • Server side routing: A user clicks on a link with a specific path. The path is added to the history and a request is made to the server. Information about the request, like the requester's IP, browser, cookies set for the domain of the application, and query variables (variables specified after the ? character in the request path), is sent with the request. The server looks at the request and makes a judgement about what to return given the state of the application set by that input. The response is a function of the request! Additionally, a user may click submit on a form, sending form data as part of the other request info to the form's action URL. It's like clicking on a link, but adding extra data along with it. The action URL, like a web link, may also have query variables. If the form's submit method is "GET", all values will be appended to the action URL as query variables. If the submit method is "POST" the values will be sent as part of the web request as the aforementioned form data, leaving the action URL untouched.

Setting up routes

Leptos routes are set up by creating one <Router>...</Router>. Within the router is a <Routes> container which may have one or more <Route> components. And again, all of these are placed in the top-level component's view! macro template.

Each <Route> component has the following properties:

  • path: A string that identifies the location. Tokens can be created to extract values from the location by prefixing the name of the token with a colon :.
  • view: A closure that returns a value that implements IntoView, like the result of a view! macro. The value of the view property must be a closure so that it can be executed as required.
#![allow(unused)]
fn main() {
<Router>
	<Routes>
		<Route path="/" view=move |cx| {
		  view! { cx, "This is the home page" }
		}/>
		<Route path="/about" view=move |cx| {
		  view! { cx, "This is the about page" }
		}/>
		<Route path="/articles/:id" view=move |cx| {
		  view! { cx, "This is a view for an article with 
				the id extracted from whatever is after 
				`/articles/` in the uri" 
		  }
		}/>
	</Routes>
</Router>
}

Route components

A single route can be extracted and turned into a route component by using the modified component macro #[component(transparent)]

#![allow(unused)]
fn main() {
use leptos::*;
use leptos_router::*;

#[component]
pub fn App(cx: Scope) -> impl IntoView {
	view! { cx,
		<Router>
			<Routes>
				<Route path="/" view=move |cx| {
				  view! { cx, "This is the home page" }
				}/>
				<Route path="/about" view=move |cx| {
				  view! { cx, "This is the about page" }
				}/>
				<ArticleRoutes />
			</Routes>
		</Router>
	}
}

#[component(transparent)]
pub fn ArticleRoutes(cx: Scope) -> impl IntoView {
	view! { cx,
		<Route path="/articles" view=move |cx| {
		  view! { cx, "A list of available articles"}
		}/>
	}
}
}

Carrying route views forward with nested routes

Let's imagine that you have an online magazine. Each issue has a cool header graphic and links to the articles in that issue. This header carries across all of the articles, and we'd like it to stay consistent instead of re-rendering every time. Nested routes allow us to do just that. A nested route will match the next part of a URI, and the matching route's view will be displayed in the parent route's <Outlet /> tag.

Here's an example of the structure.

#![allow(unused)]
fn main() {
view! {  
    cx,  
    <Router>  
        <Routes>  
            <Route path="" view=move |cx| view!{cx, "App index page"} />  
  
            <Route path="/a-parent-route" view=move |cx| view!{cx,  
                <h1>"A parent route"</h1>
                <p>  
                    "A heading for the child routes.  
                    The active route will replace the   
                    Outlet tag"  
                </p>  
                <fieldset>  
				    <legend>"Outlet"</legend>  
					
					<Outlet/>  
					
				</fieldset>  
            } >  
            
                <Route path="/" view=move |cx| view!{cx,  
                    <p>"The parent page's default Outlet content"</p>
				}/> 
				 
                <Route path="/sub-route" view=move |cx| view!{cx,  
                    <p>"The parent page's Outlet content will 
	                 be populated by this if we navigate to 
	                 /a-parent-route/sub-route"</p>
                } />  
  
            </Route>  
        </Routes>  
    </Router>  
  
}
}

Simple links will work for navigating throughout your site/app as you would expect.

#![allow(unused)]
fn main() {
view! { cx,
	<a href="/about">About</a>
}
}

Forced navigation can be achieved by calling the navigate function.

#![allow(unused)]
fn main() {
let navigate = use_navigate(cx);

view! { cx,
	<button on:click=move |_| { _ = navigate("/about", Default::default()); }>
		"Go to about"
	</button>
}
}

Accessing state/route params

A view can extract parameter values by using the view's supplied Scope cx and the use_params_map function. The function returns a Memo<ParamsMap>. Memo is short for memoize; it's a type of cached value that allows us to access the data without recomputing it each time. A ParamsMap has the method get(key: &str) which returns an optional value. We want to clone it so that we can safely use it going forward, peeling off the reference. We then unwrap the result of cloned(), because cloned() returns an Option type. When we write .unwrap_or_default() we're saying, "Use the value of thing from Some(thing), or whatever the default method returns for the type we're using." At this point we have the parameter value as a String.

#![allow(unused)]
fn main() {
<Route path="/:some_token_name" view=move |cx| {  
	let params = use_params_map(cx);  
	let uri_param: String = params()  
		.get("some_token_name")  
		.cloned()  
		.unwrap_or_default();

	view! { cx, {uri_param} }  
}  
/>
}

The String type annotation on uri_param is not necessary, but I added it so that you can see which type uri_param is.

You may have specific data types that you want to use which you can parse from the string value provided as the param.

Get Method Forms (Form to Query String)

Submitting requests to a server (via url)

The internet works on a basis of request/response. We ask for something, and a server responds (or doesn't, which is still a response of sorts). When we enter a URL in our browsers, a request for that resource is sent to the server handling requests for the domain. This is an actual computer or network endpoint somewhere on the web. The DNS system looks up the address of that computer and sends the request on down to that actual address. If all goes well, the server will respond with a new page, static resource, etc—what you asked for.

We can configure those requests by adding query string variables after the URL, separated from it by a question mark (?). Each parameter and argument pair (often called a key/value pair) in the query string is separated by an ampersand (&). Query strings cannot contain spaces or certain special characters. You can imagine how problematic it would be if the value of one of the query string parameters had an ampersand in it. It's customary to encode those special characters for use as query string values/arguments. You'll have seen this all over the web: if you see %20, that is the encoded value of a space. Interestingly, they're called query strings because they add specificity to our resource (response) query, i.e. what we're asking for.

For example: https://some-non-existant-store.com/catalog?page=1&per_page=12&title=Cool%20Products
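As a toy illustration of that encoding (real code should lean on a crate like form_urlencoded or url rather than hand-rolling this), building the query string above might look like:

// illustrative only: percent-encode spaces so the value is safe in a query string
fn encode_spaces(value: &str) -> String {
    value.replace(' ', "%20")
}

fn main() {
    let url = format!(
        "https://some-non-existant-store.com/catalog?page=1&per_page=12&title={}",
        encode_spaces("Cool Products")
    );
    println!("{url}"); // ...&title=Cool%20Products
}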

This is the most common way that we make requests online. Adding a URL in an image source, entering a URL into your web browser, linking a CSS file: they all use this same approach. Take the request and respond with a resource.

Submitting requests to a server (via forms)

Important: The following uses <form> the HTML Tag. This should not be confused with <Form> the Leptos component.

Get method

It is possible to submit requests to servers while allowing user input. Forms are the foundational web tool for doing this. We do this by authoring a form with <form> ... </form> tags, and setting the action property on it to the URL that will process the request. It's where the form will be sent. One of the methods forms can use is called get, which will append the form's fields to the action URL as a query string. You can think of the form field names as the parameters, with their values as the arguments.

For example:

<form 
	  action="https://some-non-existant-store.com/catalog" 
	  method="get"
>
	<input type="number" name="page" value="1" />
	<input type="number" name="per_page" value="12" />
	<input type="text" name="title" value="Cool Products" />
	
	<input type="submit" value="Submit request">
</form>

Clicking the submit button would send the input field values as a query string appended to the action URL.

The benefits of using get:

  • URLs are visible
  • URLs are stored in your history with the query string allowing for navigation to and from the submitted form's response

Drawbacks of using get:

  • URLs are visible and make data that is submitted public
  • Data is stored in the history and can be introspected
  • URLs have a maximum length, limiting the amount of data that can be sent

Post method

Changing the method of our form to post will still take us to the URL written in the action property's value. However, the post method will collect the data and submit it as a payload alongside the request to the server. Post does not append the form's values to the request as a query string.

Benefits of using post:

  • Data is hidden from the history
  • Data doesn't pollute the url
  • Complex and multipart data can be sent

Drawbacks of using post:

  • When pressing reload, it's possible to resubmit the form

Why forms are an important part of the web platform

Direct resource requests and form submissions are the two main ways that we can use the web platform to submit requests and retrieve new data from servers. An important thing to note is that forms work out of the box. We do not need to iterate over fields, extract values, and then fire off some JavaScript event. They just work, even with JavaScript disabled! I encourage you to get comfortable with forms and what we're about to learn here. Forms, though old technology, will make your applications more accessible, more robust, and, depending on how you load your application, faster to reach an initial useful render.

Starting with a simple form

We'll build out a new Leptos application using cargo leptos. See Cargo Leptos Setup for details.

We'll modify our src/app.rs file to scaffold out two routes. One that holds our form and the other that holds the page loaded in response.

#![allow(unused)]
fn main() {
use leptos::*;  
use leptos_router::*;  
  
#[component]  
pub fn App(cx: Scope) -> impl IntoView {  
    view! {  
        cx,  
        <Router>  
            <Routes>  
                <Route 
	                path="" 
		            view=|cx| view! { cx, <FormPage/> }
				/>  
                <Route 
	                path="/form-action-page" 
	                view=|cx| view! { cx, <FormActionPage /> }
				/>  
            </Routes>  
        </Router>  
    }}  
  
#[component]  
fn FormPage(cx: Scope) -> impl IntoView {  
    view! { cx,  
        <form action="/form-action-page">  
            <input type="submit" value="Send request" />  
        </form>  
    }
}  
  
#[component]  
fn FormActionPage(cx: Scope) -> impl IntoView {  
    view! { cx,  
        <h1>"You submitted a form and we ended up here!"</h1>  
    }
}
}

If you run this with cargo leptos watch and click on the submit button you'll be taken to /form-action-page. Note that it's listed as the form's action. This is our simple proof of how forms traditionally work. Note that if no method is set, method=get is assumed.

Let's update the form page to have some data.

#![allow(unused)]
fn main() {
#[component]  
fn FormPage(cx: Scope) -> impl IntoView {  
    view! { cx,  
        <form action="/form-action-page">  
            <input type="text" name="secret" placeholder="Tell me a secret" />  
            <input type="submit" value="Send request" />  
        </form>  
    }
}
}

Get method

When a route gets processed, the selected route becomes part of the context. There is a set of helper functions that facilitate grabbing parts of the request for us to work with. As the query string is part of the request, we can look to leptos_router::use_query_map as a tool to extract query values.

We'll start by using use_query_map to get a memoized query map from the context. Memoization can be thought of as a cache that says, if we've retrieved the value once, just serve that value again instead of deriving/calculating it all over again.

#![allow(unused)]
fn main() {
fn FormActionPage(cx: Scope) -> impl IntoView {  
    let qm : Memo<ParamsMap> = use_query_map(cx);
    // ...
}
}

We need to get the value from the memoizing container.

#![allow(unused)]
fn main() {
let pm : ParamsMap = use_query_map(cx)
    .get();
}

But now we have a ParamsMap which we can try to get our query value from by providing it with the key.

#![allow(unused)]
fn main() {
let maybe_secret : Option<&String> = use_query_map(cx)
    .get()
    .get("secret");
}

And then of course we need to unwrap the string or provide a default if it doesn't exist.

#![allow(unused)]
fn main() {
let maybe_secret = use_query_map(cx)
    .get()
    .get("secret")
	.unwrap_or("No secret was provided");
}

The above will not work though. The value that we've provided for the unwrap_or is a string slice &str. The return type of getting the secret is a &String. We can't have two potential types, a &String or &str.

A solution to this is to convert the fallback message into a string with .to_string()

#![allow(unused)]
fn main() {
let maybe_secret = use_query_map(cx)
    .get()
    .get("secret")
	.unwrap_or("No secret was provided".to_string());
}

But we still have a problem: the fallback is an owned value, not a reference. We can add an ampersand (&) to make it a reference.

#![allow(unused)]
fn main() {
let maybe_secret = use_query_map(cx)
    .get()
    .get("secret")
	.unwrap_or(&"No secret was provided".to_string());
}

We're close, but Rust's compiler will remind us that we're doing something a bit silly here. We're evaluating this expression to create a string and then telling unwrap_or to use a reference to that string as the fallback, but the original string that the reference points to doesn't actually stick around. So what we need to do is create the fallback outside of this expression and provide a reference to it, so that the value lives long enough.

The params map also needs to be declared separately, because it too gives up a reference to some of its data through the Option<&String>. Rust is telling us, "Hey, you can't keep a reference to a thing that gets cleaned up right away."

#![allow(unused)]
fn main() {
use leptos::*;  
use leptos_meta::*;  
use leptos_router::*;  
  
#[component]  
pub fn App(cx: Scope) -> impl IntoView {  
    view! {  
        cx,  
        <Router>  
            <Routes>  
                <Route path="" view=|cx| view! { cx, <FormPage/> }/>  
                <Route path="/form-action-page" view=|cx| view! { cx, <FormActionPage /> }/>  
            </Routes>  
        </Router>  
    }}  
  
#[component]  
fn FormPage(cx: Scope) -> impl IntoView {  
    view! { cx,  
        <form action="/form-action-page" method="get">  
            <input type="text" name="secret" placeholder="Tell me a secret" />  
            <input type="submit" value="Send request" />  
        </form>  
    }}  
  
#[component]  
fn FormActionPage(cx: Scope) -> impl IntoView {  
    let fallback = "No secret was provided".to_string();  
    let qm = use_query_map(cx);  
    let pm = qm.get();  
    let the_secret = pm.get("secret")  
        .unwrap_or(&fallback);  
  
    view! { cx,  
        <h1>"You submitted a form and we ended up here!"</h1>  
        <p>{the_secret}</p>  
    }}
}

But as you can see, there's a problem here. Our secret is out in the open! Anyone looking at the history will see it! Now the whole world will know how much I love enchiladas.

Take a look at post_method_forms to help keep these secrets!

Keep them secret, keep them safe.

Post Method Forms (Form to Request Body)

A very old pattern in web development involves setting up routes which will be the action targets for forms. Their post data would be handled on that page. In fact, you could serve pages that were the result of a POST request type.

Leptos enforces a separation where routes respond to GET request types (URLs, or forms using method="get"), and POST request types are handled with server functions. Server functions return a blank page as a response because they're not expected to render anything. You can think of GET as a pull and POST as a push. We can, however, redirect after a POST/push to send the user to a route.

Server action dependencies

Server actions require communication between the client and the server. This requires data to be serialized and deserialized to transport the data from one to the other. We'll need to add the serde library to our cargo.toml to take care of this.

serde = {version = "1.0.152", features = ["derive"] }

Server actions hooked into the router

Leptos uses server functions to handle form actions. If we look at the main.rs that Cargo Leptos set up for us, we'll see the following:

#![allow(unused)]
fn main() {
// main.rs

let app = Router::new()  
    .route("/api/*fn_name", post(leptos_axum::handle_server_fns))  
    .leptos_routes(leptos_options.clone(), routes, |cx| view! { cx, <App/> })  
    .fallback(file_and_error_handler)  
    .layer(Extension(Arc::new(leptos_options)));
}

I'd like to draw your attention to this line:

#![allow(unused)]
fn main() {
.route("/api/*fn_name", post(leptos_axum::handle_server_fns))  
}

Here we're setting up routes with the prefix "/api" followed by the function name. This is where server functions are hooked into the router to handle our POST requests.

Setting up the routes

We start off with two routes: the one that has the form, and our destination after the form has been submitted.

#![allow(unused)]
fn main() {
// app.rs

use leptos::*;  
use leptos_router::*;  
  
#[component]  
pub fn App(cx: Scope) -> impl IntoView {  
    view! {  
        cx,  
        <Router>  
            <Routes>  
                <Route path="" view=|cx| view! { cx, <FormPage/> }/>  
                // We will not have a handler route here, 
                // because it will be created from the action  
                // <Route 
	            //    path="/form-action-handler" 
	            //    view=|cx| view! { cx, <FormActionHandler /> }
	            //  />                
	            <Route 
		            path="/form-action-processed" 
		            view=|cx| view! { cx, <FormActionProcessed /> }
				/>  
            </Routes>  
        </Router>  
    }}
}

We'll need some components setup which we'll do now:

#![allow(unused)]
fn main() {
// app.rs

#[component]  
fn FormPage(cx: Scope) -> impl IntoView {  
    view! { cx,  
        <form action="/api/form-action-handler" method="post">  
            <input 
	            type="text" 
	            name="secret" 
	            id="secret" 
	            placeholder="Tell me a secret" 
			/>  
            <input 
	            type="submit" 
		        value="Send request" 
			/>  
        </form>  
    }
}

#[component]  
fn FormActionProcessed(cx: Scope) -> impl IntoView {  
    view!{cx, 
	    "Server side response. This should 
	    display as a result of submitting the form."
	}  
}
}

Note that our action is now located at /api/form-action-handler.

Setting up the server functions

We'll add our server function:

#![allow(unused)]
fn main() {
// app.rs

#[server(FormActionHandler)]  
async fn form_action_handler(cx: Scope) -> Result<(), ServerFnError> {  
    println!("Form submitted");  
    Ok(())  
}
}

And we need to register the server function in our system before the routes are generated, so that a route can be generated for it. The #[server(FormActionHandler)] macro will expand and create a handle for us to register the derived server function. Keep in mind that the macro is writing a lot of boilerplate code for us. We're just adding the implementation here and declaring the intent. The handle to the macro-expanded server function is FormActionHandler.

We'll add our registration call in our main.rs file. We don't need to prefix this with anything because it's exposed at the top level of our library crate (it's not in a sub-module).

#![allow(unused)]
fn main() {
// main.rs

let _ = FormActionHandler::register();

}

If we run our application with cargo leptos watch at this point we'll get something that doesn't quite work. Submitting a form will take you to a page with the following error:

Could not find a server function at the route form-action-handler. 

It's likely that you need to call ServerFn::register() on the server function type, somewhere in your `main` function.

Leptos generates its own special namespaced URL for the action. If we add the following code in main.rs we can get Rust to spit out the actual "url".

#![allow(unused)]
fn main() {
let _ = FormActionHandler::register();

// 2 new temporary lines 
println!("{:?}", FormActionHandler::url() );
return;                                 
}

Which happens to be src-app.rs-form_action_handler.

Delete those two lines we added so that the application will run as expected and let's update our form with the new url part.

#![allow(unused)]
fn main() {
#[component]  
fn FormPage(cx: Scope) -> impl IntoView {  
    view! { cx,  
        <form action="/api/src-app.rs-form_action_handler" method="post">  
            <input type="text" name="secret" id="secret" placeholder="Tell me a secret" />  
            <input type="submit" value="Send request" />  
        </form>  
    }}
}

If we run our application now, we'll see "Form submitted" in the cargo leptos log stream. Commenting out the following line in main.rs will silence the debug log, making your dev logging easier to read:

#![allow(unused)]
fn main() {
// simple_logger::init_with_level(log::Level::Debug).expect("couldn't initialize logging");
}

Redirecting on submission

So now we're capturing the action, but we'd like to go to our destination "processed" page. We can use leptos_axum::redirect to do that:

#![allow(unused)]
fn main() {
#[server(FormActionHandler)]  
async fn form_action_handler(cx: Scope) -> Result<(), ServerFnError> {  
    println!("Form submitted");  
    leptos_axum::redirect(cx, "/form-action-processed");  
    Ok(())  
}
}

Important note about SSR (server side) vs CSR (client side): If you write a use statement like use leptos_axum::redirect; your server side binary will compile, but you will receive an error compiling the client side version, because leptos_axum is not a dependency of the client side code. The body of a server function only ever compiles in the server context, where leptos_axum is included, so writing the fully qualified leptos_axum::redirect() inside it is fine. A bare use statement, however, exists in both server and client contexts. To solve this we can wrap the use statement in a conditional compilation config.

#![allow(unused)]
fn main() {
cfg_if::cfg_if! {  
    if #[cfg(feature = "ssr")] {  
       use leptos_axum::redirect;  
    }  
}
}

Capturing post data

We want to capture form data from a form submission. We can do this in the server function by introspecting the request parts which are stored in the context provided to the server function.

We'll use use_context::<leptos_axum::RequestParts>(cx), with our type as the generic parameter, to extract the request parts from the context.

We can match over this to start pulling data out. The body of our parts, where our form data is stored, is held as bytes. We'll need to pass a reference to the parts' body and convert the bytes into a string slice &str.

#![allow(unused)]
fn main() {
match use_context::<leptos_axum::RequestParts>(cx) {  
    Some(parts) => {  
        let body: &str = std::str::from_utf8(&parts.body).unwrap_or_default();  
    },  
    None => {}  
}
}

body at this point will look like a query string with key=value&key=value formatting. We can naively parse this by splitting the string on ampersand (&) characters to get the key=value pairs. This is an extremely naive implementation and doesn't account for a variety of edge cases.

#![allow(unused)]
fn main() {
let body = std::str::from_utf8(&parts.body).unwrap_or_default();  
let data: Vec<Vec<&str>> = body  
    .split('&')  // split the string into key=value, key=value
    .map(|kv| {  // convert each key=value string into a vector of [key, value]
        kv.split('=').collect()  
    })
	.collect();
}

It would be a better idea to use the form_urlencoded crate. The above is included for educational purposes.

Now we need something to store our secret in.

#![allow(unused)]
fn main() {
#[derive(Debug)]
struct SecretData(String);
}

And we'll implement default for this as well.

#![allow(unused)]
fn main() {
impl Default for SecretData {  
    fn default() -> Self {  
        Self("".to_string())  
    }
}
}

We can now kind of hack together a parser that will loop (iterate) over the key/value pairs to pluck the data out.

#![allow(unused)]
fn main() {
let mut form_data = SecretData::default();

for key_val_pairs in data.iter() {  
    match key_val_pairs.get(0).map(|k|k.deref()) {  
        Some("secret") => {  
            match key_val_pairs.get(1) {  
                Some(data) => {  
                    form_data = SecretData( data.to_string());  
                }  
                _ => {}  
            }  
        },        
        _ => {}  
    }
}

}

This is a verbose version to show you the different stages of unwrapping.

The whole thing looks pretty gnarly though:

#![allow(unused)]
fn main() {
let mut form_data = SecretData::default();  
  
match use_context::<leptos_axum::RequestParts>(cx) {  
    Some(parts) => {  
        let body= std::str::from_utf8(&parts.body).unwrap_or_default();  
        let data: Vec<Vec<&str>> = body  
            .split('&')  
            .map(|kv| {  
                kv.split('=').collect()  
            })
            .collect();  
  
        for key_val_pairs in data.iter() {  
            match key_val_pairs.get(0).map(|k|k.deref()) {  
                Some("secret") => {  
                    match key_val_pairs.get(1) {  
                        Some(data) => {  
                            form_data = SecretData( data.to_string());  
                        }  
                        _ => {}  
                    }  
                },                
                _ => {}  
            }        
		}        
		println!("{:?}", form_data );  
  
    },  
    None => {}  
}
}

Let's wrap this all up in a nice function with some early returns to make the code less nested.

#![allow(unused)]
fn main() {
#[cfg(feature = "ssr")]  
fn parse_secret_data(cx: Scope) -> SecretData {  
    let parts = match use_context::<leptos_axum::RequestParts>(cx){  
        None => return SecretData::default(),  
        Some(parts) => parts  
    };  
  
    let body = match std::str::from_utf8(&parts.body) {  
        Err(_) => return SecretData::default(),  
        Ok(data) => data  
    };  
  
    let key_val_pairs: Vec<Vec<&str>> = body  
        .split('&')  
        .map(|kv| kv.split('=').collect() )  
        .collect();  
  
    for kvp in key_val_pairs.iter() {  
        match ( 
	        kvp.get(0).map(|k|k.deref()), 
	        kvp.get(1).map(|k|k.deref()) 
		) {  
            // keep scanning the remaining pairs if this one isn't the secret
            ( Some("secret"), Some(data)) => return SecretData( data.to_string() ),  
            _ => {}  
        }  
    }  
    SecretData::default()  
}
}
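For comparison, here's a rough sketch of the same extraction using the form_urlencoded crate mentioned earlier (this assumes form_urlencoded has been added to cargo.toml, and the function name here is just for illustration; it also percent-decodes values for us):

fn parse_secret_data_urlencoded(body: &str) -> SecretData {
    // form_urlencoded::parse yields percent-decoded (key, value) pairs from the body bytes
    for (key, value) in form_urlencoded::parse(body.as_bytes()) {
        if key == "secret" {
            return SecretData(value.into_owned());
        }
    }
    SecretData::default()
}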

Forwarding post data to the displayed/redirected route

The server isn't carrying state between the redirects. And actions are run independently of other aspects of Leptos. You can think of them as mini programs. We can, however, set a cookie that carries the data back to the client. We can also clear the data when we reach the destination page so that it's a short lived secret.

We'll use the following function to set our cookie:

#![allow(unused)]
fn main() {
#[cfg(feature = "ssr")]  
fn set_cookie(cx: Scope, name: &str, value: &str ) {  
    use axum::http::header::{HeaderMap, HeaderValue, SET_COOKIE};  
    use leptos_axum::{ResponseOptions, ResponseParts};  
  
    let response = use_context::<ResponseOptions>(cx)
        .expect("to have leptos_axum::ResponseOptions provided");  
    let mut response_parts = ResponseParts::default();  
    let mut headers = HeaderMap::new();  
    headers.insert(  
        SET_COOKIE,  
        HeaderValue::from_str(&format!("{name}={value}; Path=/"))  
            .expect("to create header value"),  
    );  
    response_parts.headers = headers;  
    response.overwrite(response_parts);  
}
}

We'll call set_cookie from our handler:

#![allow(unused)]
fn main() {
#[server(FormActionHandler)]  
async fn form_action_handler(cx: Scope) -> Result<(), ServerFnError> {  
    let secret_data = parse_secret_data( cx );  
    set_cookie(cx, "my-secret", &secret_data.0);   // <--new
    leptos_axum::redirect(cx, "/form-action-processed");  
    Ok(())  
}
}

We'll use the following function to read our cookie:

#![allow(unused)]
fn main() {
#[cfg(feature = "ssr")]  
fn cookie(cx:Scope, name: &str) -> Option<String> {  
  
    let parts = match use_context::<leptos_axum::RequestParts>(cx){  
        None => return None,  
        Some(parts) => parts  
    };  
  
    let cookies_hv = match parts.headers.get("cookie") {  
        None => return None,  
        Some(cookies_hv) => cookies_hv.as_bytes()  
    };  
  
    let cookies_str = match std::str::from_utf8(cookies_hv) {  
        Ok(cookies_str) => cookies_str,  
        Err(_) => return None  
    };  
  
    let key_val_pairs: Vec<Vec<&str>> = cookies_str  
        .split("; ")  
        .map(|kv| kv.split("=").collect() )  
        .collect();  
  
    for kvp in key_val_pairs.iter() {  
        if kvp.get(0).map(|k|k.deref()) == Some( name ) {  
            return kvp.get(1).map(|k|k.to_string());  
        }  
    }
      
    None  
  
}
}

When we submit the form, we trigger a server function in response. We're then forwarded to a route that displays the FormActionProcessed component. We'll update that component so that it reads our cookie's value and then sets the cookie to an empty value to clear it out.

#![allow(unused)]
fn main() {
#[cfg(feature = "ssr")]  
#[component]  
fn FormActionProcessed(cx: Scope) -> impl IntoView {  
    let secret = cookie(cx, "my-secret");  
    set_cookie(cx, "my-secret", "");  
    view!{cx,  
        "Server side response. This should display as a \  
        result of submitting the form. Your secret is: "  {secret}  
    }
}
}

In Summary

The final app.rs looks like this. Keep in mind, this is focusing on Server Side Rendering (SSR).

#![allow(unused)]
fn main() {
// app.rs

use std::ops::Deref;  
use std::str::{FromStr, Utf8Error};  
use leptos::*;  
use leptos_router::*;  
  
#[derive(Debug, Clone)]  
struct SecretData(String);  
  
impl Default for SecretData {  
    fn default() -> Self {  
        Self("".to_string())  
    }
}  

#[component]  
pub fn App(cx: Scope) -> impl IntoView {  
  
    view! {  
        cx,  
        <Router>  
            <Routes>  
                <Route 
	                path="" 
	                view=|cx| view! { cx, <FormPage/> }
				/>  
                <Route 
	                path="/form-action-processed" 
	                view=|cx| view! { cx, <FormActionProcessed /> }
				/>  
            </Routes>  
        </Router>  
    }
}  

  
#[cfg(feature = "ssr")]  
fn set_cookie(cx: Scope, name: &str, value: &str ) {  
    use axum::http::header::{HeaderMap, HeaderValue, SET_COOKIE};  
    use leptos_axum::{ResponseOptions, ResponseParts};  
  
    let response = use_context::<ResponseOptions>(cx)
	    .expect("to have leptos_axum::ResponseOptions provided");  
    let mut response_parts = ResponseParts::default();  
    let mut headers = HeaderMap::new();  
    headers.insert(  
        SET_COOKIE,  
        HeaderValue::from_str(&format!("{name}={value}; Path=/"))  
            .expect("to create header value"),  
    );  
    response_parts.headers = headers;  
    response.overwrite(response_parts);  
}  
  
#[cfg(feature = "ssr")]  
fn cookie(cx:Scope, name: &str) -> Option<String> {  
  
    let parts = match use_context::<leptos_axum::RequestParts>(cx){  
        None => return None,  
        Some(parts) => parts  
    };  
  
    let cookies_hv = match parts.headers.get("cookie") {  
        None => return None,  
        Some(cookies_hv) => cookies_hv.as_bytes()  
    };  
  
    let cookies_str = match std::str::from_utf8(cookies_hv) {  
        Ok(s) => s,  
        Err(_) => return None  
    };  
  
    let key_val_pairs: Vec<Vec<&str>> = cookies_str  
        .split("; ")  
        .map(|kv| kv.split("=").collect() )  
        .collect();  
  
    for kvp in key_val_pairs.iter() {  
        if kvp.get(0).map(|k|k.deref()) == Some( name ) {  
            return kvp.get(1).map(|k|k.to_string());  
        }  
    }  
    
    None  
}  
  
#[server(FormActionHandler)]  
async fn form_action_handler(cx: Scope) -> Result<(), ServerFnError> {  
    let secret_data = parse_secret_data( cx );  
    set_cookie(cx, "my-secret", &secret_data.0);  
    leptos_axum::redirect(cx, "/form-action-processed");  
    Ok(())  
}  
  
#[cfg(feature = "ssr")]  
fn parse_secret_data(cx: Scope) -> SecretData {  
    let parts = match use_context::<leptos_axum::RequestParts>(cx){  
        None => return SecretData::default(),  
        Some(parts) => parts  
    };  
  
    let body = match std::str::from_utf8(&parts.body) {  
        Err(_) => return SecretData::default(),  
        Ok(data) => data  
    };  
  
    let key_val_pairs: Vec<Vec<&str>> = body  
        .split('&')  
        .map(|kv| kv.split('=').collect() )  
        .collect();  
  
    for kvp in key_val_pairs.iter() {  
        match ( kvp.get(0).map(|k|k.deref()), kvp.get(1).map(|k|k.deref()) ) {  
            (Some("secret"), Some(data)) => return SecretData( data.to_string() ),  
            // ignore other fields and keep scanning
            _ => {}  
        }    
	}  
	
    SecretData::default()  
}  
  
#[component]  
fn FormPage(cx: Scope) -> impl IntoView {  
    view! { cx,  
        <form 
	        action="/api/src-app.rs-form_action_handler" 
	        method="post"
	    > 				 
            <input 
	            type="text" 
		        name="secret" 
		        id="secret" 
		        placeholder="Tell me a secret" 
			/>  
            <input 
	            type="submit" 
	            value="Send request" 
			/>  
        </form>  
    }
}  
  
#[cfg(feature = "ssr")]  
#[component]  
fn FormActionProcessed(cx: Scope) -> impl IntoView {  
    let secret = cookie(cx, "my-secret");  
    set_cookie(cx, "my-secret", "");  
    view!{cx,  
        "Server side response. This should display as a \  
        result of submitting the form. Your secret is: "  {secret}  
    }
}  
  
#[cfg(not(feature = "ssr"))]  
#[component]  
fn FormActionProcessed(cx: Scope) -> impl IntoView {  
    view!{cx, 
	    "Client side response. This should display as a result 
	    of submitting the form on the client."
	}
}
}

Server Functions

Official leptos_server documentation

A server function is a function that runs on the server. It can perform tasks or return values.

Design Pattern Aside: It's often a good idea to separate functions that do something from functions that return values. This is known as Command/Query Separation.
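As a toy illustration of that pattern (hypothetical names, nothing Leptos-specific):

// command/query separation in miniature
struct Counter {
    value: i32,
}

impl Counter {
    // command: performs a task, returns nothing
    fn increment(&mut self) {
        self.value += 1;
    }

    // query: returns a value, changes nothing
    fn value(&self) -> i32 {
        self.value
    }
}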

Isomorphic

When a server function is called in the context of the server, it will run as expected.

When a server function is called in the context of the client, it will dispatch a request to the server, which will in turn run the server function on the server. Any returned data will be transparently returned by the original function call on the client, as if it had done the work itself. The ability to call the same function with these different implementations is why Leptos is considered an isomorphic framework.

Why though?

You may ask, why do we care if we run code on the server or the client? It might feel like we should push as much code to the client as possible, so that we don't have a lot of server costs, and to cut down on the response time of requesting new data from the server. The answer to this question is always, "it depends."

There are some things we don't want to do on the client. A lot of these decisions come from the reality that the client is a multipurpose device with variable computational ability (slow) and we can't guarantee that users aren't doing anything sketchy on their clients (untrusted/unreliable).

By using server functions we can move computationally heavy tasks to the server, where they can also be cached. We can also move tasks like requests for data to a place where credentials to access that data are out of reach from the client.

An interesting side effect is that by doing more on the server we can often do the work once, cache it, and then serve it to multiple clients. This allows us to use less total power across our application's footprint (server and client). It may allow us to ship smaller client side applications, which also reduces waste. It also allows us to create a quality, inclusive experience for lower-power or older devices, encouraging people to keep those devices for longer.

Setup

Server functions are declared by prefixing a function with the #[server()] macro. The server macro's first argument should be the name you're giving to the server function. Here we've called it MyServerFunction

#![allow(unused)]
fn main() {
#[server(MyServerFunction)]
async fn my_server_fn(cx: Scope) -> Result<(), ServerFnError> {
	println!("You rang?");
	Ok(())
}
}

Source function specifications

  • The function must be async.
  • It must have serializable arguments.
  • It must return Result<T, ServerFnError>.
  • The Ok return type T must be serializable.
  • cx: Scope is an optional first parameter. It is a snapshot of the scope provided by the server and does not grant access to the reactive system like it does in components.

Server function naming

It's important to note that the name of the function my_server_fn and the name of the server function MyServerFunction are different. This allows us to address the prepared server function separately from its source function my_server_fn.

Server function registration

We now need to register the server function with Leptos. We do this by calling the register() method on the server function, in the main function. Thankfully we have the name of the server function as a handle to do this.

fn main(){
	_ = MyServerFunction::register();
}

The register method returns a Result<(), ServerFnError>. Rust will complain if the return value isn't handled, so we need to acknowledge that it exists and do something with it. We assign it to an underscore, which tells Rust, "We see this value and we're explicitly doing nothing with it... but we see it."

Server function dependencies

Server functions depend on the ability to serialize data sent from the client to the server function, and to deserialize the result of a server function on its return trip to the client. You will need to add serde to your cargo.toml as the serialization implementation.

serde = {version = "1.0.152", features = ["derive"] }

Calling server function

We previously stated that server functions must be async. But our component functions are synchronous. As you can see, there's a problem here. You cannot await an async function inside a synchronous function.

There are two bridges inside Leptos that allow the reactive synchronous system to interact with the asynchronous action system:

  • Resources: Read values
  • Actions: Dispatch side effects.

Resources and Actions can be used to bridge anything into the reactive system. While they're necessary for server functions, they are not limited to server functions.
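As a small sketch of the action bridge (assuming this runs inside a component where cx is in scope; the closure and input here are hypothetical), synchronous UI code can dispatch an async task and let the action track it:

// create_action takes a closure that borrows its input and returns a future
let save_action = create_action(cx, |input: &String| {
    let input = input.clone();
    async move {
        // any async work could happen here, such as calling a server function
        format!("saved: {input}")
    }
});

// later, from synchronous code such as an event handler:
save_action.dispatch("a secret".to_string());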

Reading data with resources

Official documentation

There are some things you will need to do to use data read from resources in your UI. Continue reading after this section for that information.

A resource is a signal, like any that you've made before, that has its value updated when its source data is updated. This allows the async task to run in the background and update the signal when it's ready, which our reactive system can react to without waiting for it.

A resource is created with the function create_resource, which takes three arguments:

  • cx — A scope context in which the resource will exist
  • source — A function that returns the arguments used for the fetcher.
  • fetcher — An async function (Future) which yields the resource's data when done.

Unary (one parameter) fetcher example

The following example shows how to set up a resource for a server function with one parameter. In this case, the function accepts a single 8-bit integer (u8) called some_number.

#![allow(unused)]
fn main() {
#[server(MyServerFunction)]  
async fn my_server_fn(some_number: u8) -> Result<u8, ServerFnError> {
	// stuff happens here
	Ok(42)
}
}

Note that we need to return Result types from server functions, so we have to wrap our return value in Ok().

The first step is to create the signal which contains the arguments used to call the server function.

#![allow(unused)]
fn main() {
let default_server_fn_args : u8 = 6;

let (
	my_server_fn_args, 
	set_my_server_fn_args
) = create_signal(
	cx, 
	default_server_fn_args
);

let server_fn_result = create_resource(  
    cx,    
    my_server_fn_args,    // the signal listened to for changes
    my_server_fn          // the fn run with the server args signal value
);
}

Resources for server functions with no parameters or more than one parameter.

Resource fetchers allow only one parameter. Signals can only contain one value. These constraints require us to wrap aspects of the resource config in closures so that the signatures (data types) of the arguments match the expected signatures of create_resource()'s parameters.

No parameters

A resource for a server function with no parameters can be given an empty closure as its source, and the fetcher can be wrapped in a closure with an unused parameter.

#![allow(unused)]
fn main() {
#[server(MyServerFunction)]  
async fn my_server_fn() -> Result<u8, ServerFnError> {
	Ok(1) 
}
}
#![allow(unused)]
fn main() {
let server_fn_result = create_resource(  
    cx,  
    ||{},    
    |_| my_server_fn()  
);
}

More than one parameter

Passing more than one parameter can be achieved by using a tuple whose elements match, in order, the arguments of the server function.

#![allow(unused)]
fn main() {
#[server(MyServerFunction)]  
async fn my_server_fn(x: u8, y: u8) -> Result<u8, ServerFnError> {
	Ok( x + y ) 
}
}
#![allow(unused)]
fn main() {
let default_server_fn_args : (u8, u8) = (46, 2);

let (
	my_server_fn_args, 
	set_my_server_fn_args
) = create_signal(
	cx, 
	default_server_fn_args
);

let server_fn_result = create_resource(  
    cx,    
    my_server_fn_args,          // the signal listened to for changes
    |(x,y)| my_server_fn(x,y)  // Add parens to destructure
);
}

Displaying async data in the UI with <Suspense>

Official documentation

The UI system won't know to wait for new UI data on its own. Leptos has a special <Suspense> component which wraps the use of our resources (signals tied to async values). Failing to do this results in a common error when reading resources from UI that isn't wrapped in a suspense. You can almost think of suspense as an automatically awaiting and resolving UI Future.

Common Error: "You’re trying to update a Signal<usize> that has already been disposed of. This is probably either a logic error in a component that creates and disposes of scopes, or a Resource resolving after its scope has been dropped without having been cleaned up."

Suspense has two different states:

  • Fallback/Pending: If any signals used in the children have unresolved resources, a fallback view will be displayed via the fallback property on the Suspense component tag.
  • Resolved: If all resources are resolved, the children of the Suspense component will be displayed.

The syntax is as follows:

#![allow(unused)]
fn main() {
let resource_that_takes_5_secs = create_resource(  
	cx,                  // our scope/context   
    ||{},                // a closure that returns fetcher params
    |_| wait_5_seconds() // a closure for the fetcher, which has no params
);

view!{cx,  
    <Suspense fallback=||"Loading...".to_string()>  
        { move || resource_that_takes_5_secs.read() }  
        "Loaded"  
    </Suspense>  
}
}

Above we created a <Suspense> container with a fallback closure. The closure must return an impl IntoView. Strings support IntoView, as do results of the view! macro. Then we move the resource into a block that can be re-run (a closure), calling read() on it. This hooks the resource into the suspense. The rule is: if a resource is used in the suspense, then the suspense must wait for all of its resources to be available before it renders its children.
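For example, the fallback can just as easily return a view! as a String. A sketch, reusing the resource from above:

#![allow(unused)]
fn main() {
view!{cx,
    <Suspense fallback=move || view!{cx, <p>"Loading..."</p>}>
        { move || resource_that_takes_5_secs.read() }
        "Loaded"
    </Suspense>
}
}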

#![allow(unused)]
fn main() {
use std::time::Duration;

async fn wait_5_seconds() {
    futures_timer::Delay::new(Duration::from_secs(5)).await;
}
}

We use the futures_timer crate (installed with cargo add futures_timer) so that we get proper async waiting. If we used thread::sleep(Duration::from_secs(5)) we'd end up with errors and the whole application would pause synchronously.

Writing data or dispatching side effects with actions

Action Official Documentation create_action Official Documentation

Actions allow us to make async calls in our synchronous reactive system. We can use actions to call server functions by making the server function call the action's task.

Actions are created with the create_action function which accepts two arguments:

  • cx — A scope context in which the action will be run
  • task — A function to run asynchronously. Arguments should always be passed by reference

An action's task is triggered by calling the dispatch method on the Action.

#![allow(unused)]
fn main() {
#[server(MyServerFunction)]  
async fn my_server_fn(x: u8, y: u8) -> Result<u8, ServerFnError> {
	Ok( x + y ) 
}
}
#![allow(unused)]
fn main() {
let my_action = create_action(cx, |&(x, y): &(u8, u8)| {
  my_server_fn(x, y)
});

let my_action_task_args = (46,2);
my_action.dispatch( my_action_task_args );
}

Actions have a few handy methods we can call on them aside from dispatch:

  • input — the argument the action is currently running with
  • pending — whether the call is pending
  • value — the most recent returned result
  • version — how many times the action has run. Useful for reactively updating something else in response to a dispatch and response
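As a quick sketch of how a couple of these might be used with the action above (the rendering here is illustrative, assuming the cx and my_action from the previous example):

#![allow(unused)]
fn main() {
let pending = my_action.pending();  // signal: is a dispatch in flight?
let value = my_action.value();      // signal: Option of the latest result

view!{cx,
    <p>{ move || if pending.get() { "Working..." } else { "Idle" } }</p>
    <p>"Last result: " { move || format!("{:?}", value.get()) }</p>
}
}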

<Transition />

#async-bridge

Official documentation

A transition is like suspense with one key difference: if the resources used in the children are updating (not initially loading), the current UI will stay in place until the updated resource is ready and new UI can be created. Transition also provides a pending or "updating" state that we can use to display loading messages.

Let's create a little example where we can add to a list of items. This is a pretty common pattern: update a parameter through interaction, query new data, and update the UI with the new data.

The async function

To start, we'll make our async function.

Our function will need a delay so that we can simulate network latency. We'll add a crate to our project by typing the following in the terminal when in the rust project folder:

cargo add futures_timer

We can now use futures_timer to provide a WASM friendly async delay. Using standard thread sleeping would sleep the main thread of the application, including its async runtime. We don't want that.

Our function will accept a number and generate an array of numbers from 1 up to the count, so the return type will be a Vec of numbers.

We'll also add a delay from the futures_timer crate and await it so that the time passes before we return with our new array of enumerated items.

#![allow(unused)]
fn main() {
async fn pretend_external_list_items(count: u32) -> Vec<u32> {  

    futures_timer::Delay::new( 
	    std::time::Duration::from_secs(1)
	).await;  
	
    (1..=count).collect()  
}
}

The component

Now we'll create our example component:

#![allow(unused)]
fn main() {
#[component]  
fn TransitionExample(cx: Scope) -> impl IntoView {  
	  
}    
}

We'll set up a resource like we did in the server functions tutorial.

#![allow(unused)]
fn main() {
#[component]  
fn TransitionExample(cx: Scope) -> impl IntoView {  
	
	// we create a signal that stores the size of the list 
    let (list_size, set_list_size) = create_signal(cx, 1_u32); 

	// we use the size as arguments for our 
	// function as a resource
    let list_items = create_resource(
	    cx, 
		list_size, 
		pretend_external_list_items
	);

	// we'll add the view in the next step
}
}

We'll now add a view to our component that uses our resource. We'll start with a <Transition> tag and add a fallback view, just like with <Suspense> items. Recall that Transition is like <Suspense> but with glitter.

<Transition>'s children will be set to a view that contains a <For /> tag to loop/iterate over the resource's value. Reading the resource returns an Option, so we'll need to read() it and then unwrap_or_default(). Then for the view we create output for each item as an HTML list item.

#![allow(unused)]
fn main() {
#[component]  
fn TransitionExample(cx: Scope) -> impl IntoView {  
	
	let (list_size, set_list_size) = create_signal(cx, 1_u32); 

	let list_items = create_resource(
	    cx, 
		list_size, 
		pretend_external_list_items
	);
	
	view!{cx,  
	    <Transition  
	        fallback=move||view!{cx, <p>"Loading"</p>}  
		> 
			{view!{cx,  
	            <ul>  
	                <For  
	                    each=move||list_items.read().unwrap_or_default()  
	                    key=move|item|item.clone()  
	                    view=move|item|view!{cx,<li>{item}</li>}  
	                />            
				</ul>  
		        }       
			}    
		</Transition>  
	}
}
}

To make this more interactive we'll add a button that increments the list size:

#![allow(unused)]
fn main() {
<button on:click=move|_| set_list_size.set( list_size.get() + 1 ) >  
    "Add an item"  
</button>
}

And we'll add a header that shows us how big the list should be. This is a nice indicator of the expectation set by the action, showing the delay between the header being updated and the list growing.

#![allow(unused)]
fn main() {
	<h2>"A list of " {list_size} </h2>
}

When assembled we're left with the following component:

#![allow(unused)]
fn main() {
#[component]  
fn TransitionExample(cx: Scope) -> impl IntoView {  
	
	let (list_size, set_list_size) = create_signal(cx, 1_u32); 

	let list_items = create_resource(
	    cx, 
		list_size, 
		pretend_external_list_items
	);
	
	view!{cx,  
		<h2>"A list of " {list_size} </h2>
		<button on:click=move|_| set_list_size.set( list_size.get() + 1 ) >  
		    "Add an item"  
		</button>
	    <Transition  
	        fallback=move||view!{cx, <p>"Loading"</p>}  
		> 
			{view!{cx,  
	            <ul>  
	                <For  
	                    each=move||list_items.read().unwrap_or_default()  
	                    key=move|item|item.clone()  
	                    view=move|item|view!{cx,<li>{item}</li>}  
	                />            
				</ul>  
		        }       
			}    
		</Transition>  
	}
}
}

The last piece of this example is adding a signal to hold a boolean indicating whether the transition component is updating (initially it won't be, because it will be loading rather than updating):

#![allow(unused)]
fn main() {
let (is_list_updating, set_is_list_updating) = create_signal(cx, false);
}

And we'll update the <Transition> component, setting the property to accept the signal.

#![allow(unused)]
fn main() {
<Transition  
    fallback=move||view!{cx, <p>"Loading"</p>}  
    set_pending=set_is_list_updating.into()  
>
}

The above into() call on set_is_list_updating might look weird to you. The optional property set_pending accepts a SignalSetter<bool>. We know that set_is_list_updating is a WriteSignal<bool>. Leptos provides a conversion that turns the WriteSignal into a SignalSetter, which we perform with the into() method.
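Spelled out, the conversion looks like this (a small sketch):

#![allow(unused)]
fn main() {
// WriteSignal<bool> -> SignalSetter<bool>, which is what set_pending expects
let pending_setter: SignalSetter<bool> = set_is_list_updating.into();
}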

We'll use our handy <Show> Leptos UI tag to conditionally display an updating message with a fallback of an empty view.

#![allow(unused)]
fn main() {
<Show  
    when=move || is_list_updating.get()  
    fallback=|_| ""  
>  
    <p><i>"Updating the list"</i></p>  
</Show>
}

When all is said and done we're left with a nice clear example of all of the great features wrapped up in <Transition>, showcasing how it works with some of Leptos' event handlers, signals, and UI tags.

#![allow(unused)]
fn main() {
use leptos::*;  
  
#[component]  
pub fn App(cx: Scope) -> impl IntoView {  
    view! {  
        cx,  
        <TransitionExample/>  
    }}  
  
#[component]  
fn TransitionExample(cx: Scope) -> impl IntoView {  
  
    let (list_size, set_list_size) = create_signal(cx, 1_u32);  
    let list_items = create_resource(cx, list_size, pretend_external_list_items);  
  
    let (is_list_updating, set_is_list_updating) = create_signal(cx, false);  
  
    view!{cx,  
        <h2>"A list of " {list_size} </h2>  
  
        <button on:click=move|_| set_list_size.set( list_size.get() + 1 ) >  
            "Add an item"  
        </button>  
  
        <Show  
            when=move || is_list_updating.get()  
            fallback=|_| ""  
        >  
            <p><i>"Updating the list"</i></p>  
        </Show>  
  
        <Transition  
            fallback=move||view!{cx, <p>"Loading"</p>}  
            set_pending=set_is_list_updating.into()  
        >           
			{ view!{cx,  
                <ul>  
                    <For  
                        each=move||list_items.read().unwrap_or_default()  
                        key=move|item|item.clone()  
                        view=move|item|view!{cx,<li>{item}</li>}  
                    />
				</ul>  
	            }
			}        
		</Transition>  
    }  
}  
  
async fn pretend_external_list_items(count: u32) -> Vec<u32> {  
    futures_timer::Delay::new( std::time::Duration::from_secs(1)).await;  
    (1..=count).collect()  
}
}

Form Actions with <ActionForm>

As we've seen from the other form lessons, working with forms can involve some work to hook things up. Leptos comes with a special component called an ActionForm which allows us to more easily set an action as a form handler. Better yet, the form will attempt to resolve client-side and fall back to the server. This is exactly what we want for progressively enhanced applications.

This article makes the assumption that you've reviewed server functions and have a basic cargo leptos setup.

Initial Setup

Before we can start we'll need to add Serde.
cargo add serde

We'll start by creating a route for our form page.

#![allow(unused)]
fn main() {
// app.rs

#[component]  
pub fn App(cx: Scope) -> impl IntoView {  
    view! {  
        cx,  
        <Router>  
            <Routes>  
                <Route path="" view=|cx| view! { cx, <FormPage/> }/>  
            </Routes>  
        </Router>  
    }}
}

We need to set up the Leptos component that will handle our root route with path "". We're calling it FormPage. This page will have a form that accepts a user's super secret message.

#![allow(unused)]
fn main() {
// app.rs

#[component]  
fn FormPage(cx: Scope) -> impl IntoView {  
    view! { cx,  
        <form>  
            <input type="text" name="secret" placeholder="Tell me a secret" />  
            <input type="submit" value="submit" />  
        </form>  
    }
}
}

The above component uses a standard html form element. We're going to upgrade this to Leptos' ActionForm element (note the capitalization).

#![allow(unused)]
fn main() {
// app.rs

#[component]  
fn FormPage(cx: Scope) -> impl IntoView {  
    view! { cx,  
        <ActionForm>  
            <input type="text" name="secret" placeholder="Tell me a secret" />  
            <input type="submit" value="submit" />  
        </ActionForm>  
    }
}   
}

Next we'll need to create a server action and provide it as the argument for the ActionForm's action parameter.

We'll start by creating a server action which will be called MyServerAction (1) and adding the action to our ActionForm (2) .

#![allow(unused)]
fn main() {
// app.rs

#[component]  
fn FormPage(cx: Scope) -> impl IntoView {  
	
	let my_form_action = create_server_action::<MyServerAction>(cx); // 1
	
	view! { cx,  
        <ActionForm action=my_form_action>  // 2
            <input type="text" name="secret" placeholder="Tell me a secret" />  
            <input type="submit" value="submit" />  
        </ActionForm>  
    }
}
}

We need to write the function that will process our server action. Note that the server function name MyServerAction is listed in the macro. This is where the server function receives its name (1). The name of the function itself is usually similar by convention, but that is not a requirement.

#![allow(unused)]
fn main() {
// app.rs

#[server(MyServerAction, "/api")] // 1 
pub async fn my_server_action(cx: Scope ) -> Result<(), ServerFnError> {  
    println!("Form submitted");  
    Ok(())  
}
}

And we'll register this server function (1) in our application's entry point. We'll also comment out the simple_logger (2) in the cargo leptos template so that we can more easily see the "Form submitted" log message from our server action MyServerAction.

// main.rs

#[cfg(feature = "ssr")]  
#[tokio::main]  
async fn main() {
	let _ = MyServerAction::register(); // 1
	
	// simple_logger::init_with_level(log::Level::Debug)
	//	.expect("couldn't initialize logging"); // 2

	// ...
	}

Using ActionForm Data

The really neat thing about Leptos ActionForm is that the names of fields become arguments to the server function. As a result, we can easily get the values of the form fields without needing to pick apart the request parts with something like use_context::<leptos_axum::RequestParts>(cx).

#![allow(unused)]
fn main() {
#[server(MyServerAction, "/api")]  
pub async fn my_server_action(cx: Scope, secret: String ) -> Result<(), ServerFnError> {  
    println!("Form submitted with {}", secret);  
    Ok(())  
}
}

Returning ActionForm Data

We can return data from our action form to use in our view by calling the value() method on the server_action (1). We'll print it out to the screen to get a better sense of how this all works together (2).

#![allow(unused)]
fn main() {
#[component]  
fn FormPage(cx: Scope) -> impl IntoView {  
  
    let my_form_action = create_server_action::<MyServerAction>(cx);  
  
    // value contains most recently-returned value  
    let form_value = my_form_action.value();  // 1
  
    view! { cx,  
        <ActionForm action=my_form_action>  
            <input type="text" name="secret" placeholder="Tell me a secret" />  
            <input type="submit" value="submit" />  
        </ActionForm>  
        "Returned from the server fn: " {form_value} // 2
    }}
}

We'll need our server function to actually return something for this to work. To do this we'll change the return type to Result<String, ServerFnError> and provide an Ok(String) as the last statement to hard code the return value.

#![allow(unused)]
fn main() {
#[server(MyServerAction, "/api")]   
pub async fn my_server_action(cx: Scope ) -> Result<String, ServerFnError> {  // 1
    println!("Form submitted");  
    Ok("Server function result".to_string())  // 2
}
}

We could combine these two ActionForm features to echo the secret back out.

#![allow(unused)]
fn main() {
#[server(MyServerAction, "/api")]
pub async fn my_server_action(cx: Scope, secret: String ) -> Result<String, ServerFnError> {  
    let echoed_string_from_server = format!("{} from server", secret);  
    Ok(echoed_string_from_server)  
}
}

Scope and Runtime

Intro to scopes

When we think about scope we can think about bounds and context. A chef would consider their kitchen to be a scope. It provides the context—ingredients, tools/implements, etc.—for them to do their work. If we look around we can make note of our own contexts and scopes as we observe our surroundings.

This same idea of scope exists in programming. If we look at a function definition we can see a sort of doorway or entry point through the function's parameters that allows us to pass arguments into the function body. Once we're in the function body we're in a scope, bounded by curly braces. Rust affords us the ability to access some things from outside the scope, but for the most part, what we do is contained within those bounds.

We would say that Rust is lexically scoped. This means that variables initialized within a scope—contained within curly braces {...}—exist within the scope they were created. We know from previous articles that Rust's solution to not having a garbage collector is to clean up anything within a scope once we exit it. The only way we can save information is to return it out of the scope with the return keyword, or by having it as the last expression in our scope.
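A quick illustration of lexical scoping in plain Rust:

fn main() {
	let outer = 1;
	let from_inner = {
		let inner = 2;
		// `inner` only lives inside these braces; the last
		// expression is returned out of the scope
		outer + inner
	};
	// `inner` has been dropped here, but its value escaped as `from_inner`
	println!("{from_inner}");
}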

You'll start to see small scopes all over the place. We use them for match arms, for if statements, for iterator bodies, etc.

The Problem

As you can imagine, creating things within a scope only to have them cleaned up later poses a bit of a problem for a web framework. In this article we'll cover a few core aspects of Leptos that answer the following questions:

1 - How can we create assignments (signals, contexts, etc) that can outlive a component's function scope?

2 - How can we clean up assignments (memory) if it's decoupled from a component's function scope?

By answering these questions you'll have a clear picture of how data is stored in Leptos' reactive system, what Scope is, and why it's a required argument to so many functions.

The solution

The two solutions are:

1 - How can we create assignments (signals, contexts, etc) that can outlive a component's function scope? We store the data outside of the component.

2 - How can we clean up assignments (memory) if it's decoupled from a component's function scope? We create a relationship between the component and its long-lived data store so that if a component gets cleaned up we can find the associated data and clean it up as well.

Runtime

At the core of each Leptos application is a runtime. A new runtime is created in the following cases:

1 - A client side application starts
2 - A web request is being handled
3 - A server function is being called

The runtime is a singleton (there is a single instance of it) that holds the current state of the reactive system. In it you will find (this is not exhaustive):

Reactive components that you created inside components:

  • A list of signals and signal_subscribers
  • A list of effects and effect_sources
  • A list of resources
  • A list of scope_contexts

Scopes created from instantiated Leptos components:

  • A list of the Scopes with references to reactive components created within their context
  • A list of each Scope's parents
  • A list of each Scope's children

By capturing the parents and children Leptos is able to create a graph.

Scope

When a component is created, a new Scope is created. This scope is actually created in the runtime and injected as the component's context cx. In conversational terms we say, "Hey reactive system/runtime. I've got a new component that I'm setting up. Can you give me a new scope?" It says, "Sure, I've got this thing called SlotMap that I'm going to use for this. It says the next number is 42. That's going to be your scope number." We actually get a runtime id along with the scope id. The runtime id is important for the server side handling because it helps us refer to the runtime handling our request instead of a neighbour's. This happens without developers knowing, but it helps us to understand what's going on. It also explains why we can copy scopes so easily: scopes only contain two numbers, which implement the Copy trait.

It may seem a little strange at first. I like to think of the Scope used in components as an index into a graph that mirrors my UI, only in the runtime this graph contains my reactive data.

Scope and runtime in use with create_signal

Let's take a look at how create_signal uses scopes, which use our active runtime, to store a value that can outlive the lexical scope.

When we say create_signal(cx, some_value), in conversational terms we're saying, "I want to create a signal for some_value. My context cx is a Scope that shows I'm using runtime 1 and the id of this scope is 3." Leptos says, "Cool cool. Ok, I'll make this signal and store it in my big list of signals. It's at id 42. Oh, I see you made this request in scope 3. I'm going to make a note here in my list of scopes that scope three has a signal with the id 42. That way if the scope gets cleaned up, I can clear the signal. Oh, and here are the read and write handlers you asked for (the return value of create_signal)."

This is the real key. When we create something in a scope, it actually gets added to the runtime and we annotate the scope with an id for reference. When the scope is cleaned up, we can clear all of the assets in its reference list.

Here's a more visual way of looking at it:

We set up the request to create a signal with a scope

MyComponent is called as Leptos renders a view!

Inside MyComponent(cx:Scope)
	// Leptos calls this function and provides a new scope as
	// the value of cx = (we'll say runtime: 1, id: 3)
	-> create_signal( cx, false ) 

For reference, a Scope is:

#![allow(unused)]
fn main() {
pub struct Scope {  
    pub runtime: RuntimeId,  
    pub id: ScopeId,  
}
}

create_signal does its work

I've got a Scope{runtime:1, id:3}
I'll request a signal from runtime

runtime does its work

I've got a request to make a signal
I'll create a new signal and store it in my `signals`
#![allow(unused)]
fn main() {
pub(crate) struct Runtime {  
    // ...
    // A fancy array that when you put something in, gives you the index out
	pub signals: RefCell<SlotMap<SignalId, Rc<RefCell<dyn Any>>>>
	// ...
}
}
I now have a location where I put the actual signal. It happens to be 42
I'll now update my ledger of "stuff associated with scopes".
I know the scope id was 3, so I'll update the item with id 3 from scopes and push in a reference to this new signal so that it can be cleaned up when necessary.
#![allow(unused)]
fn main() {
// leptos-reactive/runtime.rs

#[derive(Default)]  
pub(crate) struct Runtime {  
    // ...
    pub scopes: RefCell<SlotMap<ScopeId, RefCell<Vec<ScopeProperty>>>>,
    // ...
}
}
#![allow(unused)]
fn main() {
pub(crate) enum ScopeProperty {  
    Signal(SignalId),  
    Effect(EffectId),  
    Resource(ResourceId),  
}
}

Wrapping up

And that's it ^.^

To summarize:

  • A runtime stores all of the actual reactive data.
  • Scopes are numerical references that key into the associated reactive data in a runtime
  • When a scope is cleaned up, its associated reactive data is cleaned up
  • When a component is cleaned up, its scope is cleaned up

Passing data around your application

#accessing-data #state #scope #context

Applications can get complicated quickly. To combat this we often split functionality into multiple components and compose those components back together. This always seems like a good idea at first, and it often is, but we soon realize there are tradeoffs: we give up some simplicity when splitting our code into components. This lesson will focus on how to manage data across these new boundaries.

  1. passing data using component properties
  2. passing data by context (static)
  3. passing data by context (reactive)

Let's look at a silly example:

use leptos::*;

fn main() {
    mount_to_body(|cx|{
        let name = "Beans";
        let animal = "Cat";
        let age = "20 weeks";
        let fav_phrase = "meow?";

        view!{
            cx,
            <div class={animal}>
                <h2>{name}</h2>
                <ul>
                    <li>"Age: "{age}</li>
                    <li>"Says "{fav_phrase}</li>
                </ul>
            </div>
        }
    });
}

This example is a little trivial, but my hope is that it'll illustrate patterns that you can apply in your applications.

Let's pull out this little block into a separate component.

#![allow(unused)]
fn main() {
<ul>
	<li>"Age: "{age}</li>
	<li>"Says: "{fav_phrase}</li>
</ul>
}

We can copy this out and put it in a different view.

#![allow(unused)]
fn main() {
#[component]
fn Details(cx:Scope) -> impl IntoView {
    view!{
        cx,
        <ul>
            <li>"Age: "{age}</li>
            <li>"Says: "{fav_phrase}</li>
        </ul>
    }
}
}

And we'll replace the extracted html with our component.

#![allow(unused)]
fn main() {
//...
let age = "20 weeks";
let fav_phrase = "meow?";
view!{
	cx,
	<div class={animal}>
		<h2>{name}</h2>
		<Details />
	</div>
}
}

We can immediately see a problem. How do we get age and fav_phrase into our new component? Our main component got smaller and cleaner, which feels like a step forward, but now we need to create properties for this new Details component so that we can pass our age and fav_phrase in. Be aware that our code is getting more complicated by doing this. Not everything needs to be split into unique individual components. :)

Passing data using component properties

Step one is to add our properties to the Leptos component tag. I've deliberately named the properties on the <Details /> component differently from their variable names to disambiguate how this all wires together. I also changed age to an unsigned 8-bit integer to add some variation to the property types.

We lose shared scope and immediate access to data ( - simplicity ) but we gain encapsulation, reuse, and composability ( + decoupling/portability )

#![allow(unused)]
fn main() {
let age = 20_u8;
let fav_phrase = "meow?";
view!{
	cx,
	<div class={animal}>
		<h2>{name}</h2>
		<Details the_age_in_weeks={age} the_phrase={fav_phrase}/>
	</div>
}
}

Next we need to update our component to accept those properties. The property names become arguments in the Leptos component's function.

#![allow(unused)]
fn main() {
#[component]
fn Details( 
	cx:Scope, 
	the_phrase: &'static str, 
	the_age_in_weeks: u8
) -> impl IntoView {

	view!{
        cx,
        <ul>
            <li>"Age: "{age} weeks</li>
            <li>"Says: "{phrase}</li>
        </ul>
    }
}
}

The properties listed in the component do not need to be in the same order as the Leptos component function's parameters.
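For example, this works just as well (a quick sketch with the properties in the opposite order):

#![allow(unused)]
fn main() {
<Details the_phrase={fav_phrase} the_age_in_weeks={age} />
}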

Passing data using context

There are situations that will arise where one component may have multiple child components and some child deep in the hierarchy may require a piece of data from one of its ancestors. If we were using properties we would need to add a new property and Leptos component function parameter for that data in every single component in the lineage.

Ancestor (has data)
  ↳ Child (needs to accept data to pass to child)
	  ↳ Child (needs to accept data to pass to child)
		  ↳ Child (needs data)

The nice thing about component properties is that they're visible and declarative. When you write the component, it makes its dependencies clear as part of its definition. Unfortunately that isn't going to work here so we're going to make a trade-off.

We lose declarative data dependencies ( - clarity ) but we gain position independence ( + decoupling/portability )

If we look at the component hierarchy illustrated above we'll notice something—all of the components are passing a shared piece of data down. We're saying, "Everything in this tree needs to carry the data down to the child that needs it." It might already be hitting you that we already do this with scope! Each Leptos component accepts its predecessor's scope.

#![allow(unused)]
fn main() {
fn Details( 
	cx:Scope,  // <--  here's our scope, a window into the reactive system
	the_phrase: &'static str, 
	the_age_in_weeks: u8
) -> impl IntoView {

	view!{
        cx,
        <ul>
            <li>"Age: "{age} weeks</li>
            <li>"Says: "{phrase}</li>
        </ul>
    }
}

}

To solve the above problem we can reserve a part of the scope for our data. We call this context. It's named context because it forms part of the context in which the application runs. This is also likely why the parameter cx is used as the name for Scope in Leptos component functions.

Using context requires two steps:

1 - We set up the context, adding it to our scope/reactive system.
2 - We use the context, accessing it from our scope/reactive system.

Setting up context

You may be thinking, "How does Leptos know where to put my special piece of data?" In a lot of traditional systems you would provide a key and then store a value. Leptos' scope uses types as the identifier instead of a key. This makes the system efficient and safer. It saves us from comparing keys or using a hash map. It does however pose a limitation—you must make a unique type for each item you want to store in the scope as context.

We're creating a struct called PetDetails here to hold our context data. Here we're using a tuple struct, but you could just as well use a struct with named fields.

Important: Context should always be created/provided at a higher level of the hierarchy and passed down. Do not create a context in a child and consume it from parents/ancestors.

#![allow(unused)]
fn main() {
use leptos::*;

#[derive(Clone)]
struct PetDetails(String, u8);

// alternate, with named fields (use one or the other —
// two structs can't share the same name in one scope)

#[derive(Clone)]
struct PetDetails{
	phrase: String,
	age: u8,
}
}

Now we'll use the function provide_context with our scope cx and our PetDetails. Note that our <Details /> component in the view! macro has no properties.

fn main() {
    mount_to_body(|cx|{
        let name = "Beans";
        let animal = "Cat";
        let age = 20_u8;
        let fav_phrase = "meow?!";
        
        provide_context(
            cx,
            PetDetails(
                fav_phrase.to_string(), 
                age
            )
        );
        
        view!{
            cx,
            <div class={animal}>
                <h2>{name}</h2>
                <Details />
            </div>
        }
    });
}

Accessing/using context

We can update our details component to pull the context out of the scope by using the turbofish syntax to specify the type, ::<PetDetails>.

#![allow(unused)]
fn main() {
#[component]
fn Details(cx:Scope)-> impl IntoView {
    let data = use_context::<PetDetails>(cx).unwrap();
    let phrase = data.0;
    let age = data.1;
    view!{
        cx,
        <ul>
            <li>"Age: "{age} "weeks"</li>
            <li>"Says: "{phrase}</li>
        </ul>
    }
}

}

An important thing to be aware of is that contexts are not signals. They are values. They are not inherently reactive. Reactivity requires a value that can be recalculated. A context is just a view into our reactive system, i.e. the scope.

Passing reactive data using context

Context is just a value stored in the scope. It is not inherently reactive. You can however store signals as context to gain the ability to embed reactive values or update those values deeper in the hierarchy. You can think of it as embedding a "getter" and "setter" as things you can pass throughout your system, whereas before we were passing the actual value. Here we're passing an interface to the value.

Here's an example of how we might initialize this:

#![allow(unused)]
fn main() {
use leptos::*;  
  
#[derive(Copy,Clone)]  
struct MyReactiveContext(ReadSignal<u8>, WriteSignal<u8>); // 1
 
}

(1) Here we create a struct that is a tuple with a read and write signal. These are our interface to the reactive values. They are not the values, but can be turned into the values. We will be able to provide a context with the MyReactiveContext type, which will store these two signals.

Next we'll provide the context to our scope and subsequent child scopes:

fn main() {  
    mount_to_body(|cx|{  
        let (reader, writer)  = create_signal( cx, 0_u8 );  // 1
        provide_context(  // 2
            cx,  // 3
            MyReactiveContext(reader, writer) // 4
        );  
        view!{  
            cx,  
            "Root: " {reader} // 5
        }    
	});
}

(1) We initialize the signal with a value. Its type is an unsigned 8-bit integer with a value of 0. In Rust we can add type suffixes as a shorthand to embed the type in number literals. (2) We create context through provide_context() by giving the function our current scope cx (3) and our data (4). Recall that provide_context will reserve a unique space for the MyReactiveContext type. (5) We output the value for debugging and visualization.

Now let's add a child component and update the value from inside the child to see how our change can impact parent components.

#![allow(unused)]
fn main() {
#[component]  
fn ChildOne(cx:Scope)-> impl IntoView {  
    let my_reactive_context = use_context::<MyReactiveContext>(cx).unwrap(); // 1
    let reader = my_reactive_context.0;  // 2
    let writer = my_reactive_context.1;  // 3
    writer.set(1);  // 4 
    view!{  
        cx,  
        "- Child One: " {reader}  // 5
    }
}
}

(1) We grab data from our scope (which contains anything provided in its ancestral lineage) using use_context. We can query our specific data by providing the type in a turbofish ::<MyReactiveContext>. The use_context function requires a scope to look into (cx) as an argument. (2) For simplicity we'll pull the reader and (3) writer out of the tuple struct. (4) Let's update the reactive value to see where things change ^.^ (5) We'll output the value for debug visualization.

The last step here is to add this component to our main view!

#![allow(unused)]
fn main() {
view!{  
	cx,  
	"Root: " {reader} <br />
	<ChildOne />
}  
}

All together our application looks like this:

use leptos::*;

#[derive(Copy, Clone)]
struct MyReactiveContext(ReadSignal<u8>, WriteSignal<u8>);

fn main() {  
    mount_to_body(|cx|{  
        let (reader, writer)  = create_signal( cx, 0_u8 ); 
        provide_context(  
            cx,  
            MyReactiveContext(reader, writer)
        );  
        view!{  
            cx,  
            "Root: " {reader} "<br />"
			<ChildOne />
        }    
	});
}

#[component]  
fn ChildOne(cx:Scope)-> impl IntoView {  
    let my_reactive_context = use_context::<MyReactiveContext>(cx).unwrap();
    let reader = my_reactive_context.0;
    let writer = my_reactive_context.1;
    writer.set(1);
    view!{  
        cx,  
        "- Child One: " {reader}
    }
}

If we run this with trunk serve we'll end up with a web page that contains:

Root: 1  
- Child One: 1

Magic! We've used a component deeper in our hierarchy to update its parent!

This is the beauty of context and signals. They allow us to pass a capability or interface around our application. Be careful not to overuse context. If you can use properties they're almost always a better choice. That said, there are times you'll need a way to access state across the application and context is there to make it safe and easy.

Initiative Tracker

A tutorial project in which we build an initiative and game tracker for a role playing game like Dungeons and Dragons.

Chat

Structuring Applications

You have two options:

  1. Make a new crate from inside /src of a crate and load it as a module in cargo.toml
  2. Split up code into modules, and load those modules from a folder that becomes the namespace via a mod.rs file in a folder of the aforementioned name

Crates

Important Distinctions

  • library crates expose functions that other crates can call and use
  • binary crates are meant to be run on their own.

Read The Cargo Book — https://web.mit.edu/rust-lang_v1.25/arch/amd64_ubuntu1404/share/doc/rust/html/cargo/index.html

Takeaways

  • Cargo.toml

    • in a binary crate, put cargo.lock in git so that it rebuilds with the same dependency versions
    • cargo upgrade // update dependencies
    • Versioning (semver)
      • Before you reach 1.0.0, anything goes, but if you make breaking changes, increment the minor version. In Rust, breaking changes include adding fields to structs or variants to enums.
      • After 1.0.0, only make breaking changes when you increment the major version. Don’t break the build.
      • After 1.0.0, don’t add any new public API (no new pub anything) in tiny versions. Always increment the minor version if you add any new pub structs, traits, fields, types, functions, methods or anything else.
    • Profiles can be created to determine how cargo run acts
    • You can create config dependent dependencies
  • Create sub crates by running cargo new cratename --lib

    • Make a requirement in cargo.toml with cratename = { path = "cratename" }
    • Use the crate in your main project with extern crate cratename;

Modules

Read — https://doc.rust-lang.org/rust-by-example/mod.html

Takeaways

  • Make a mod with the syntax mod modname{...}
    • expose things to the public with pub keyword
    • use them via modname::modfunctionname()
  • Modules can be nested
  • Modules can have visibility for their fields too
  • use binds the last component of a path as the accessible name
    • use foo::bar::baz
    • use foo::bar::{baz,bazsibling}
    • use foo::bar::baz as renamedbaz
  • use statements can be used in block scopes for more concise code
  • Dynamic module roots
    • self:: refers to current scope as root
    • super:: refers to parent scope as root
    • crate:: refers to crate root as scope
  • A module can be loaded into a *.rs file with mod modname
    • rust will look for modname.rs and modname/mod.rs
    • public components will be exposed on the name space
    • A common pattern is
      • A module at modfolder/somemod.rs with public components
      • A module at modfolder/mod.rs with pub mod somemod; which brings somemod into the scope of mod.rs
      • Use the module in main.rs with mod modfolder; and calling modfolder::somemod::somemodfn() (see the sketch after this list)
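Here's a minimal sketch of that common pattern (the file and function names are illustrative):

// src/modfolder/somemod.rs
pub fn somemodfn() {
	println!("hello from somemod");
}

// src/modfolder/mod.rs
pub mod somemod;

// src/main.rs
mod modfolder;

fn main() {
	modfolder::somemod::somemodfn();
}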

Workspaces

Create an environment with multiple crates which depend on each other but keep them separate for faster compilation, separation, etc.

Read - https://doc.rust-lang.org/book/ch14-03-cargo-workspaces.html

Takeaways

  • Workspaces allow for efficient code reuse because you can have multiple distinct crates in a single work space
  • Create a folder and put a cargo.toml in it
   [workspace] 
   members = [ 
   	"some-binary-crate", 
   	"some-lib-crate", 
   	"another-lib-crate"
   ]
  • Create the crate AFTER specifying its membership or you'll get warnings
cargo new some-lib-crate --lib
  • Specify dependencies on other crates in the workspace with the following cargo.toml entry
some-lib-crate = { path = "../some-lib-crate" }
  • Use bash cargo run -p some-binary-crate in the workspace folder to run the binary crate. -p specifies package
  • You can run tests with bash cargo test -p some-library-crate as well

Documenting Applications

Documentation with rust is easy and amazing. Documentation comments support markdown.

Docs can be created 2 ways:

  1. rustdoc commands
    • Manually trigger documentation authoring, destination, etc
  2. cargo doc
    • Docs are created using the src/lib.rs and placed in the /target folder
    • You probably want to build docs with cargo doc :)

There are two types of doc line starts:

  1. /// (three slashes)
    • Standard documentation comments
  2. //! (double slash bang)
    • Must come before any element in its scope
    • Used for documenting a crate, or just inside a struct, fn, etc.
    • Think of it as a block preamble that describes a scope

Good documentation includes:

  1. A short sentence explaining what is happening
  2. A code example that users can copy/paste to try it
  3. Advanced explanations if necessary

Fun fact: documentation comments are syntactic sugar for the #[doc] attribute.
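Concretely, these two declarations are equivalent (a small sketch):

#![allow(unused)]
fn main() {
/// Adds two numbers.
pub fn add(a: i32, b: i32) -> i32 { a + b }

#[doc = " Adds two numbers."]
pub fn add_with_attribute(a: i32, b: i32) -> i32 { a + b }
}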

Read - https://doc.rust-lang.org/rustdoc/what-is-rustdoc.html

Takeaways

  • Create docs with rustdoc src/lib.rs --crate-name docFolderNameHere
    • src/lib.rs is generally the entry point, so we create docs for the sub-components by starting here (as the entry point), listed as the first argument in the rustdoc command
  • Certain documentation comments have restrictions on where they can be used:
#![allow(unused)]
fn main() {
//! Inner documentation comments can go here

/// This first line will be used as a summary of the 
/// function in the doc index file
pub fn foo() -> i32 {
	//! Inner documentation can go here
	let n = 10;
	{
		//! Inner documentation can go here too
		let _n = 15;
	}
	// Double slash is a full comment (ignored by the compiler).
	// We can not place an inner doc comment here because it is 
	// not the first comment in the block scope.
	n
}
// Standard (outer) documentation comments like /// must come
// directly before the item they document.
}
  • You can link to documentation pages with [....] like in standard markdown. For example [foo()] or /// [crate::foo()]
  • Documentation can contain test that can be run via rustdoc src/lib.rs --test
    • You can hide lines in test from being printed to the documentation by adding the pound sign /// # this documentation comment is hidden from output but will be compiled
    • Codeblocks for compiled rust in documentation both start and end with /// ``` (three slashes to denote the documentation target and then three back ticks to denote the opening of a code block)
    • Modifiers can be added to opening code block lines to express compilation intent or to change how they're run
      • /// ```ignore - doesn't get compiled
      • /// ```should_panic
      • /// ```no_run - compiled but not run. Useful when documenting api calls
      • /// ```compile_fail
    • The ? operator will return early with an error if one occurs, so wrap the example in a function that handles the error. We can hide the wrapper lines from the rendered docs with the # symbol.
/// ```
/// use std::io;
/// # fn main() -> io::Result<()>{
/// let mut input = String::new();
/// io::stdin().read_line( &mut input)?;
/// # Ok(())
/// # }
/// ```
  • Compiler annotations can be used in this context to specify platform-specific features too, but that's not too important right now. :)

Testing Applications (Unit Tests)

Tests can be run in 2 ways:

  1. In test modules (or as functions)
  2. Documentation tests

Tests can:

  1. Test assertions
  2. Test panics
  3. Test Result errors

Important distinctions:

  • Libraries (contains lib.rs) - Integration tests should be in the crateRoot/tests directory
  • Binaries without a Library (no lib.rs) - Binaries do not export functionality and cannot be imported with extern crate syntax. This is why it's recommended to have a lib.rs which exports functionality so that it can be tested and also consumed by the main.rs binary entry point (see the sketch below).
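A minimal sketch of that layout (the crate name my_crate and the add function are made up for illustration):

// src/lib.rs — export functionality so it can be tested and reused
pub fn add(a: i32, b: i32) -> i32 {
	a + b
}

// src/main.rs — the binary consumes the library crate by its package name
fn main() {
	// `my_crate` stands in for whatever your package is named
	println!("{}", my_crate::add(2, 2));
}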

Unit Tests

  • Reading Testing — https://doc.rust-lang.org/1.30.0/book/2018-edition/ch11-00-testing.html
  • Reading Rustdoc tests - https://doc.rust-lang.org/rustdoc/documentation-tests.html

Takeaways

  • A basic test looks like this
    • A cfg annotation macro that says to only run if in test mode
    • A mod to namespace/group the tests, encapsulating them
    • A test macro that sets up the function as a test
    • The assertion to test
#![allow(unused)]
fn main() {
  #[cfg(test)]
  mod test {
    #[test]
    fn it_works() {
        assert_eq!(2 + 2, 4);
    }
  }
}
  • Writing use super::* will bring the parent scope into the tests module for use
  • Useful macros include the following. Only the test case, a, and b arguments are required; the rest provide custom error messaging
#![allow(unused)]
fn main() {
  assert!(testCase,errorMessage, errorMsgTemplateValues...); 
  assert_eq!(a,b,errorMessage, errorMsgTemplateValues...); 
  assert_ne!(a,b,errorMessage, errorMsgTemplateValues...);
}
  • Panic states can be tested by adding the #[should_panic] attribute after #[test] and before the function definition (see the sketch at the end of this list)
  • A function that returns a result doesn't require an assertion. The returned result will be tested and the Err returned will make the test fail
#![allow(unused)]
fn main() {
  fn it_works() -> Result<(), String> {
      if 2 + 2 == 4 {
          Ok(())
      } else {
          Err(String::from("two plus two does not equal four"))
      } 
  } 
}
  • Tests run in parallel by default; the number of threads can be set with cargo test -- --test-threads=2. The -- separates cargo's own arguments from those passed to the test binary
  • Examples included in rustdoc doc blocks are run as tests if you run rustdoc --test src/filetotest.rs
#![allow(unused)]
fn main() {
// (two slashes is a regular comment, three is a documentation comment)

/// ```
/// let x = 5;
/// ```

// or 

/// ```should_panic
/// assert!(false);
/// ```
}
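Here's the #[should_panic] sketch mentioned above:

#![allow(unused)]
fn main() {
#[cfg(test)]
mod test {
	#[test]
	#[should_panic]
	fn it_panics() {
		panic!("this test passes because it panics");
	}
}
}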