This crate provides the `quote!` macro for turning Rust syntax tree data structures into tokens of source code.
Procedural macros in Rust receive a stream of tokens as input, execute arbitrary Rust code to determine how to manipulate those tokens, and produce a stream of tokens to hand back to the compiler to compile into the caller's crate. Quasi-quoting is a solution to one piece of that — producing tokens to return to the compiler.
The idea of quasi-quoting is that we write code that we treat as data. Within the `quote!` macro, we can write what looks like code to our text editor or IDE. We get all the benefits of the editor's brace matching, syntax highlighting, indentation, and maybe autocompletion. But rather than compiling that as code into the current crate, we can treat it as data, pass it around, mutate it, and eventually hand it back to the compiler as tokens to compile into the macro caller's crate.
This crate is motivated by the procedural macro use case, but is a general-purpose Rust quasi-quoting library and is not specific to procedural macros.
```toml
[dependencies]
quote = "1.0"
```
Version requirement: Quote supports rustc 1.31 and up.
The quote crate provides a `quote!` macro within which you can write Rust code that gets packaged into a `TokenStream` and can be treated as data. You should think of `TokenStream` as representing a fragment of Rust source code.
Within the `quote!` macro, interpolation is done with `#var`. Any type implementing the `quote::ToTokens` trait can be interpolated. This includes most Rust primitive types as well as most of the syntax tree types from `syn`.
```rust
let tokens = quote! {
    struct SerializeWith #generics #where_clause {
        value: &'a #field_ty,
        phantom: core::marker::PhantomData<#item_ty>,
    }

    impl #generics serde::Serialize for SerializeWith #generics #where_clause {
        fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
        where
            S: serde::Serializer,
        {
            #path(self.value, serializer)
        }
    }

    SerializeWith {
        value: #value,
        phantom: core::marker::PhantomData::<#item_ty>,
    }
};
```
Repetition is done using `#(...)*` or `#(...),*` similar to `macro_rules!`. This iterates through the elements of any variable interpolated within the repetition and inserts a copy of the repetition body for each one. The variables in an interpolation may be anything that implements `IntoIterator`, including `Vec` or a pre-existing iterator.
- `#(#var)*`: no separators
- `#(#var),*`: the character before the asterisk is used as a separator
- `#( struct #var; )*`: the repetition can contain other things
- `#( #k => println!("{}", #v), )*`: even multiple interpolations

Note that there is a difference between `#(#var ,)*` and `#(#var),*`: the latter does not produce a trailing comma. This matches the behavior of delimiters in `macro_rules!`.
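For instance, here is a minimal sketch (not taken from the quote docs; the `fields` variable is hypothetical) of interpolating a `Vec` of identifiers inside a repetition, using `,*` to separate the elements:

```rust
use quote::{format_ident, quote};

// Hypothetical field names to repeat over; any IntoIterator of ToTokens
// values would work here, including a pre-existing iterator.
let fields = vec![format_ident!("a"), format_ident!("b")];

let tokens = quote! {
    // `#(...),*` puts a comma between the elements: `a: f64, b: f64`.
    struct Point { #(#fields: f64),* }
};
```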
The `quote!` macro evaluates to an expression of type `proc_macro2::TokenStream`. Meanwhile Rust procedural macros are expected to return the type `proc_macro::TokenStream`.
The difference between the two types is that `proc_macro` types are entirely specific to procedural macros and cannot ever exist in code outside of a procedural macro, while `proc_macro2` types may exist anywhere including tests and non-macro code like main.rs and build.rs. This is why even the procedural macro ecosystem is largely built around `proc_macro2`, because that ensures the libraries are unit testable and accessible in non-macro contexts.
There is a `From`-conversion in both directions so returning the output of `quote!` from a procedural macro usually looks like `tokens.into()` or `proc_macro::TokenStream::from(tokens)`.
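As a minimal sketch of what this looks like at the macro boundary (the derive name `MyDerive` is hypothetical and input parsing is omitted):

```rust
use proc_macro::TokenStream;
use quote::quote;

#[proc_macro_derive(MyDerive)]
pub fn my_derive(_input: TokenStream) -> TokenStream {
    // quote! produces a proc_macro2::TokenStream...
    let expanded = quote! {
        // ...generated items would go here...
    };

    // ...which is converted to proc_macro::TokenStream when handing it back.
    proc_macro::TokenStream::from(expanded)
}
```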
Usually you don't end up constructing an entire final `TokenStream` in one piece. Different parts may come from different helper functions. The tokens produced by `quote!` themselves implement `ToTokens` and so can be interpolated into later `quote!` invocations to build up a final result.
```rust
let type_definition = quote! {...};
let methods = quote! {...};

let tokens = quote! {
    #type_definition
    #methods
};
```
Suppose we have an identifier `ident` which came from somewhere in a macro input and we need to modify it in some way for the macro output. Let's consider prepending the identifier with an underscore.
Simply interpolating the identifier next to an underscore will not have the behavior of concatenating them. The underscore and the identifier will continue to be two separate tokens as if you had written `_ x`.
```rust
// incorrect
quote! {
    let mut _#ident = 0;
}
```
The solution is to build a new identifier token with the correct value. As this is such a common case, the `format_ident!` macro provides a convenient utility for doing so correctly.
```rust
let varname = format_ident!("_{}", ident);
quote! {
    let mut #varname = 0;
}
```
Alternatively, the APIs provided by Syn and proc-macro2 can be used to directly build the identifier. This is roughly equivalent to the above, but will not handle `ident` being a raw identifier.
```rust
let concatenated = format!("_{}", ident);
let varname = syn::Ident::new(&concatenated, ident.span());
quote! {
    let mut #varname = 0;
}
```
Let's say our macro requires some type specified in the macro input to have a constructor called `new`. We have the type in a variable called `field_type` of type `syn::Type` and want to invoke the constructor.
```rust
// incorrect
quote! {
    let value = #field_type::new();
}
```
This works only sometimes. If `field_type` is `String`, the expanded code contains `String::new()` which is fine. But if `field_type` is something like `Vec<i32>` then the expanded code is `Vec<i32>::new()` which is invalid syntax. Ordinarily in handwritten Rust we would write `Vec::<i32>::new()` but for macros often the following is more convenient.
```rust
quote! {
    let value = <#field_type>::new();
}
```
This expands to `<Vec<i32>>::new()` which behaves correctly.
A similar pattern is appropriate for trait methods.
```rust
quote! {
    let value = <#field_type as core::default::Default>::default();
}
```
Any interpolated tokens preserve the `Span` information provided by their `ToTokens` implementation. Tokens that originate within a `quote!` invocation are spanned with `Span::call_site()`.
A different span can be provided explicitly through the `quote_spanned!` macro.
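For example, the following sketch (assuming `field_type` is a `syn::Type` taken from the macro input) spans a generated assertion with the type's own span, so that a missing `Sync` bound is reported at the user's type rather than inside the macro:

```rust
use quote::quote_spanned;
use syn::spanned::Spanned;

let span = field_type.span();
let assert_sync = quote_spanned! {span=>
    // Errors caused by this bound point at `field_type` in the caller's code.
    struct _AssertSync where #field_type: Sync;
};
```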
When using `quote` in a build.rs or main.rs and writing the output out to a file, consider having the code generator pass the tokens through rustfmt before writing (either by shelling out to the `rustfmt` binary or by pulling in the `rustfmt` library as a dependency). This way if an error occurs in the generated code it is convenient for a human to read and debug.
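A minimal sketch of the shelling-out approach, assuming a build script that writes the generated tokens to a file named generated.rs in OUT_DIR (the function and file name are illustrative, not part of any API):

```rust
use std::{env, fs, path::PathBuf, process::Command};

fn write_generated(tokens: proc_macro2::TokenStream) -> std::io::Result<()> {
    let path = PathBuf::from(env::var("OUT_DIR").unwrap()).join("generated.rs");
    fs::write(&path, tokens.to_string())?;

    // Best effort: if the rustfmt binary is not installed, keep the raw output.
    let _ = Command::new("rustfmt").arg(&path).status();
    Ok(())
}
```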
Be aware that no kind of hygiene or span information is retained when tokens are written to a file; the conversion from tokens to source code is lossy.