The run-time speed and memory usage of programs written in Rust should be about the same as that of programs written in C, but the overall programming style of these languages is different enough that it's hard to generalize about their speed. This is a summary of where they're the same, where C is faster, and where Rust is faster.

Disclaimer: It's not meant to be an objective benchmark uncovering indisputable truths about these languages. There's a significant difference between what these languages can achieve in theory, and how they're used in practice. This particular comparison is based on my own subjective experience that includes having deadlines, writing bugs, and being lazy. I've been using Rust as my main language for over 4 years, and C for a decade before that. I'm specifically comparing to just C here, as a comparison with C++ would have many more "ifs" and "buts" that I don't want to get into.

In short:

My overall feeling is that if I could spend infinite time and effort, my C programs would be as fast as or faster than Rust, because theoretically there's nothing Rust can do that C can't. But in practice C has fewer abstractions, a primitive standard library, a dreadful dependency situation, and I just don't have the time to reinvent the wheel, optimally, every time.

Both are "portable assemblers"

Both Rust and C give control over the layout of data structures, integer sizes, stack vs heap memory allocation, pointer indirections, and generally translate to understandable machine code with little "magic" inserted by the compiler. Rust even admits that bytes have 8 bits and signed integers can overflow!
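
As a minimal sketch of that control (the `Rgba` name and the `boxed` helper are made up for this example):

```rust
// Field order, size, and alignment follow the same rules as a C struct,
// so this type can be passed across the FFI boundary by value or by pointer.
#[repr(C)]
#[derive(Clone, Copy)]
pub struct Rgba {
    pub r: u8,
    pub g: u8,
    pub b: u8,
    pub a: u8,
}

// Integer widths are explicit (u8/i32/u64, ...), and stack vs heap is explicit:
// an `Rgba` lives wherever you put it, while `Box<Rgba>` is a single heap pointer.
pub fn boxed(px: Rgba) -> Box<Rgba> {
    Box::new(px)
}
```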

Even though Rust has higher-level constructs such as iterators, traits, and smart pointers, they're designed to predictably optimize to straightforward machine code (AKA "zero-cost abstractions"). The memory layout of Rust's types is simple, e.g. growable strings and vectors are exactly {byte*, capacity, length}. Rust doesn't have any concept like move or copy constructors, so passing objects is guaranteed to be no more complicated than passing a pointer or a memcpy.
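
A quick way to check those claims, assuming a typical target where pointers and usize are the same size:

```rust
use std::mem::size_of;

fn main() {
    // Vec and String are exactly three machine words: {pointer, capacity, length}.
    assert_eq!(size_of::<Vec<u8>>(), 3 * size_of::<usize>());
    assert_eq!(size_of::<String>(), 3 * size_of::<usize>());

    // Iterator chains like this are the "zero-cost abstractions": they typically
    // optimize down to the same loop you'd write by hand in C.
    let sum: u64 = (0..1000u64).filter(|n| n % 3 == 0).sum();
    assert_eq!(sum, 166_833);
}
```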

Borrow-checking is only a compile-time static analysis. It doesn't do anything at run time, and lifetime information is completely stripped out before code generation. There's no autoboxing or anything clever like that.
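
A tiny sketch of what gets erased (the explicit `'a` is only there for illustration; it could be elided):

```rust
// Lifetime annotations exist only for the compile-time borrow check. They're
// erased before code generation, so this compiles to the same machine code as
// a C function returning a pointer into the array.
pub fn first<'a>(items: &'a [u32]) -> Option<&'a u32> {
    items.first()
}
```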

One case where Rust falls short of being a "dumb" code generator is unwinding. While Rust doesn't use exceptions for normal error handling, a panic (an unhandled fatal error) may optionally behave like a C++ exception. Unwinding can be disabled at compile time (panic = "abort"), but even then Rust doesn't like to be mixed with C++ exceptions or longjmp.
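
When Rust code is called from C, one common way to deal with this is to catch the panic before it can unwind across the FFI boundary. The function name and error convention below are made up for this sketch:

```rust
use std::panic;

// A Rust function exported to C. Unwinding across the FFI boundary is not
// allowed, so a panic is caught here and turned into an error code instead.
// (The other option is to build the whole program with panic = "abort".)
#[no_mangle]
pub extern "C" fn divide_or_minus_one(divisor: i32) -> i32 {
    match panic::catch_unwind(|| 100 / divisor) {
        Ok(value) => value,
        Err(_) => -1, // e.g. divisor == 0 panics in Rust; report it as -1
    }
}
```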

Same old LLVM back-end

Rust has good integration with LLVM, so it supports Link-Time Optimization, including ThinLTO and even inlining across C/C++/Rust language boundaries. There's profile-guided optimization, too. Even though rustc generates more verbose LLVM IR than clang, the optimizer can still deal with it pretty well.

Some of my C code is a bit faster when compiled with GCC than with LLVM, and there's no Rust front-end for GCC yet, so Rust misses out on that.

In theory, Rust allows even better optimizations than C thanks to stricter immutability and aliasing rules, but in practice this doesn't happen yet. Taking advantage of that extra aliasing information is still a work in progress in LLVM, so Rust hasn't reached its full potential here.
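
A sketch of what the stricter aliasing rules mean (the function is hypothetical):

```rust
// `total` is an exclusive &mut reference and `values` is a shared reference,
// so the compiler may assume they don't alias (much like C's `restrict`),
// which lets it keep `*total` in a register across the loop.
pub fn accumulate(total: &mut u64, values: &[u64]) {
    for v in values {
        *total += *v;
    }
}
```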

Both allow hand-tuning, with minor exceptions

Rust code is low-level and predictable enough that I can hand-tune what assembly it will optimize to. Rust supports SIMD intrinsics and gives good control over inlining, calling conventions, etc. Rust is similar enough to C that C profilers usually work with Rust out of the box (e.g. I can use Xcode's Instruments on a program that's a Rust-C-Swift sandwich).
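
As a sketch of that control, here's a hand-written 4-lane float add using SSE intrinsics from std::arch (the `add4` function is made up for this example):

```rust
#[cfg(target_arch = "x86_64")]
use std::arch::x86_64::{_mm_add_ps, _mm_loadu_ps, _mm_storeu_ps};

// Adds two 4-lane f32 vectors with SSE intrinsics. Attributes like
// #[target_feature] and #[inline] give C-like control over code generation.
#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "sse")]
unsafe fn add4(a: &[f32; 4], b: &[f32; 4]) -> [f32; 4] {
    let va = _mm_loadu_ps(a.as_ptr());
    let vb = _mm_loadu_ps(b.as_ptr());
    let mut out = [0.0f32; 4];
    _mm_storeu_ps(out.as_mut_ptr(), _mm_add_ps(va, vb));
    out
}

#[cfg(target_arch = "x86_64")]
fn main() {
    // Runtime feature detection; the unsafe block is the promise that the
    // CPU really has SSE.
    if is_x86_feature_detected!("sse") {
        let sum = unsafe { add4(&[1.0, 2.0, 3.0, 4.0], &[4.0, 3.0, 2.0, 1.0]) };
        assert_eq!(sum, [5.0; 4]);
    }
}

#[cfg(not(target_arch = "x86_64"))]
fn main() {}
```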

In general, where performance is absolutely critical and needs to be hand-optimized down to the last bit, optimizing Rust isn't much different from optimizing C.

There are some low-level features that Rust doesn't have a proper replacement for:

It's worth noting that Rust currently supports only one 16-bit architecture; tier 1 support is focused on 32-bit and 64-bit platforms.

Small overheads of Rust

However, where Rust isn't hand-tuned, some inefficiencies can creep in:

Executable sizes

Every operating system ships a built-in standard C library, roughly 30MB of code that C executables get for "free"; a small "Hello World" C executable can't actually print anything by itself, it only calls the printf shipped with the OS. Rust can't count on OSes having Rust's standard library built in, so Rust executables bundle their own standard library (300KB or more). Fortunately, that's a one-time overhead and it can be reduced. For embedded development, the standard library can be turned off and Rust will generate "bare" code.
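
A minimal sketch of what "bare" code looks like; the `_start` entry point and the empty panic handler are placeholders, and the exact setup (target, linker script, panic = "abort") depends on the platform:

```rust
#![no_std]
#![no_main]

use core::panic::PanicInfo;

// With no_std there's no allocator, no unwinding machinery, and no printf;
// the program has to provide its own panic handler.
#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
    loop {}
}

// The entry point name and calling convention depend on the target;
// `_start` is a common choice on bare-metal targets.
#[no_mangle]
pub extern "C" fn _start() -> ! {
    loop {}
}
```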

On a per-function basis Rust code is about the same size as C, but there's the problem of "generics bloat". Generic functions get optimized versions for each type they're used with, so it's possible to end up with 8 copies of the same function. cargo-bloat helps find these.
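
One common way to limit this is to keep the generic part thin and put the real work in a non-generic inner function that's compiled only once. `load_config` below is a hypothetical example of that pattern:

```rust
use std::fs;
use std::io;
use std::path::Path;

// The generic outer function is monomorphized once per argument type
// (&str, String, PathBuf, ...), but it's tiny. The actual work lives in the
// non-generic inner function, which exists only once in the binary.
pub fn load_config<P: AsRef<Path>>(path: P) -> io::Result<Vec<u8>> {
    fn inner(path: &Path) -> io::Result<Vec<u8>> {
        fs::read(path)
    }
    inner(path.as_ref())
}
```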

It's super easy to use dependencies in Rust. Similarly to JS/npm, there's a culture of making small single-purpose libraries, but they do add up. Eventually all my executables end up containing Unicode normalization tables, 7 different random number generators, and an HTTP/2 client with Brotli support. cargo-tree is useful for deduping and culling them.

Small wins for Rust

I've talked a lot about overheads, but Rust also has places where it ends up more efficient and faster:

Big win for Rust

Rust enforces thread-safety of all code and data, even in 3rd-party libraries, even if the authors of that code didn't pay attention to thread safety. Everything either upholds specific thread-safety guarantees, or won't be allowed to be used across threads. If I write code that isn't thread-safe, the compiler will point out exactly where it's unsafe.

That's a dramatically different situation than in C, where usually no library function can be trusted to be thread-safe unless it's clearly documented otherwise. It's up to the programmer to ensure all of the code is correct, and the compiler generally can't help with any of this. Multi-threaded C code carries a lot more responsibility and a lot more risk, so it's appealing to pretend that multi-core CPUs are just a fad, and to imagine that users have better things to do with the remaining 7 or 15 cores.

Rust guarantees freedom from data races and memory unsafety (e.g. use-after-free bugs, even across threads). Not just some races that could be found with heuristics or at runtime in instrumented builds, but all data races everywhere. This is life-saving, because data races are the worst kind of concurrency bugs. They'll happen on my users' machines, but not in my debugger. There are other kinds of concurrency bugs, such as poor use of locking primitives causing higher-level logical race conditions or deadlocks, and Rust can't eliminate them, but they're usually easier to reproduce and fix.
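
A small sketch of why the compiler can make that promise: shared mutable state has to go through a type that makes the sharing safe, such as a mutex. Handing the threads a plain `&mut` to the counter instead would be rejected at compile time.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let counter = Arc::new(Mutex::new(0u64));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..1_000 {
                    // The data is only reachable through the lock,
                    // so an unsynchronized write can't even be expressed.
                    *counter.lock().unwrap() += 1;
                }
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }
    assert_eq!(*counter.lock().unwrap(), 4_000);
}
```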

In C I don't dare to do more than a couple of OpenMP pragmas on simple for loops. I've tried being more adventurous with tasks and threads, and I ended up regretting it every time.

Rust has good libraries for data parallelism, thread pools, queues, tasks, lock-free data structures, etc. With the help of such building blocks, and the strong safety net of the type system, I can parallelize Rust programs quite easily. In some cases it's enough to replace iter() with par_iter(), and if it compiles, it works! It's not always a linear speed-up (Amdahl's law is brutal), but it's often a 2×-3× speed-up for relatively little work.
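
The iter()-to-par_iter() swap comes from the rayon crate; a minimal sketch, assuming rayon is added as a dependency:

```rust
use rayon::prelude::*;

// Sequential version: input.iter().map(|&x| x * x).sum()
// Parallel version: just swap iter() for par_iter(); the type system makes
// sure the closure is safe to run from rayon's worker threads.
fn sum_of_squares(input: &[i64]) -> i64 {
    input.par_iter().map(|&x| x * x).sum()
}

fn main() {
    let data: Vec<i64> = (1..=1_000).collect();
    assert_eq!(sum_of_squares(&data), 333_833_500);
}
```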

There's an interesting difference in how Rust and C libraries document thread-safety. Rust has a vocabulary for specific aspects of thread-safety, such as Send and Sync, guards and cells. In C, there's no word for "you can allocate it on one thread and free it on another, but you can't use it from two threads at once". Rust describes thread-safety in terms of data types, which generalizes to all functions using them. In C, thread-safety is talked about in the context of individual functions and configuration flags. Rust's guarantees tend to be compile-time, or at least unconditional. In C it's common to find "this is thread-safe only when the turboblub option is set to 7".
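
A sketch of that vocabulary in action: Rc is not Send, so the compiler refuses to move it to another thread, while Arc is Send + Sync and works fine.

```rust
use std::rc::Rc;
use std::sync::Arc;
use std::thread;

fn main() {
    let local = Rc::new(42);
    // Rc<i32> is not `Send`, so this line does not compile:
    // thread::spawn(move || println!("{}", local));
    drop(local);

    // Arc<i32> is `Send + Sync`, so moving it to another thread is allowed.
    let shared = Arc::new(42);
    thread::spawn(move || println!("{}", shared)).join().unwrap();
}
```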

To sum it up

Rust is low-level enough that, if necessary, it can be optimized for maximum performance just as well as C. Higher-level abstractions, easy memory management, and an abundance of available libraries tend to make Rust programs have more code and do more, and if left unchecked that can add up to bloat. However, Rust programs also optimize quite well, sometimes better than C. While C is good for writing minimal code on a byte-by-byte, pointer-by-pointer level, Rust has powerful features for efficiently combining multiple functions or even whole libraries together.

But the biggest potential is in the ability to fearlessly parallelize the majority of Rust code, even when the equivalent C code would be too risky to parallelize. In this aspect Rust is a much more mature language than C.