
Commit 6092708

ojeda authored and jannau committed on May 22, 2024
RFL import: rust: upgrade to Rust 1.76.0
This is the next upgrade to the Rust toolchain, from 1.75.0 to 1.76.0 (i.e. the latest) [1]. See the upgrade policy [2] and the comments on the first upgrade in commit 3ed03f4 ("rust: upgrade to Rust 1.68.2").

No unstable features that we use were stabilized in Rust 1.76.0. The only unstable features allowed to be used outside the `kernel` crate are still `new_uninit,offset_of`, though other code to be upstreamed may increase the list. Please see [3] for details.

`rustc` (and others) now warns when it cannot connect to the Make jobserver, so mark those invocations as recursive where needed. Please see the previous commit for details.

Rust 1.76.0 no longer emits the `.debug_pub{names,types}` sections for DWARFv4 [4][5]. For instance, in the uncompressed debug info case, this debug information took:

    samples/rust/rust_minimal.o    ~64 KiB (~18% of total object size)
    rust/kernel.o                  ~92 KiB (~15%)
    rust/core.o                   ~114 KiB ( ~5%)

In the compressed debug info (zlib) case:

    samples/rust/rust_minimal.o    ~11 KiB (~6%)
    rust/kernel.o                  ~17 KiB (~5%)
    rust/core.o                    ~21 KiB (~1.5%)

In addition, the `rustc_codegen_gcc` backend no longer emits the `.eh_frame` section when compiling under `-Cpanic=abort` [6], removing the need for the patch the CI used to compile the kernel [7]. Moreover, it now also emits the `.comment` section [6].

The vast majority of changes are due to our `alloc` fork being upgraded at once. There are two kinds of changes to be aware of: the ones coming from upstream, which we should follow as closely as possible, and the updates needed in our added fallible APIs to keep them matching the newer infallible APIs coming from upstream.

Instead of looking at the diff of this patch, an alternative approach is to review a diff of the changes between upstream `alloc` and the kernel's fork. This makes it easy to inspect the kernel additions only, especially to check whether the fallible methods we already have still match the infallible ones in the new version coming from upstream.

Another approach is to review the changes introduced to the kernel fork's additions between the two versions. This is useful to spot potentially unintended changes to our additions.

To apply these approaches, one may follow steps similar to the following to generate a pair of patches that show the differences between upstream Rust and the kernel (for the subset of `alloc` we use) before and after applying this patch:

    # Get the difference with respect to the old version.
    git -C rust checkout $(linux/scripts/min-tool-version.sh rustc)
    git -C linux ls-tree -r --name-only HEAD -- rust/alloc |
        cut -d/ -f3- |
        grep -Fv README.md |
        xargs -IPATH cp rust/library/alloc/src/PATH linux/rust/alloc/PATH
    git -C linux diff --patch-with-stat --summary -R > old.patch
    git -C linux restore rust/alloc

    # Apply this patch.
    git -C linux am rust-upgrade.patch

    # Get the difference with respect to the new version.
    git -C rust checkout $(linux/scripts/min-tool-version.sh rustc)
    git -C linux ls-tree -r --name-only HEAD -- rust/alloc |
        cut -d/ -f3- |
        grep -Fv README.md |
        xargs -IPATH cp rust/library/alloc/src/PATH linux/rust/alloc/PATH
    git -C linux diff --patch-with-stat --summary -R > new.patch
    git -C linux restore rust/alloc

Now one may check `new.patch` to look at the additions (first approach), or at the difference between the two patches (second approach). For the latter, a side-by-side diff tool is recommended.
Link: https://github.com/rust-lang/rust/blob/stable/RELEASES.md#version-1760-2024-02-08 [1]
Link: https://rust-for-linux.com/rust-version-policy [2]
Link: Rust-for-Linux#2 [3]
Link: rust-lang/compiler-team#688 [4]
Link: rust-lang/rust#117962 [5]
Link: rust-lang/rust#118068 [6]
Link: https://github.com/Rust-for-Linux/ci-rustc_codegen_gcc [7]
Tested-by: Boqun Feng <boqun.feng@gmail.com>
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Link: https://lore.kernel.org/r/20240217002638.57373-2-ojeda@kernel.org
Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
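For context, a minimal sketch (not from the commit) of what the two unstable features still allowed outside the `kernel` crate provide. On a 1.76.0 toolchain this only compiles with the feature gates below enabled (e.g. on a nightly toolchain); `offset_of!` was stabilized in a later release. The `Header` struct is a made-up example type:

    #![feature(new_uninit, offset_of)]

    use core::mem::{offset_of, MaybeUninit};

    #[repr(C)]
    struct Header {
        id: u32,
        len: u32,
    }

    fn main() {
        // `offset_of`: compute a field's byte offset without constructing a value.
        assert_eq!(offset_of!(Header, len), 4);

        // `new_uninit`: heap-allocate without initializing, then write in place.
        let mut slot: Box<MaybeUninit<Header>> = Box::new_uninit();
        let boxed: Box<Header> = unsafe {
            slot.as_mut_ptr().write(Header { id: 1, len: 0 });
            slot.assume_init()
        };
        assert_eq!(boxed.id, 1);
    }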
1 parent f1e6a71 commit 6092708

File tree: 9 files changed, +125 −44 lines
 

‎Documentation/process/changes.rst

+1 −1

@@ -31,7 +31,7 @@ you probably needn't concern yourself with pcmciautils.
 ====================== =============== ========================================
 GNU C                  5.1             gcc --version
 Clang/LLVM (optional)  11.0.0          clang --version
-Rust (optional)        1.75.0          rustc --version
+Rust (optional)        1.76.0          rustc --version
 bindgen (optional)     0.65.1          bindgen --version
 GNU make               3.82            make --version
 bash                   4.2             bash --version

‎rust/alloc/alloc.rs

+3 −0

@@ -426,12 +426,14 @@ pub mod __alloc_error_handler {
     }
 }
 
+#[cfg(not(no_global_oom_handling))]
 /// Specialize clones into pre-allocated, uninitialized memory.
 /// Used by `Box::clone` and `Rc`/`Arc::make_mut`.
 pub(crate) trait WriteCloneIntoRaw: Sized {
     unsafe fn write_clone_into_raw(&self, target: *mut Self);
 }
 
+#[cfg(not(no_global_oom_handling))]
 impl<T: Clone> WriteCloneIntoRaw for T {
     #[inline]
     default unsafe fn write_clone_into_raw(&self, target: *mut Self) {
@@ -441,6 +443,7 @@ impl<T: Clone> WriteCloneIntoRaw for T {
     }
 }
 
+#[cfg(not(no_global_oom_handling))]
 impl<T: Copy> WriteCloneIntoRaw for T {
     #[inline]
     unsafe fn write_clone_into_raw(&self, target: *mut Self) {
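The new gates keep `WriteCloneIntoRaw` out of `no_global_oom_handling` builds, where the infallible cloning paths that use it (`Box::clone`, `Rc`/`Arc::make_mut`) are compiled out. A rough stable-Rust sketch of the operation the trait abstracts; the free function here is illustrative only, and the real trait uses specialization so that `T: Copy` degrades to a plain bitwise copy:

    use std::mem::MaybeUninit;

    /// Clone `src` directly into pre-allocated, uninitialized memory.
    ///
    /// # Safety
    /// `target` must be valid for writes and properly aligned for `T`.
    unsafe fn write_clone_into_raw<T: Clone>(src: &T, target: *mut T) {
        // Caller guarantees `target` is writable and aligned.
        target.write(src.clone());
    }

    fn main() {
        let mut slot = MaybeUninit::<String>::uninit();
        let original = String::from("hello");
        unsafe { write_clone_into_raw(&original, slot.as_mut_ptr()) };
        let copy = unsafe { slot.assume_init() };
        assert_eq!(copy, original);
    }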

‎rust/alloc/boxed.rs

+11 −3

@@ -1042,10 +1042,18 @@ impl<T: ?Sized, A: Allocator> Box<T, A> {
     /// use std::ptr;
     ///
     /// let x = Box::new(String::from("Hello"));
-    /// let p = Box::into_raw(x);
+    /// let ptr = Box::into_raw(x);
+    /// unsafe {
+    ///     ptr::drop_in_place(ptr);
+    ///     dealloc(ptr as *mut u8, Layout::new::<String>());
+    /// }
+    /// ```
+    /// Note: This is equivalent to the following:
+    /// ```
+    /// let x = Box::new(String::from("Hello"));
+    /// let ptr = Box::into_raw(x);
     /// unsafe {
-    ///     ptr::drop_in_place(p);
-    ///     dealloc(p as *mut u8, Layout::new::<String>());
+    ///     drop(Box::from_raw(ptr));
     /// }
     /// ```
     ///
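The upstream documentation change above contrasts manual cleanup with the `Box::from_raw` round trip. A runnable version of both variants, using only stable standard-library APIs:

    use std::alloc::{dealloc, Layout};
    use std::ptr;

    fn main() {
        // Manual variant: drop the value in place, then free with the matching layout.
        let x = Box::new(String::from("Hello"));
        let ptr = Box::into_raw(x);
        unsafe {
            ptr::drop_in_place(ptr);
            dealloc(ptr as *mut u8, Layout::new::<String>());
        }

        // Equivalent variant: rebuild the Box and let it drop normally.
        let x = Box::new(String::from("Hello"));
        let ptr = Box::into_raw(x);
        unsafe {
            drop(Box::from_raw(ptr));
        }
    }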

‎rust/alloc/collections/mod.rs

+1 −0

@@ -150,6 +150,7 @@ impl Display for TryReserveError {
 
 /// An intermediate trait for specialization of `Extend`.
 #[doc(hidden)]
+#[cfg(not(no_global_oom_handling))]
 trait SpecExtend<I: IntoIterator> {
     /// Extends `self` with the contents of the given iterator.
     fn spec_extend(&mut self, iter: I);
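As context for the new gate (not part of the change itself): `SpecExtend` is the internal specialization hook behind `Extend::extend`, which backs the infallible growth path and therefore is only compiled when the global OOM handler is kept. Its effect is only observable through the public API, e.g.:

    fn main() {
        let mut v = vec![1u32, 2, 3];
        // `Vec::extend` dispatches through the hidden `SpecExtend` trait; for a
        // `TrustedLen` source like this range it can reserve the exact capacity once.
        v.extend(4..=6);
        assert_eq!(v, [1, 2, 3, 4, 5, 6]);
    }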

‎rust/alloc/lib.rs

+4 −4

@@ -81,8 +81,8 @@
     not(no_sync),
     target_has_atomic = "ptr"
 ))]
-#![cfg_attr(not(bootstrap), doc(rust_logo))]
-#![cfg_attr(not(bootstrap), feature(rustdoc_internals))]
+#![doc(rust_logo)]
+#![feature(rustdoc_internals)]
 #![no_std]
 #![needs_allocator]
 // Lints:
@@ -144,7 +144,6 @@
 #![feature(maybe_uninit_uninit_array)]
 #![feature(maybe_uninit_uninit_array_transpose)]
 #![feature(pattern)]
-#![feature(ptr_addr_eq)]
 #![feature(ptr_internals)]
 #![feature(ptr_metadata)]
 #![feature(ptr_sub_ptr)]
@@ -159,6 +158,7 @@
 #![feature(std_internals)]
 #![feature(str_internals)]
 #![feature(strict_provenance)]
+#![feature(trusted_fused)]
 #![feature(trusted_len)]
 #![feature(trusted_random_access)]
 #![feature(try_trait_v2)]
@@ -278,7 +278,7 @@ pub(crate) mod test_helpers {
     /// seed not being the same for every RNG invocation too.
     pub(crate) fn test_rng() -> rand_xorshift::XorShiftRng {
         use std::hash::{BuildHasher, Hash, Hasher};
-        let mut hasher = std::collections::hash_map::RandomState::new().build_hasher();
+        let mut hasher = std::hash::RandomState::new().build_hasher();
         std::panic::Location::caller().hash(&mut hasher);
         let hc64 = hasher.finish();
         let seed_vec =
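One upstream change worth noting in the last hunk: as of Rust 1.76, `RandomState` is re-exported from `std::hash`, so `test_rng` can drop the longer `std::collections::hash_map` path. A small stand-alone illustration of the new path:

    use std::hash::{BuildHasher, Hash, Hasher, RandomState};

    fn main() {
        // Same seed-material derivation as `test_rng` above, minus the RNG itself.
        let mut hasher = RandomState::new().build_hasher();
        std::panic::Location::caller().hash(&mut hasher);
        println!("seed material: {:#x}", hasher.finish());
    }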

‎rust/alloc/raw_vec.rs

+42 −16

@@ -27,6 +27,16 @@ enum AllocInit {
     Zeroed,
 }
 
+#[repr(transparent)]
+#[cfg_attr(target_pointer_width = "16", rustc_layout_scalar_valid_range_end(0x7fff))]
+#[cfg_attr(target_pointer_width = "32", rustc_layout_scalar_valid_range_end(0x7fff_ffff))]
+#[cfg_attr(target_pointer_width = "64", rustc_layout_scalar_valid_range_end(0x7fff_ffff_ffff_ffff))]
+struct Cap(usize);
+
+impl Cap {
+    const ZERO: Cap = unsafe { Cap(0) };
+}
+
 /// A low-level utility for more ergonomically allocating, reallocating, and deallocating
 /// a buffer of memory on the heap without having to worry about all the corner cases
 /// involved. This type is excellent for building your own data structures like Vec and VecDeque.
@@ -52,7 +62,12 @@ enum AllocInit {
 #[allow(missing_debug_implementations)]
 pub(crate) struct RawVec<T, A: Allocator = Global> {
     ptr: Unique<T>,
-    cap: usize,
+    /// Never used for ZSTs; it's `capacity()`'s responsibility to return usize::MAX in that case.
+    ///
+    /// # Safety
+    ///
+    /// `cap` must be in the `0..=isize::MAX` range.
+    cap: Cap,
     alloc: A,
 }
 
@@ -121,7 +136,7 @@ impl<T, A: Allocator> RawVec<T, A> {
     /// the returned `RawVec`.
     pub const fn new_in(alloc: A) -> Self {
         // `cap: 0` means "unallocated". zero-sized types are ignored.
-        Self { ptr: Unique::dangling(), cap: 0, alloc }
+        Self { ptr: Unique::dangling(), cap: Cap::ZERO, alloc }
     }
 
     /// Like `with_capacity`, but parameterized over the choice of
@@ -203,7 +218,7 @@ impl<T, A: Allocator> RawVec<T, A> {
         // here should change to `ptr.len() / mem::size_of::<T>()`.
         Self {
             ptr: unsafe { Unique::new_unchecked(ptr.cast().as_ptr()) },
-            cap: capacity,
+            cap: unsafe { Cap(capacity) },
             alloc,
         }
     }
@@ -228,7 +243,7 @@ impl<T, A: Allocator> RawVec<T, A> {
         // here should change to `ptr.len() / mem::size_of::<T>()`.
         Ok(Self {
             ptr: unsafe { Unique::new_unchecked(ptr.cast().as_ptr()) },
-            cap: capacity,
+            cap: unsafe { Cap(capacity) },
            alloc,
         })
     }
@@ -240,12 +255,13 @@ impl<T, A: Allocator> RawVec<T, A> {
     /// The `ptr` must be allocated (via the given allocator `alloc`), and with the given
     /// `capacity`.
     /// The `capacity` cannot exceed `isize::MAX` for sized types. (only a concern on 32-bit
-    /// systems). ZST vectors may have a capacity up to `usize::MAX`.
+    /// systems). For ZSTs capacity is ignored.
     /// If the `ptr` and `capacity` come from a `RawVec` created via `alloc`, then this is
     /// guaranteed.
     #[inline]
     pub unsafe fn from_raw_parts_in(ptr: *mut T, capacity: usize, alloc: A) -> Self {
-        Self { ptr: unsafe { Unique::new_unchecked(ptr) }, cap: capacity, alloc }
+        let cap = if T::IS_ZST { Cap::ZERO } else { unsafe { Cap(capacity) } };
+        Self { ptr: unsafe { Unique::new_unchecked(ptr) }, cap, alloc }
     }
 
     /// Gets a raw pointer to the start of the allocation. Note that this is
@@ -261,7 +277,7 @@ impl<T, A: Allocator> RawVec<T, A> {
     /// This will always be `usize::MAX` if `T` is zero-sized.
     #[inline(always)]
     pub fn capacity(&self) -> usize {
-        if T::IS_ZST { usize::MAX } else { self.cap }
+        if T::IS_ZST { usize::MAX } else { self.cap.0 }
     }
 
     /// Returns a shared reference to the allocator backing this `RawVec`.
@@ -270,7 +286,7 @@ impl<T, A: Allocator> RawVec<T, A> {
     }
 
     fn current_memory(&self) -> Option<(NonNull<u8>, Layout)> {
-        if T::IS_ZST || self.cap == 0 {
+        if T::IS_ZST || self.cap.0 == 0 {
             None
         } else {
             // We could use Layout::array here which ensures the absence of isize and usize overflows
@@ -280,7 +296,7 @@ impl<T, A: Allocator> RawVec<T, A> {
             let _: () = const { assert!(mem::size_of::<T>() % mem::align_of::<T>() == 0) };
             unsafe {
                 let align = mem::align_of::<T>();
-                let size = mem::size_of::<T>().unchecked_mul(self.cap);
+                let size = mem::size_of::<T>().unchecked_mul(self.cap.0);
                 let layout = Layout::from_size_align_unchecked(size, align);
                 Some((self.ptr.cast().into(), layout))
             }
@@ -414,12 +430,15 @@ impl<T, A: Allocator> RawVec<T, A> {
         additional > self.capacity().wrapping_sub(len)
     }
 
-    fn set_ptr_and_cap(&mut self, ptr: NonNull<[u8]>, cap: usize) {
+    /// # Safety:
+    ///
+    /// `cap` must not exceed `isize::MAX`.
+    unsafe fn set_ptr_and_cap(&mut self, ptr: NonNull<[u8]>, cap: usize) {
         // Allocators currently return a `NonNull<[u8]>` whose length matches
         // the size requested. If that ever changes, the capacity here should
         // change to `ptr.len() / mem::size_of::<T>()`.
         self.ptr = unsafe { Unique::new_unchecked(ptr.cast().as_ptr()) };
-        self.cap = cap;
+        self.cap = unsafe { Cap(cap) };
     }
 
     // This method is usually instantiated many times. So we want it to be as
@@ -444,14 +463,15 @@ impl<T, A: Allocator> RawVec<T, A> {
 
         // This guarantees exponential growth. The doubling cannot overflow
        // because `cap <= isize::MAX` and the type of `cap` is `usize`.
-        let cap = cmp::max(self.cap * 2, required_cap);
+        let cap = cmp::max(self.cap.0 * 2, required_cap);
         let cap = cmp::max(Self::MIN_NON_ZERO_CAP, cap);
 
         let new_layout = Layout::array::<T>(cap);
 
         // `finish_grow` is non-generic over `T`.
         let ptr = finish_grow(new_layout, self.current_memory(), &mut self.alloc)?;
-        self.set_ptr_and_cap(ptr, cap);
+        // SAFETY: finish_grow would have resulted in a capacity overflow if we tried to allocate more than isize::MAX items
+        unsafe { self.set_ptr_and_cap(ptr, cap) };
         Ok(())
     }
 
@@ -470,7 +490,10 @@ impl<T, A: Allocator> RawVec<T, A> {
 
         // `finish_grow` is non-generic over `T`.
         let ptr = finish_grow(new_layout, self.current_memory(), &mut self.alloc)?;
-        self.set_ptr_and_cap(ptr, cap);
+        // SAFETY: finish_grow would have resulted in a capacity overflow if we tried to allocate more than isize::MAX items
+        unsafe {
+            self.set_ptr_and_cap(ptr, cap);
+        }
         Ok(())
     }
 
@@ -488,7 +511,7 @@ impl<T, A: Allocator> RawVec<T, A> {
         if cap == 0 {
             unsafe { self.alloc.deallocate(ptr, layout) };
             self.ptr = Unique::dangling();
-            self.cap = 0;
+            self.cap = Cap::ZERO;
         } else {
             let ptr = unsafe {
                 // `Layout::array` cannot overflow here because it would have
@@ -499,7 +522,10 @@ impl<T, A: Allocator> RawVec<T, A> {
                     .shrink(ptr, layout, new_layout)
                     .map_err(|_| AllocError { layout: new_layout, non_exhaustive: () })?
             };
-            self.set_ptr_and_cap(ptr, cap);
+            // SAFETY: if the allocation is valid, then the capacity is too
+            unsafe {
+                self.set_ptr_and_cap(ptr, cap);
+            }
         }
         Ok(())
     }
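The upstream `Cap` newtype above encodes the `cap <= isize::MAX` invariant in the type: constructing one is `unsafe`, and the `rustc_layout_scalar_valid_range_end` attribute (internal to the standard library) additionally tells the compiler the excluded upper values are a usable niche. A rough stable-Rust approximation of the invariant-carrying-newtype idea, without the internal attribute and with a hypothetical checked constructor `new`:

    #[repr(transparent)]
    struct Cap(usize);

    impl Cap {
        const ZERO: Cap = Cap(0);

        /// Checked constructor: rejects capacities above `isize::MAX`,
        /// mirroring the invariant documented on `RawVec::cap`.
        fn new(cap: usize) -> Option<Cap> {
            (cap <= isize::MAX as usize).then_some(Cap(cap))
        }
    }

    fn main() {
        assert!(Cap::new(4096).is_some());
        assert!(Cap::new(usize::MAX).is_none());
        assert_eq!(Cap::ZERO.0, 0);
    }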

‎rust/alloc/vec/into_iter.rs

+11 −5

@@ -9,7 +9,8 @@ use crate::raw_vec::RawVec;
 use core::array;
 use core::fmt;
 use core::iter::{
-    FusedIterator, InPlaceIterable, SourceIter, TrustedLen, TrustedRandomAccessNoCoerce,
+    FusedIterator, InPlaceIterable, SourceIter, TrustedFused, TrustedLen,
+    TrustedRandomAccessNoCoerce,
 };
 use core::marker::PhantomData;
 use core::mem::{self, ManuallyDrop, MaybeUninit, SizedTypeProperties};
@@ -287,9 +288,7 @@ impl<T, A: Allocator> Iterator for IntoIter<T, A> {
         // Also note the implementation of `Self: TrustedRandomAccess` requires
         // that `T: Copy` so reading elements from the buffer doesn't invalidate
         // them for `Drop`.
-        unsafe {
-            if T::IS_ZST { mem::zeroed() } else { ptr::read(self.ptr.add(i)) }
-        }
+        unsafe { if T::IS_ZST { mem::zeroed() } else { ptr::read(self.ptr.add(i)) } }
     }
 }
 
@@ -341,6 +340,10 @@ impl<T, A: Allocator> ExactSizeIterator for IntoIter<T, A> {
 #[stable(feature = "fused", since = "1.26.0")]
 impl<T, A: Allocator> FusedIterator for IntoIter<T, A> {}
 
+#[doc(hidden)]
+#[unstable(issue = "none", feature = "trusted_fused")]
+unsafe impl<T, A: Allocator> TrustedFused for IntoIter<T, A> {}
+
 #[unstable(feature = "trusted_len", issue = "37572")]
 unsafe impl<T, A: Allocator> TrustedLen for IntoIter<T, A> {}
 
@@ -425,7 +428,10 @@ unsafe impl<#[may_dangle] T, A: Allocator> Drop for IntoIter<T, A> {
 // also refer to the vec::in_place_collect module documentation to get an overview
 #[unstable(issue = "none", feature = "inplace_iteration")]
 #[doc(hidden)]
-unsafe impl<T, A: Allocator> InPlaceIterable for IntoIter<T, A> {}
+unsafe impl<T, A: Allocator> InPlaceIterable for IntoIter<T, A> {
+    const EXPAND_BY: Option<NonZeroUsize> = NonZeroUsize::new(1);
+    const MERGE_BY: Option<NonZeroUsize> = NonZeroUsize::new(1);
+}
 
 #[unstable(issue = "none", feature = "inplace_iteration")]
 #[doc(hidden)]
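Not part of the diff: the new `EXPAND_BY`/`MERGE_BY` constants above parameterize the in-place-collect specialization (roughly, how many output elements one input item may expand to, and how many input items may merge into one). The optimization itself is only observable indirectly, and allocation reuse is never guaranteed, so this sketch just prints the buffer addresses before and after:

    fn main() {
        let v: Vec<u32> = (0..1024).collect();
        let before = v.as_ptr();
        // Same element size and alignment, so `collect` may reuse `v`'s allocation
        // through the in-place-iteration machinery.
        let w: Vec<u32> = v.into_iter().map(|x| x.wrapping_mul(3)).collect();
        println!("before: {:p}, after: {:p}", before, w.as_ptr());
    }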
(The remaining files of the 9 changed in this commit are not shown.)
