<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux-stable.git/arch/x86/include, branch linux-3.4.y</title>
<subtitle>Linux kernel stable tree</subtitle>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux-stable.git/'/>
<entry>
<title>x86/iopl/64: Properly context-switch IOPL on Xen PV</title>
<updated>2016-04-27T10:55:30+00:00</updated>
<author>
<name>Andy Lutomirski</name>
<email>luto@kernel.org</email>
</author>
<published>2016-03-16T21:14:21+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux-stable.git/commit/?id=97e0c9082179f7abe08d6921d3594749e2431542'/>
<id>97e0c9082179f7abe08d6921d3594749e2431542</id>
<content type='text'>
commit b7a584598aea7ca73140cb87b40319944dd3393f upstream.

On Xen PV, regs-&gt;flags doesn't reliably reflect IOPL and the
exit-to-userspace code doesn't change IOPL.  We need to context
switch it manually.

I'm doing this without going through paravirt because this is
specific to Xen PV.  After the dust settles, we can merge this with
the 32-bit code, tidy up the iopl syscall implementation, and remove
the set_iopl pvop entirely.

Fixes XSA-171.

Reviewed-by: Jan Beulich &lt;JBeulich@suse.com&gt;
Signed-off-by: Andy Lutomirski &lt;luto@kernel.org&gt;
Cc: Andrew Cooper &lt;andrew.cooper3@citrix.com&gt;
Cc: Andy Lutomirski &lt;luto@amacapital.net&gt;
Cc: Boris Ostrovsky &lt;boris.ostrovsky@oracle.com&gt;
Cc: Borislav Petkov &lt;bp@alien8.de&gt;
Cc: Brian Gerst &lt;brgerst@gmail.com&gt;
Cc: David Vrabel &lt;david.vrabel@citrix.com&gt;
Cc: Denys Vlasenko &lt;dvlasenk@redhat.com&gt;
Cc: H. Peter Anvin &lt;hpa@zytor.com&gt;
Cc: Jan Beulich &lt;JBeulich@suse.com&gt;
Cc: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Cc: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Link: http://lkml.kernel.org/r/693c3bd7aeb4d3c27c92c622b7d0f554a458173c.1458162709.git.luto@kernel.org
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
[ kamal: backport to 3.19-stable: no X86_FEATURE_XENPV so just call
  xen_pv_domain() directly ]
Acked-by: Andy Lutomirski &lt;luto &lt;at&gt; kernel.org&gt;
Signed-off-by: Kamal Mostafa &lt;kamal &lt;at&gt; canonical.com&gt;
Signed-off-by: Zefan Li &lt;lizefan@huawei.com&gt;
</content>
</entry>
<entry>
<title>KVM: x86: trap AMD MSRs for the TSeg base and mask</title>
<updated>2016-04-27T10:55:23+00:00</updated>
<author>
<name>Paolo Bonzini</name>
<email>pbonzini@redhat.com</email>
</author>
<published>2015-09-18T15:33:04+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux-stable.git/commit/?id=7d148ce4451a12ca1b226373ccda950718154140'/>
<id>7d148ce4451a12ca1b226373ccda950718154140</id>
<content type='text'>
commit 3afb1121800128aae9f5722e50097fcf1a9d4d88 upstream.

These have roughly the same purpose as the SMRR, which we do not need
to implement in KVM.  However, Linux accesses MSR_K8_TSEG_ADDR at
boot, which causes problems when running a Xen dom0 under KVM.
Just return 0, meaning that processor protection of SMRAM is not
in effect.

Reported-by: M A Young &lt;m.a.young@durham.ac.uk&gt;
Acked-by: Borislav Petkov &lt;bp@suse.de&gt;
Signed-off-by: Paolo Bonzini &lt;pbonzini@redhat.com&gt;
Signed-off-by: Zefan Li &lt;lizefan@huawei.com&gt;
</content>
</entry>
<entry>
<title>KVM: x86: work around infinite loop in microcode when #AC is delivered</title>
<updated>2016-03-21T01:17:55+00:00</updated>
<author>
<name>Eric Northup</name>
<email>digitaleric@google.com</email>
</author>
<published>2015-11-03T17:03:53+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux-stable.git/commit/?id=e1b6c3c95cbd3bb5947277e45cf4b7527020af11'/>
<id>e1b6c3c95cbd3bb5947277e45cf4b7527020af11</id>
<content type='text'>
commit 54a20552e1eae07aa240fa370a0293e006b5faed upstream.

It was found that a guest can DoS a host by triggering an infinite
stream of "alignment check" (#AC) exceptions.  This causes the
microcode to enter an infinite loop where the core never receives
another interrupt.  The host kernel panics pretty quickly due to the
effects (CVE-2015-5307).

Signed-off-by: Eric Northup &lt;digitaleric@google.com&gt;
Signed-off-by: Paolo Bonzini &lt;pbonzini@redhat.com&gt;
[lizf: Backported to 3.4:
 - adjust filename
 - adjust context
 - add definition of AC_VECTOR]
Signed-off-by: Zefan Li &lt;lizefan@huawei.com&gt;
</content>
</entry>
<entry>
<title>x86/ldt: Make modify_ldt synchronous</title>
<updated>2016-03-21T01:17:50+00:00</updated>
<author>
<name>Andy Lutomirski</name>
<email>luto@kernel.org</email>
</author>
<published>2015-07-30T21:31:32+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux-stable.git/commit/?id=0444d9dfacb4fa42005ca3afefe56b16f07c5192'/>
<id>0444d9dfacb4fa42005ca3afefe56b16f07c5192</id>
<content type='text'>
commit 37868fe113ff2ba814b3b4eb12df214df555f8dc upstream.

modify_ldt() has questionable locking and does not synchronize
threads.  Improve it: redesign the locking and synchronize all
threads' LDTs using an IPI on all modifications.

This will dramatically slow down modify_ldt in multithreaded
programs, but there shouldn't be any multithreaded programs that
care about modify_ldt's performance in the first place.

This fixes some fallout from the CVE-2015-5157 fixes.

Signed-off-by: Andy Lutomirski &lt;luto@kernel.org&gt;
Reviewed-by: Borislav Petkov &lt;bp@suse.de&gt;
Cc: Andrew Cooper &lt;andrew.cooper3@citrix.com&gt;
Cc: Andy Lutomirski &lt;luto@amacapital.net&gt;
Cc: Boris Ostrovsky &lt;boris.ostrovsky@oracle.com&gt;
Cc: Borislav Petkov &lt;bp@alien8.de&gt;
Cc: Brian Gerst &lt;brgerst@gmail.com&gt;
Cc: Denys Vlasenko &lt;dvlasenk@redhat.com&gt;
Cc: H. Peter Anvin &lt;hpa@zytor.com&gt;
Cc: Jan Beulich &lt;jbeulich@suse.com&gt;
Cc: Konrad Rzeszutek Wilk &lt;konrad.wilk@oracle.com&gt;
Cc: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Cc: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: Sasha Levin &lt;sasha.levin@oracle.com&gt;
Cc: Steven Rostedt &lt;rostedt@goodmis.org&gt;
Cc: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Cc: security@kernel.org &lt;security@kernel.org&gt;
Cc: xen-devel &lt;xen-devel@lists.xen.org&gt;
Link: http://lkml.kernel.org/r/4c6978476782160600471bd865b318db34c7b628.1438291540.git.luto@kernel.org
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
[bwh: Backported to 3.2:
 - Adjust context
 - Drop comment changes in switch_mm()
 - Drop changes to get_segment_base() in arch/x86/kernel/cpu/perf_event.c
 - Open-code lockless_dereference(), smp_store_release(), on_each_cpu_mask()]
Signed-off-by: Ben Hutchings &lt;ben@decadent.org.uk&gt;
[lizf: Backported to 3.4: adjust context]
Signed-off-by: Zefan Li &lt;lizefan@huawei.com&gt;
</content>
</entry>
<entry>
<title>KVM: x86: make vapics_in_nmi_mode atomic</title>
<updated>2015-10-22T01:20:07+00:00</updated>
<author>
<name>Radim Krčmář</name>
<email>rkrcmar@redhat.com</email>
</author>
<published>2015-07-01T13:31:49+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux-stable.git/commit/?id=08c730489602b6e3092e54180f47a7b3bfc32172'/>
<id>08c730489602b6e3092e54180f47a7b3bfc32172</id>
<content type='text'>
commit 42720138b06301cc8a7ee8a495a6d021c4b6a9bc upstream.

Writes were a bit racy, but hard to turn into a bug at the same time.
(Particularly because modern Linux doesn't use this feature anymore.)

Signed-off-by: Radim Krčmář &lt;rkrcmar@redhat.com&gt;
[Actually the next patch makes it much, much easier to trigger the race
 so I'm including this one for stable@ as well. - Paolo]
Signed-off-by: Paolo Bonzini &lt;pbonzini@redhat.com&gt;
[lizf: Backported to 3.4: adjust context]
Signed-off-by: Zefan Li &lt;lizefan@huawei.com&gt;
</content>
</entry>
<entry>
<title>x86, kvm: fix kvm's usage of kernel_fpu_begin/end()</title>
<updated>2015-09-18T01:20:46+00:00</updated>
<author>
<name>Suresh Siddha</name>
<email>suresh.b.siddha@intel.com</email>
</author>
<published>2012-09-20T18:01:49+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux-stable.git/commit/?id=b10e02daa7abe24c0431cc7a41ead5f628839c01'/>
<id>b10e02daa7abe24c0431cc7a41ead5f628839c01</id>
<content type='text'>
commit b1a74bf8212367be2b1d6685c11a84e056eaaaf1 upstream.

Preemption is disabled between kernel_fpu_begin/end() and as such
it is not a good idea to use these routines in kvm_load/put_guest_fpu()
which can be very far apart.

kvm_load/put_guest_fpu() routines are already called with
preemption disabled and KVM already uses the preempt notifier to save
the guest fpu state using kvm_put_guest_fpu().

So introduce __kernel_fpu_begin/end() routines which don't touch
preemption and use them instead of kernel_fpu_begin/end()
for KVM's use model of saving/restoring guest FPU state.

Also with this change (and with eagerFPU model), fix the host cr0.TS vm-exit
state in the case of VMX. For eagerFPU case, host cr0.TS is always clear.
So no need to worry about it. For the traditional lazyFPU restore case,
change the cr0.TS bit for the host state during vm-exit to be always clear
and cr0.TS bit is set in the __vmx_load_host_state() when the FPU
(guest FPU or the host task's FPU) state is not active. This ensures
that the host/guest FPU state is properly saved, restored
during context-switch and with interrupts (using irq_fpu_usable()) not
stomping on the active FPU state.

Signed-off-by: Suresh Siddha &lt;suresh.b.siddha@intel.com&gt;
Link: http://lkml.kernel.org/r/1348164109.26695.338.camel@sbsiddha-desk.sc.intel.com
Cc: Avi Kivity &lt;avi@redhat.com&gt;
Signed-off-by: H. Peter Anvin &lt;hpa@linux.intel.com&gt;
Signed-off-by: Zefan Li &lt;lizefan@huawei.com&gt;
[xr: Backported to 3.4: Adjust context]
Signed-off-by: Rui Xiang &lt;rui.xiang@huawei.com&gt;
Signed-off-by: Zefan Li &lt;lizefan@huawei.com&gt;
</content>
</entry>
<entry>
<title>x86/iommu: Fix header comments regarding standard and _FINISH macros</title>
<updated>2015-09-18T01:20:28+00:00</updated>
<author>
<name>Aravind Gopalakrishnan</name>
<email>Aravind.Gopalakrishnan@amd.com</email>
</author>
<published>2015-04-09T08:51:48+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux-stable.git/commit/?id=c53202b9bdc897f358cbf3dd6c1fe8df8ccb9cf0'/>
<id>c53202b9bdc897f358cbf3dd6c1fe8df8ccb9cf0</id>
<content type='text'>
commit b44915927ca88084a7292e4ddd4cf91036f365e1 upstream.

The comment line regarding IOMMU_INIT and IOMMU_INIT_FINISH
macros is incorrect:

  "The standard vs the _FINISH differs in that the _FINISH variant
  will continue detecting other IOMMUs in the call list..."

It should be "..the *standard* variant will continue
detecting..."

Fix that. Also, make it readable while at it.

Signed-off-by: Aravind Gopalakrishnan &lt;Aravind.Gopalakrishnan@amd.com&gt;
Signed-off-by: Borislav Petkov &lt;bp@suse.de&gt;
Cc: H. Peter Anvin &lt;hpa@zytor.com&gt;
Cc: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Cc: konrad.wilk@oracle.com
Fixes: 6e9636693373 ("x86, iommu: Update header comments with appropriate naming")
Link: http://lkml.kernel.org/r/1428508017-5316-1-git-send-email-Aravind.Gopalakrishnan@amd.com
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
Signed-off-by: Zefan Li &lt;lizefan@huawei.com&gt;
</content>
</entry>
<entry>
<title>x86, cpu, amd: Add workaround for family 16h, erratum 793</title>
<updated>2015-04-14T09:34:03+00:00</updated>
<author>
<name>Borislav Petkov</name>
<email>bp@suse.de</email>
</author>
<published>2014-01-14T23:07:11+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux-stable.git/commit/?id=56a59c3ea2fc70fc8288ca4407277bf53044cd82'/>
<id>56a59c3ea2fc70fc8288ca4407277bf53044cd82</id>
<content type='text'>
commit 3b56496865f9f7d9bcb2f93b44c63f274f08e3b6 upstream.

This adds the workaround for erratum 793 as a precaution in case not
every BIOS implements it.  This addresses CVE-2013-6885.

Erratum text:

[Revision Guide for AMD Family 16h Models 00h-0Fh Processors,
document 51810 Rev. 3.04 November 2013]

793 Specific Combination of Writes to Write Combined Memory Types and
Locked Instructions May Cause Core Hang

Description

Under a highly specific and detailed set of internal timing
conditions, a locked instruction may trigger a timing sequence whereby
the write to a write combined memory type is not flushed, causing the
locked instruction to stall indefinitely.

Potential Effect on System

Processor core hang.

Suggested Workaround

BIOS should set MSR
C001_1020[15] = 1b.

Fix Planned

No fix planned

[ hpa: updated description, fixed typo in MSR name ]

Signed-off-by: Borislav Petkov &lt;bp@suse.de&gt;
Link: http://lkml.kernel.org/r/20140114230711.GS29865@pd.tnic
Tested-by: Aravind Gopalakrishnan &lt;aravind.gopalakrishnan@amd.com&gt;
Signed-off-by: H. Peter Anvin &lt;hpa@linux.intel.com&gt;
[bwh: Backported to 3.2:
 - Adjust filename
 - Venkatesh Srinivas pointed out we should use {rd,wr}msrl_safe() to
   avoid crashing on KVM.  This was fixed upstream by commit 8f86a7373a1c
   ("x86, AMD: Convert to the new bit access MSR accessors") but that's too
   much trouble to backport.  Here we must use {rd,wr}msrl_amd_safe().]
Signed-off-by: Ben Hutchings &lt;ben@decadent.org.uk&gt;
Cc: Moritz Muehlenhoff &lt;jmm@debian.org&gt;
Cc: Venkatesh Srinivas &lt;venkateshs@google.com&gt;
Signed-off-by: Zefan Li &lt;lizefan@huawei.com&gt;
</content>
</entry>
<entry>
<title>x86/tls: Don't validate lm in set_thread_area() after all</title>
<updated>2015-04-14T09:33:46+00:00</updated>
<author>
<name>Andy Lutomirski</name>
<email>luto@amacapital.net</email>
</author>
<published>2014-12-17T22:48:30+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux-stable.git/commit/?id=54c93a65f5ac2f4d2d4a2a2eb633191a2527e5ab'/>
<id>54c93a65f5ac2f4d2d4a2a2eb633191a2527e5ab</id>
<content type='text'>
commit 3fb2f4237bb452eb4e98f6a5dbd5a445b4fed9d0 upstream.

It turns out that there's a lurking ABI issue.  GCC, when
compiling this in a 32-bit program:

struct user_desc desc = {
	.entry_number    = idx,
	.base_addr       = base,
	.limit           = 0xfffff,
	.seg_32bit       = 1,
	.contents        = 0, /* Data, grow-up */
	.read_exec_only  = 0,
	.limit_in_pages  = 1,
	.seg_not_present = 0,
	.useable         = 0,
};

will leave .lm uninitialized.  This means that anything in the
kernel that reads user_desc.lm for 32-bit tasks is unreliable.

Revert the .lm check in set_thread_area().  The value never did
anything in the first place.

Fixes: 0e58af4e1d21 ("x86/tls: Disallow unusual TLS segments")
Signed-off-by: Andy Lutomirski &lt;luto@amacapital.net&gt;
Acked-by: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Cc: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Link: http://lkml.kernel.org/r/d7875b60e28c512f6a6fc0baf5714d58e7eaadbb.1418856405.git.luto@amacapital.net
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
[lizf: Backported to 3.4: adjust filename]
Signed-off-by: Zefan Li &lt;lizefan@huawei.com&gt;
</content>
</entry>
<entry>
<title>x86, tls: Interpret an all-zero struct user_desc as "no segment"</title>
<updated>2015-02-02T09:05:25+00:00</updated>
<author>
<name>Andy Lutomirski</name>
<email>luto@amacapital.net</email>
</author>
<published>2015-01-22T19:27:59+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux-stable.git/commit/?id=29adeacbed114e97e7aaee3bb2e4be65ec806dcb'/>
<id>29adeacbed114e97e7aaee3bb2e4be65ec806dcb</id>
<content type='text'>
commit 3669ef9fa7d35f573ec9c0e0341b29251c2734a7 upstream.

The Witcher 2 did something like this to allocate a TLS segment index:

        struct user_desc u_info;
        bzero(&amp;u_info, sizeof(u_info));
        u_info.entry_number = (uint32_t)-1;

        syscall(SYS_set_thread_area, &amp;u_info);

Strictly speaking, this code was never correct.  It should have set
read_exec_only and seg_not_present to 1 to indicate that it wanted
to find a free slot without putting anything there, or it should
have put something sensible in the TLS slot if it wanted to allocate
a TLS entry for real.  The actual effect of this code was to
allocate a bogus segment that could be used to exploit espfix.

The set_thread_area hardening patches changed the behavior, causing
set_thread_area to return -EINVAL and crashing the game.

This changes set_thread_area to interpret this as a request to find
a free slot and to leave it empty, which isn't *quite* what the game
expects but should be close enough to keep it working.  In
particular, using the code above to allocate two segments will
allocate the same segment both times.

According to FrostbittenKing on Github, this fixes The Witcher 2.

If this somehow still causes problems, we could instead allocate
a limit==0 32-bit data segment, but that seems rather ugly to me.

Fixes: 41bdc78544b8 ("x86/tls: Validate TLS entries to protect espfix")
Signed-off-by: Andy Lutomirski &lt;luto@amacapital.net&gt;
Cc: torvalds@linux-foundation.org
Link: http://lkml.kernel.org/r/0cb251abe1ff0958b8e468a9a9a905b80ae3a746.1421954363.git.luto@amacapital.net
Signed-off-by: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Signed-off-by: Zefan Li &lt;lizefan@huawei.com&gt;
</content>
</entry>
</feed>
