* crypto: talitos - add IPsec ESN support  (Horia Geanta, 2012-08-28; 1 file, -2/+28)

  Support for ESNs (extended sequence numbers). Tested with strongswan on a P2020RDB back-to-back setup. Extracted from /etc/ipsec.conf: esp=aes-sha1-esn-modp4096!

  Signed-off-by: Horia Geanta <horia.geanta@freescale.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: talitos - support for assoc data provided as scatterlist  (Horia Geanta, 2012-08-28; 1 file, -51/+125)

  Generate a link table in case assoc data is a scatterlist. While at it, add support for handling non-contiguous assoc data and iv.

  Signed-off-by: Horia Geanta <horia.geanta@freescale.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: talitos - change type and name for [src|dst]_is_chained  (Horia Geanta, 2012-08-28; 1 file, -21/+20)

  It's more natural to think of these vars as bool rather than int.

  Signed-off-by: Horia Geanta <horia.geanta@freescale.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: talitos - prune unneeded descriptor allocation param  (Horia Geanta, 2012-08-28; 1 file, -7/+6)

  talitos_edesc_alloc does not need the hash_result param. Checking whether the dst scatterlist is NULL or not is all that is required.

  Signed-off-by: Horia Geanta <horia.geanta@freescale.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: talitos - fix icv management on outbound direction  (Horia Geanta, 2012-08-28; 1 file, -1/+1)

  For IPsec encryption, in the case when:
  - the input buffer is fragmented (edesc->src_nents > 0)
  - the output buffer is not fragmented (edesc->dst_nents = 0)
  the ICV is not output in the link table, but after the encrypted payload.

  Copying the ICV must be avoided in this case; consequently the condition edesc->dma_len > 0 must be more specific, i.e. must depend on the type of the output buffer - fragmented or not.

  Testing was performed by modifying testmgr to support src != dst, since currently native kernel IPsec does in-place encryption (src == dst).

  Signed-off-by: Horia Geanta <horia.geanta@freescale.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: talitos - consolidate common cra_* assignments  (Kim Phillips, 2012-08-28; 1 file, -146/+17)

  The entry points and geniv definitions for all aead, ablkcipher, and hash algorithms are all common; move them to a single assignment in talitos_alg_alloc(). This assumes it's ok to assign a setkey() on non-hmac algs.

  Signed-off-by: Kim Phillips <kim.phillips@freescale.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: talitos - consolidate cra_type assignments  (Kim Phillips, 2012-08-28; 1 file, -26/+3)

  Lighten driver_algs[] by moving the cra_type assignments to talitos_alg_alloc().

  Signed-off-by: Kim Phillips <kim.phillips@freescale.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: atmel - Remove possible typo error  (Tushar Behera, 2012-08-20; 1 file, -1/+1)

  Commit bd3c7b5c2aba ("crypto: atmel - add Atmel AES driver") possibly has a typo error of adding an extra CONFIG_ prefix.

  CC: Nicolas Royer <nicolas@eukrea.com>
  Signed-off-by: Tushar Behera <tushar.behera@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* drivers/char/hw_random/octeon-rng.c: drop frees of devm allocated data  (Julia Lawall, 2012-08-20; 1 file, -12/+5)

  devm_kfree and devm_iounmap should not have to be explicitly used.

  The semantic patch that fixes this problem is as follows (http://coccinelle.lip6.fr/):

  // <smpl>
  @@
  expression x,d;
  @@
  x = devm_kzalloc(...)
  ...
  ?-devm_kfree(d,x);
  // </smpl>

  Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

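  As a rough illustration of the kind of change such a semantic patch produces (the probe function, struct and init helper below are made-up names, not octeon-rng's actual code): resources obtained with devm_* are released automatically when the device is unbound, so explicit frees in error paths simply go away.

    #include <linux/platform_device.h>
    #include <linux/slab.h>

    struct example_priv { void __iomem *regs; };            /* illustrative only */
    static int example_hw_init(struct example_priv *p) { return 0; }

    static int example_probe(struct platform_device *pdev)
    {
        struct example_priv *priv;

        priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
        if (!priv)
            return -ENOMEM;

        if (example_hw_init(priv)) {
            /* before the cleanup: devm_kfree(&pdev->dev, priv); */
            return -ENODEV;    /* the devm core frees priv automatically */
        }

        return 0;
    }
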
* crypto: nx - Remove virt_to_abs() usage in nx-842.c  (Michael Ellerman, 2012-08-20; 1 file, -16/+18)

  virt_to_abs() is just a wrapper around __pa(), so use __pa() directly.

  We should be including <asm/page.h> to get __pa(). abs_addr.h will be removed shortly, so drop that. We were getting of.h via abs_addr.h, so we need to include that directly. Having done all that, clean up the ordering of the includes.

  Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
  Acked-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

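  For reference, a minimal before/after sketch of the substitution described above; the helper and variable names here are illustrative, not code from nx-842.c.

    #include <asm/page.h>        /* provides __pa() */

    static unsigned long buffer_phys_addr(void *buf)
    {
        /* was: return virt_to_abs(buf);  -- virt_to_abs() only wrapped __pa() */
        return __pa(buf);
    }
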
* crypto: aesni_intel - improve lrw and xts performance by utilizing parallel AES-NI hardware pipelines  (Jussi Kivilinna, 2012-08-20; 2 files, -35/+220)

  Use parallel LRW and XTS encryption facilities to better utilize AES-NI hardware pipelines and gain extra performance.

  Tcrypt benchmark results (async), old vs new ratios:

  Intel Core i5-2450M CPU (fam: 6, model: 42, step: 7)

  aes:128bit lrw:256bit xts:256bit
  size    lrw-enc  lrw-dec  xts-enc  xts-dec
  16B     0.99x    1.00x    1.22x    1.19x
  64B     1.38x    1.50x    1.58x    1.61x
  256B    2.04x    2.02x    2.27x    2.29x
  1024B   2.56x    2.54x    2.89x    2.92x
  8192B   2.85x    2.99x    3.40x    3.23x

  aes:192bit lrw:320bit xts:384bit
  size    lrw-enc  lrw-dec  xts-enc  xts-dec
  16B     1.08x    1.08x    1.16x    1.17x
  64B     1.48x    1.54x    1.59x    1.65x
  256B    2.18x    2.17x    2.29x    2.28x
  1024B   2.67x    2.67x    2.87x    3.05x
  8192B   2.93x    2.84x    3.28x    3.33x

  aes:256bit lrw:384bit xts:512bit
  size    lrw-enc  lrw-dec  xts-enc  xts-dec
  16B     1.07x    1.07x    1.18x    1.19x
  64B     1.56x    1.56x    1.70x    1.71x
  256B    2.22x    2.24x    2.46x    2.46x
  1024B   2.76x    2.77x    3.13x    3.05x
  8192B   2.99x    3.05x    3.40x    3.30x

  Cc: Huang Ying <ying.huang@intel.com>
  Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
  Reviewed-by: Kim Phillips <kim.phillips@freescale.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* powerpc/crypto: add 842 crypto driver  (Seth Jennings, 2012-08-01; 3 files, -0/+193)

  This patch adds the 842 cryptographic API driver that submits compression requests to the 842 hardware compression accelerator driver (nx-compress).

  If the hardware accelerator goes offline for any reason (dynamic disable, migration, etc...), this driver will use LZO as a software failover for all future compression requests. For decompression requests, the 842 hardware driver contains a software implementation of the 842 decompressor to support the decompression of data that was compressed before the accelerator went offline.

  Signed-off-by: Robert Jennings <rcj@linux.vnet.ibm.com>
  Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

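  A minimal sketch of the failover idea described above. nx842_hw_available() and nx842_hw_compress() are hypothetical stand-ins for the hardware path, not the driver's real interface; only lzo1x_1_compress() is the actual kernel LZO API.

    #include <linux/lzo.h>
    #include <linux/types.h>

    /* hypothetical hardware-path helpers, for illustration only */
    bool nx842_hw_available(void);
    int nx842_hw_compress(const u8 *in, size_t in_len, u8 *out, size_t *out_len);

    static int compress_842_or_lzo(const u8 *in, size_t in_len,
                                   u8 *out, size_t *out_len, void *lzo_wrkmem)
    {
        if (nx842_hw_available())
            return nx842_hw_compress(in, in_len, out, out_len);

        /* accelerator offline: fail over to software LZO */
        return lzo1x_1_compress(in, in_len, out, out_len, lzo_wrkmem);
    }
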
* powerpc/crypto: add 842 hardware compression driver  (Seth Jennings, 2012-08-01; 5 files, -0/+1644)

  This patch adds the driver for interacting with the 842 compression accelerator on IBM Power7+ systems. The device is a child of the Platform Facilities Option (PFO) and shows up as a child of the IBM VIO bus.

  The compression/decompression API takes the same arguments as existing compression methods like lzo and deflate. The 842 hardware operates on 4K hardware pages and the driver breaks up input on 4K boundaries to submit it to the hardware accelerator.

  Signed-off-by: Robert Jennings <rcj@linux.vnet.ibm.com>
  Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

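  The 4K splitting amounts to a loop like the following sketch; nx842_submit_chunk() is a hypothetical placeholder for whatever actually hands a piece of the buffer to the accelerator, and the constant name is made up.

    #include <linux/kernel.h>
    #include <linux/types.h>

    #define NX842_HW_PAGE_SIZE 4096UL        /* illustrative constant name */

    int nx842_submit_chunk(const u8 *buf, size_t len);     /* hypothetical */

    static int submit_in_4k_chunks(const u8 *in, size_t len)
    {
        while (len) {
            /* distance to the next 4K boundary, capped by what is left */
            size_t chunk = min_t(size_t, len, NX842_HW_PAGE_SIZE -
                                 ((unsigned long)in & (NX842_HW_PAGE_SIZE - 1)));
            int ret = nx842_submit_chunk(in, chunk);

            if (ret)
                return ret;
            in += chunk;
            len -= chunk;
        }
        return 0;
    }
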
* powerpc/crypto: add compression support to arch vec  (Seth Jennings, 2012-08-01; 1 file, -2/+2)

  This patch enables compression engine support in the architecture vector. This causes the Power hypervisor to allow access to the nx compression accelerator.

  Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* powerpc/crypto: rework Kconfig  (Seth Jennings, 2012-08-01; 5 files, -16/+29)

  This patch creates a new submenu for the NX cryptographic hardware accelerator and breaks the NX options into their own Kconfig file under drivers/crypto/nx/Kconfig. This will permit additional NX functionality to be easily and more cleanly added in the future without touching drivers/crypto/Makefile|Kconfig.

  Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: caam - set descriptor sharing type to SERIAL  (Kim Phillips, 2012-08-01; 3 files, -7/+7)

  SHARE_WAIT, whilst more optimal for association-less crypto, has the ability to start thrashing the CCB descriptor/key caches, given high levels of traffic across multiple security associations (and thus keys).

  Switch to using the SERIAL sharing type, which prefers the last used CCB for the SA. On a 2-DECO platform such as the P3041, this can improve performance by about 3.7%.

  Signed-off-by: Kim Phillips <kim.phillips@freescale.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: caam - add backward compatible string sec4.0  (Shengzhou Liu, 2012-08-01; 2 files, -6/+15)

  Some device trees of previous versions carried the string "fsl,sec4.0". To be backward compatible with those device trees, we first check "fsl,sec-v4.0"; if that fails, we then check for "fsl,sec4.0".

  Signed-off-by: Shengzhou Liu <Shengzhou.Liu@freescale.com>
  [extended to include new hash and rng code, which was omitted from the previous version of this patch during a rebase of the SDK version]
  Signed-off-by: Kim Phillips <kim.phillips@freescale.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

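  A minimal sketch of the fallback lookup described above; how the real driver structures this check is not shown here.

    #include <linux/of.h>

    static struct device_node *find_sec4_node(void)
    {
        struct device_node *np;

        np = of_find_compatible_node(NULL, NULL, "fsl,sec-v4.0");
        if (!np)    /* older device trees used the "fsl,sec4.0" spelling */
            np = of_find_compatible_node(NULL, NULL, "fsl,sec4.0");

        return np;
    }
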
* crypto: caam - fix possible deadlock condition  (Kim Phillips, 2012-08-01; 1 file, -5/+5)

  commit "crypto: caam - use non-irq versions of spinlocks for job rings" made two bad assumptions:

  (a) The caam_jr_enqueue lock isn't used in softirq context. Not true: jr_enqueue can be interrupted by an incoming net interrupt and the received packet may be sent for encryption, via caam_jr_enqueue in softirq context, thereby inducing a deadlock. This is evidenced when running netperf over an IPSec tunnel between two P4080's, with spinlock debugging turned on:

  [ 892.092569] BUG: spinlock lockup on CPU#7, netperf/10634, e8bf5f70
  [ 892.098747] Call Trace:
  [ 892.101197] [eff9fc10] [c00084c0] show_stack+0x48/0x15c (unreliable)
  [ 892.107563] [eff9fc50] [c0239c2c] do_raw_spin_lock+0x16c/0x174
  [ 892.113399] [eff9fc80] [c0596494] _raw_spin_lock+0x3c/0x50
  [ 892.118889] [eff9fc90] [c0445e74] caam_jr_enqueue+0xf8/0x250
  [ 892.124550] [eff9fcd0] [c044a644] aead_decrypt+0x6c/0xc8
  [ 892.129625] BUG: spinlock lockup on CPU#5, swapper/5/0, e8bf5f70
  [ 892.129629] Call Trace:
  [ 892.129637] [effa7c10] [c00084c0] show_stack+0x48/0x15c (unreliable)
  [ 892.129645] [effa7c50] [c0239c2c] do_raw_spin_lock+0x16c/0x174
  [ 892.129652] [effa7c80] [c0596494] _raw_spin_lock+0x3c/0x50
  [ 892.129660] [effa7c90] [c0445e74] caam_jr_enqueue+0xf8/0x250
  [ 892.129666] [effa7cd0] [c044a644] aead_decrypt+0x6c/0xc8
  [ 892.129674] [effa7d00] [c0509724] esp_input+0x178/0x334
  [ 892.129681] [effa7d50] [c0519778] xfrm_input+0x77c/0x818
  [ 892.129688] [effa7da0] [c050e344] xfrm4_rcv_encap+0x20/0x30
  [ 892.129697] [effa7db0] [c04b90c8] ip_local_deliver+0x190/0x408
  [ 892.129703] [effa7de0] [c04b966c] ip_rcv+0x32c/0x898
  [ 892.129709] [effa7e10] [c048b998] __netif_receive_skb+0x27c/0x4e8
  [ 892.129715] [effa7e80] [c048d744] netif_receive_skb+0x4c/0x13c
  [ 892.129726] [effa7eb0] [c03c28ac] _dpa_rx+0x1a8/0x354
  [ 892.129732] [effa7ef0] [c03c2ac4] ingress_rx_default_dqrr+0x6c/0x108
  [ 892.129742] [effa7f10] [c0467ae0] qman_poll_dqrr+0x170/0x1d4
  [ 892.129748] [effa7f40] [c03c153c] dpaa_eth_poll+0x20/0x94
  [ 892.129754] [effa7f60] [c048dbd0] net_rx_action+0x13c/0x1f4
  [ 892.129763] [effa7fa0] [c003d1b8] __do_softirq+0x108/0x1b0
  [ 892.129769] [effa7ff0] [c000df58] call_do_softirq+0x14/0x24
  [ 892.129775] [ebacfe70] [c0004868] do_softirq+0xd8/0x104
  [ 892.129780] [ebacfe90] [c003d5a4] irq_exit+0xb8/0xd8
  [ 892.129786] [ebacfea0] [c0004498] do_IRQ+0xa4/0x1b0
  [ 892.129792] [ebacfed0] [c000fad8] ret_from_except+0x0/0x18
  [ 892.129798] [ebacff90] [c0009010] cpu_idle+0x94/0xf0
  [ 892.129804] [ebacffb0] [c059ff88] start_secondary+0x42c/0x430
  [ 892.129809] [ebacfff0] [c0001e28] __secondary_start+0x30/0x84
  [ 892.281474]
  [ 892.282959] [eff9fd00] [c0509724] esp_input+0x178/0x334
  [ 892.288186] [eff9fd50] [c0519778] xfrm_input+0x77c/0x818
  [ 892.293499] [eff9fda0] [c050e344] xfrm4_rcv_encap+0x20/0x30
  [ 892.299074] [eff9fdb0] [c04b90c8] ip_local_deliver+0x190/0x408
  [ 892.304907] [eff9fde0] [c04b966c] ip_rcv+0x32c/0x898
  [ 892.309872] [eff9fe10] [c048b998] __netif_receive_skb+0x27c/0x4e8
  [ 892.315966] [eff9fe80] [c048d744] netif_receive_skb+0x4c/0x13c
  [ 892.321803] [eff9feb0] [c03c28ac] _dpa_rx+0x1a8/0x354
  [ 892.326855] [eff9fef0] [c03c2ac4] ingress_rx_default_dqrr+0x6c/0x108
  [ 892.333212] [eff9ff10] [c0467ae0] qman_poll_dqrr+0x170/0x1d4
  [ 892.338872] [eff9ff40] [c03c153c] dpaa_eth_poll+0x20/0x94
  [ 892.344271] [eff9ff60] [c048dbd0] net_rx_action+0x13c/0x1f4
  [ 892.349846] [eff9ffa0] [c003d1b8] __do_softirq+0x108/0x1b0
  [ 892.355338] [eff9fff0] [c000df58] call_do_softirq+0x14/0x24
  [ 892.360910] [e7169950] [c0004868] do_softirq+0xd8/0x104
  [ 892.366135] [e7169970] [c003d5a4] irq_exit+0xb8/0xd8
  [ 892.371101] [e7169980] [c0004498] do_IRQ+0xa4/0x1b0
  [ 892.375979] [e71699b0] [c000fad8] ret_from_except+0x0/0x18
  [ 892.381466] [e7169a70] [c0445e74] caam_jr_enqueue+0xf8/0x250
  [ 892.387127] [e7169ab0] [c044ad4c] aead_givencrypt+0x6ac/0xa70
  [ 892.392873] [e7169b20] [c050a0b8] esp_output+0x2b4/0x570
  [ 892.398186] [e7169b80] [c0519b9c] xfrm_output_resume+0x248/0x7c0
  [ 892.404194] [e7169bb0] [c050e89c] xfrm4_output_finish+0x18/0x28
  [ 892.410113] [e7169bc0] [c050e8f4] xfrm4_output+0x48/0x98
  [ 892.415427] [e7169bd0] [c04beac0] ip_local_out+0x48/0x98
  [ 892.420740] [e7169be0] [c04bec7c] ip_queue_xmit+0x16c/0x490
  [ 892.426314] [e7169c10] [c04d6128] tcp_transmit_skb+0x35c/0x9a4
  [ 892.432147] [e7169c70] [c04d6f98] tcp_write_xmit+0x200/0xa04
  [ 892.437808] [e7169cc0] [c04c8ccc] tcp_sendmsg+0x994/0xcec
  [ 892.443213] [e7169d40] [c04eebfc] inet_sendmsg+0xd0/0x164
  [ 892.448617] [e7169d70] [c04792f8] sock_sendmsg+0x8c/0xbc
  [ 892.453931] [e7169e40] [c047aecc] sys_sendto+0xc0/0xfc
  [ 892.459069] [e7169f10] [c047b934] sys_socketcall+0x110/0x25c
  [ 892.464729] [e7169f40] [c000f480] ret_from_syscall+0x0/0x3c

  (b) Since the caam_jr_dequeue lock is only used in bh context, then semantically it should use _bh spin_lock types. spin_lock_bh semantics are to disable back-halves, and it is used when a lock is shared between softirq (bh) context and process and/or h/w IRQ context. Since the lock is only used within softirq context, and this tasklet is atomic, there is no need to do the additional work to disable back halves.

  This patch adds back-half disabling protection to caam_jr_enqueue spin_locks to fix (a), and drops it from caam_jr_dequeue to fix (b).

  Signed-off-by: Kim Phillips <kim.phillips@freescale.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

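  A minimal sketch of the locking pattern the fix settles on; the struct and field names below are illustrative, not the caam driver's actual code.

    #include <linux/spinlock.h>

    struct jr_state {
        spinlock_t inplock;     /* taken from process and softirq context */
        spinlock_t outlock;     /* taken only by the dequeue tasklet */
    };

    static void jr_enqueue_example(struct jr_state *jr)
    {
        /* callers run in process context but can race with softirq
         * callers, so bottom halves must be disabled while held */
        spin_lock_bh(&jr->inplock);
        /* ... add job to the input ring ... */
        spin_unlock_bh(&jr->inplock);
    }

    static void jr_dequeue_tasklet_example(struct jr_state *jr)
    {
        /* the tasklet runs only in softirq context, so a plain
         * spin_lock suffices -- no need to disable bottom halves */
        spin_lock(&jr->outlock);
        /* ... reap completed jobs from the output ring ... */
        spin_unlock(&jr->outlock);
    }
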
* crypto: cast6 - add x86_64/avx assembler implementation  (Johannes Goetzfried, 2012-08-01; 5 files, -0/+1062)

  This patch adds an x86_64/avx assembler implementation of the Cast6 block cipher. The implementation processes eight blocks in parallel (two 4-block chunk AVX operations). The table-lookups are done in general-purpose registers. For small blocksizes the functions from the generic module are called. A good performance increase is provided for blocksizes greater than or equal to 128B.

  Patch has been tested with tcrypt and automated filesystem tests.

  Tcrypt benchmark results:

  Intel Core i5-2500 CPU (fam:6, model:42, step:7)
  cast6-avx-x86_64 vs. cast6-generic

  128bit key: (lrw:256bit) (xts:256bit)
  size    ecb-enc  ecb-dec  cbc-enc  cbc-dec  ctr-enc  ctr-dec  lrw-enc  lrw-dec  xts-enc  xts-dec
  16B     0.97x    1.00x    1.01x    1.01x    0.99x    0.97x    0.98x    1.01x    0.96x    0.98x
  64B     0.98x    0.99x    1.02x    1.01x    0.99x    1.00x    1.01x    0.99x    1.00x    0.99x
  256B    1.77x    1.84x    0.99x    1.85x    1.77x    1.77x    1.70x    1.74x    1.69x    1.72x
  1024B   1.93x    1.95x    0.99x    1.96x    1.93x    1.93x    1.84x    1.85x    1.89x    1.87x
  8192B   1.91x    1.95x    0.99x    1.97x    1.95x    1.91x    1.86x    1.87x    1.93x    1.90x

  256bit key: (lrw:384bit) (xts:512bit)
  size    ecb-enc  ecb-dec  cbc-enc  cbc-dec  ctr-enc  ctr-dec  lrw-enc  lrw-dec  xts-enc  xts-dec
  16B     0.97x    0.99x    1.02x    1.01x    0.98x    0.99x    1.00x    1.00x    0.98x    0.98x
  64B     0.98x    0.99x    1.01x    1.00x    1.00x    1.00x    1.01x    1.01x    0.97x    1.00x
  256B    1.77x    1.83x    1.00x    1.86x    1.79x    1.78x    1.70x    1.76x    1.71x    1.69x
  1024B   1.92x    1.95x    0.99x    1.96x    1.93x    1.93x    1.83x    1.86x    1.89x    1.87x
  8192B   1.94x    1.95x    0.99x    1.97x    1.95x    1.95x    1.87x    1.87x    1.93x    1.91x

  Signed-off-by: Johannes Goetzfried <Johannes.Goetzfried@informatik.stud.uni-erlangen.de>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: testmgr - add larger cast6 testvectors  (Johannes Goetzfried, 2012-08-01; 3 files, -2/+1520)

  New ECB, CBC, CTR, LRW and XTS testvectors for cast6. We need larger testvectors to check parallel code paths in the optimized implementation. Tests have also been added to the tcrypt module.

  Signed-off-by: Johannes Goetzfried <Johannes.Goetzfried@informatik.stud.uni-erlangen.de>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: cast6 - prepare generic module for optimized implementations  (Johannes Goetzfried, 2012-08-01; 3 files, -24/+67)

  Rename the cast6 module to cast6_generic to allow autoloading of optimized implementations. Generic functions and s-boxes are exported to be able to use them within optimized implementations.

  Signed-off-by: Johannes Goetzfried <Johannes.Goetzfried@informatik.stud.uni-erlangen.de>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: cast5 - add x86_64/avx assembler implementation  (Johannes Goetzfried, 2012-08-01; 5 files, -0/+928)

  This patch adds an x86_64/avx assembler implementation of the Cast5 block cipher. The implementation processes sixteen blocks in parallel (four 4-block chunk AVX operations). The table-lookups are done in general-purpose registers. For small blocksizes the functions from the generic module are called. A good performance increase is provided for blocksizes greater than or equal to 128B.

  Patch has been tested with tcrypt and automated filesystem tests.

  Tcrypt benchmark results:

  Intel Core i5-2500 CPU (fam:6, model:42, step:7)
  cast5-avx-x86_64 vs. cast5-generic

  64bit key:
  size    ecb-enc  ecb-dec  cbc-enc  cbc-dec  ctr-enc  ctr-dec
  16B     0.99x    0.99x    1.00x    1.00x    1.02x    1.01x
  64B     1.00x    1.00x    0.98x    1.00x    1.01x    1.02x
  256B    2.03x    2.01x    0.95x    2.11x    2.12x    2.13x
  1024B   2.30x    2.24x    0.95x    2.29x    2.35x    2.35x
  8192B   2.31x    2.27x    0.95x    2.31x    2.39x    2.39x

  128bit key:
  size    ecb-enc  ecb-dec  cbc-enc  cbc-dec  ctr-enc  ctr-dec
  16B     0.99x    0.99x    1.00x    1.00x    1.01x    1.01x
  64B     1.00x    1.00x    0.98x    1.01x    1.02x    1.01x
  256B    2.17x    2.13x    0.96x    2.19x    2.19x    2.19x
  1024B   2.29x    2.32x    0.95x    2.34x    2.37x    2.38x
  8192B   2.35x    2.32x    0.95x    2.35x    2.39x    2.39x

  Signed-off-by: Johannes Goetzfried <Johannes.Goetzfried@informatik.stud.uni-erlangen.de>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: testmgr - add larger cast5 testvectors  (Johannes Goetzfried, 2012-08-01; 4 files, -2/+871)

  New ECB, CBC and CTR testvectors for cast5. We need larger testvectors to check parallel code paths in the optimized implementation. Tests have also been added to the tcrypt module.

  Signed-off-by: Johannes Goetzfried <Johannes.Goetzfried@informatik.stud.uni-erlangen.de>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: cast5 - prepare generic module for optimized implementations  (Johannes Goetzfried, 2012-08-01; 3 files, -34/+69)

  Rename the cast5 module to cast5_generic to allow autoloading of optimized implementations. Generic functions and s-boxes are exported to be able to use them within optimized implementations.

  Signed-off-by: Johannes Goetzfried <Johannes.Goetzfried@informatik.stud.uni-erlangen.de>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: arch/s390 - cleanup - remove unneeded cra_list initialization  (Jussi Kivilinna, 2012-08-01; 3 files, -16/+0)

  Initialization of cra_list is currently mixed: most ciphers initialize this field and most shashes do not. Initialization however is not needed at all, since cra_list is initialized/overwritten in __crypto_register_alg() with list_add(). Therefore perform cleanup to remove all unneeded initializations of this field in 'arch/s390/crypto/'.

  Cc: Gerald Schaefer <gerald.schaefer@de.ibm.com>
  Cc: linux-s390@vger.kernel.org
  Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
  Signed-off-by: Jan Glauber <jang@linux.vnet.ibm.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: drivers - remove cra_list initialization  (Jussi Kivilinna, 2012-08-01; 12 files, -21/+0)

  Initialization of cra_list is currently mixed: most ciphers initialize this field and most shashes do not. Initialization however is not needed at all, since cra_list is initialized/overwritten in __crypto_register_alg() with list_add(). Therefore perform cleanup to remove all unneeded initializations of this field in 'drivers/crypto/'.

  Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Cc: linux-geode@lists.infradead.org
  Cc: Michal Ludvig <michal@logix.cz>
  Cc: Dmitry Kasatkin <dmitry.kasatkin@nokia.com>
  Cc: Varun Wadekar <vwadekar@nvidia.com>
  Cc: Eric Bénard <eric@eukrea.com>
  Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
  Acked-by: Kent Yoder <key@linux.vnet.ibm.com>
  Acked-by: Vladimir Zapolskiy <vladimir_zapolskiy@mentor.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: arch/x86 - cleanup - remove unneeded crypto_alg.cra_list initializations  (Jussi Kivilinna, 2012-08-01; 11 files, -54/+1)

  Initialization of cra_list is currently mixed: most ciphers initialize this field and most shashes do not. Initialization however is not needed at all, since cra_list is initialized/overwritten in __crypto_register_alg() with list_add(). Therefore perform cleanup to remove all unneeded initializations of this field in 'arch/x86/crypto/'.

  Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: cleanup - remove unneeded crypto_alg.cra_list initializations  (Jussi Kivilinna, 2012-08-01; 15 files, -15/+0)

  Initialization of cra_list is currently mixed: most ciphers initialize this field and most shashes do not. Initialization however is not needed at all, since cra_list is initialized/overwritten in __crypto_register_alg() with list_add(). Therefore perform cleanup to remove all unneeded initializations of this field in 'crypto/'.

  Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

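  For illustration (not taken from any of the files touched by these patches), this is the kind of initializer such cleanups drop; __crypto_register_alg() re-initializes the field via list_add() anyway.

    #include <linux/crypto.h>

    static struct crypto_alg example_alg = {
        .cra_name        = "example",
        .cra_driver_name = "example-generic",
        .cra_blocksize   = 16,
        /* removed by cleanups like the above -- redundant because
         * __crypto_register_alg() overwrites cra_list with list_add():
         *      .cra_list = LIST_HEAD_INIT(example_alg.cra_list),
         */
    };
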
* crypto: whirlpool - use crypto_[un]register_shashes  (Jussi Kivilinna, 2012-08-01; 1 file, -33/+6)

  Combine all shash algs to be registered and use the new crypto_[un]register_shashes functions. This simplifies init/exit code.

  Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: sha512 - use crypto_[un]register_shashes  (Jussi Kivilinna, 2012-08-01; 1 file, -15/+5)

  Combine all shash algs to be registered and use the new crypto_[un]register_shashes functions. This simplifies init/exit code.

  Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: sha256 - use crypto_[un]register_shashes  (Jussi Kivilinna, 2012-08-01; 1 file, -20/+5)

  Combine all shash algs to be registered and use the new crypto_[un]register_shashes functions. This simplifies init/exit code.

  Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: tiger - use crypto_[un]register_shashes  (Jussi Kivilinna, 2012-08-01; 1 file, -32/+6)

  Combine all shash algs to be registered and use the new crypto_[un]register_shashes functions. This simplifies init/exit code.

  Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: add crypto_[un]register_shashes for [un]registering multiple shash entries at once  (Jussi Kivilinna, 2012-08-01; 2 files, -0/+38)

  Add crypto_[un]register_shashes() to allow simplifying init/exit code of shash crypto modules that register multiple algorithms.

  Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

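  A rough usage sketch for the new pair of helpers; the module and algorithm definitions below are placeholders rather than any particular in-tree module.

    #include <crypto/internal/hash.h>
    #include <linux/module.h>

    static struct shash_alg example_algs[2] = {
        { /* first shash_alg definition elided */ },
        { /* second shash_alg definition elided */ },
    };

    static int __init example_mod_init(void)
    {
        /* register the whole array in one call */
        return crypto_register_shashes(example_algs, ARRAY_SIZE(example_algs));
    }

    static void __exit example_mod_fini(void)
    {
        crypto_unregister_shashes(example_algs, ARRAY_SIZE(example_algs));
    }

    module_init(example_mod_init);
    module_exit(example_mod_fini);
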
* crypto: ansi_cprng - use crypto_[un]register_algs  (Jussi Kivilinna, 2012-08-01; 1 file, -40/+23)

  Combine all crypto_alg to be registered and use the new crypto_[un]register_algs functions. This simplifies init/exit code.

  Cc: Neil Horman <nhorman@tuxdriver.com>
  Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: serpent - use crypto_[un]register_algs  (Jussi Kivilinna, 2012-08-01; 1 file, -34/+19)

  Combine all crypto_alg to be registered and use the new crypto_[un]register_algs functions. This simplifies init/exit code.

  Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: des - use crypto_[un]register_algs  (Jussi Kivilinna, 2012-08-01; 1 file, -20/+5)

  Combine all crypto_alg to be registered and use the new crypto_[un]register_algs functions. This simplifies init/exit code.

  Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: crypto_null - use crypto_[un]register_algs  (Jussi Kivilinna, 2012-08-01; 1 file, -39/+18)

  Combine all crypto_alg to be registered and use the new crypto_[un]register_algs functions. This simplifies init/exit code.

  Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: tea - use crypto_[un]register_algs  (Jussi Kivilinna, 2012-08-01; 1 file, -35/+6)

  Combine all crypto_alg to be registered and use the new crypto_[un]register_algs functions. This simplifies init/exit code.

  Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* Merge tag 'irqdomain-for-linus' of git://git.secretlab.ca/git/linux-2.6  (Linus Torvalds, 2012-07-31; 7 files, -169/+248)
|\
  Pull irqdomain changes from Grant Likely:
   "Round of refactoring and enhancements to irq_domain infrastructure.

    This series starts the process of simplifying irqdomain. The ultimate goal is to merge LEGACY, LINEAR and TREE mappings into a single system, but had to back off from that after some last minute bugs. Instead it mainly reorganizes the code and ensures that the reverse map gets populated when the irq is mapped instead of the first time it is looked up. Merging of the irq_domain types is deferred to v3.7.

    In other news, this series adds helpers for creating static mappings on a linear or tree mapping."

  * tag 'irqdomain-for-linus' of git://git.secretlab.ca/git/linux-2.6:
    irqdomain: Improve diagnostics when a domain mapping fails
    irqdomain: eliminate slow-path revmap lookups
    irqdomain: Fix irq_create_direct_mapping() to test irq_domain type.
    irqdomain: Eliminate dedicated radix lookup functions
    irqdomain: Support for static IRQ mapping and association.
    irqdomain: Always update revmap when setting up a virq
    irqdomain: Split disassociating code into separate function
    irq_domain: correct a minor wrong comment for linear revmap
    irq_domain: Standardise legacy/linear domain selection
    irqdomain: Make ops->map hook optional
    irqdomain: Remove unnecessary test for IRQ_DOMAIN_MAP_LEGACY
    irqdomain: Simple NUMA awareness.
    devicetree: add helper inline for retrieving a node's full name

| * irqdomain: Improve diagnostics when a domain mapping fails  (Mark Brown, 2012-07-24; 1 file, -6/+11)

  When the map operation fails log the error code we get and add a WARN_ON() so we get a backtrace (which should help work out which interrupt is the source of the issue).

  Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
  Signed-off-by: Grant Likely <grant.likely@secretlab.ca>

| * irqdomain: eliminate slow-path revmap lookups  (Grant Likely, 2012-07-24; 1 file, -40/+25)

  With the current state of irq_domain, the reverse map is always updated when new IRQs get mapped. This means that the irq_find_mapping() function can be simplified to execute the revmap lookup functions unconditionally. This patch adds lookup functions for the revmaps that don't yet have one and removes the slow path lookup code path.

  v8: Broke out unrelated changes into separate patches. Rebased on Paul's irq association patches.
  v7: Rebased to irqdomain/next for v3.4 and applied before the removal of 'hint'
  v6: Remove the slow path entirely. The only place where the slow path could get called is for a linear mapping if the hwirq number is larger than the linear revmap size. There shouldn't be any interrupt controllers that do that.
  v5: rewrite to not use a ->revmap() callback. It is simpler, smaller, safer and faster to open code each of the revmap lookups directly into irq_find_mapping() via a switch statement.
  v4: Fix build failure on incorrect variable reference.

  Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
  Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Cc: Thomas Gleixner <tglx@linutronix.de>
  Cc: Milton Miller <miltonm@bga.com>
  Cc: Paul Mundt <lethal@linux-sh.org>
  Cc: Rob Herring <rob.herring@calxeda.com>

| * Merge remote-tracking branch 'origin' into irqdomain/next  (Grant Likely, 2012-07-24; 4548 files, -104854/+181547)
| |\

| * | irqdomain: Fix irq_create_direct_mapping() to test irq_domain type.  (Grant Likely, 2012-07-11; 1 file, -2/+2)

  irq_create_direct_mapping can only be used with the NOMAP type. Make the function test to ensure it is passed the correct type of irq_domain.

  Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
  Cc: Paul Mundt <lethal@linux-sh.org>
  Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Cc: Thomas Gleixner <tglx@linutronix.de>
  Cc: Rob Herring <rob.herring@calxeda.com>

| * | irqdomain: Eliminate dedicated radix lookup functions  (Grant Likely, 2012-07-11; 4 files, -65/+3)

  In preparation to remove the slow revmap path, eliminate the public radix revmap lookup functions. This simplifies the code and makes the slowpath removal patch a lot simpler.

  Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
  Cc: Paul Mundt <lethal@linux-sh.org>
  Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Cc: Thomas Gleixner <tglx@linutronix.de>
  Cc: Rob Herring <rob.herring@calxeda.com>

| * | irqdomain: Support for static IRQ mapping and association.  (Grant Likely, 2012-07-11; 2 files, -25/+107)

  This adds a new strict mapping API for supporting creation of linux IRQs at existing positions within the domain. The new routines are as follows:

  For dynamic allocation and insertion to specified ranges:
    - irq_create_identity_mapping()
    - irq_create_strict_mappings()
  These will allocate and associate a range of linux IRQs at the specified location. This can be used by controllers that have their own static linux IRQ definitions to map a hwirq range to, as well as for platforms that wish to establish a 1:1 identity mapping between linux and hwirq space.

  For insertion to specified ranges by platforms that do their own irq_desc management:
    - irq_domain_associate()
    - irq_domain_associate_many()
  These in turn call back in to the domain's ->map() routine, for further processing by the platform. Disassociation of IRQs gets handled through irq_dispose_mapping() as normal.

  With these in place it should be possible to begin migration of legacy IRQ domains to linear ones, without requiring special handling for static vs dynamic IRQ definitions in DT vs non-DT paths. This also makes it possible for domains with static mappings to adopt whichever tree model best fits their needs, rather than simply restricting them to linear revmaps.

  Signed-off-by: Paul Mundt <lethal@linux-sh.org>
  [grant.likely: Reorganized irq_domain_associate{,_many} to have all logic in one place]
  [grant.likely: Add error checking for unallocated irq_descs at associate time]
  Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
  Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Cc: Thomas Gleixner <tglx@linutronix.de>
  Cc: Rob Herring <rob.herring@calxeda.com>

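  A minimal usage sketch of the strict-mapping entry points; domain setup and the irq numbers are made up for illustration, and the Linux IRQ descriptors for the target range are assumed to exist already.

    #include <linux/irqdomain.h>

    /* assumes 'domain' was created elsewhere, e.g. with irq_domain_add_linear() */
    static int reserve_static_irqs(struct irq_domain *domain)
    {
        int ret;

        /* map hwirqs 32..47 onto pre-allocated Linux IRQs 16..31 */
        ret = irq_create_strict_mappings(domain, 16, 32, 16);
        if (ret)
            return ret;

        /* 1:1 identity mapping for a single hwirq */
        return irq_create_identity_mapping(domain, 7);
    }
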
| * | irqdomain: Always update revmap when setting up a virq  (Grant Likely, 2012-07-11; 2 files, -3/+12)

  At irq_setup_virq() time all of the data needed to update the reverse map is available, but the current code ignores it and relies upon the slow path to insert revmap records. This patch adds revmap updating to the setup path so the slow path will no longer be necessary.

  Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
  Cc: Paul Mundt <lethal@linux-sh.org>
  Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Cc: Thomas Gleixner <tglx@linutronix.de>
  Cc: Rob Herring <rob.herring@calxeda.com>

| * | irqdomain: Split disassociating code into separate function  (Grant Likely, 2012-07-11; 1 file, -28/+47)

  This patch moves the irq disassociation code out into a separate function in preparation to extend irq_setup_virq to handle multiple irqs and rename it for use by interrupt controller drivers. The new function will be used by irq_setup_virq() in its error path.

  Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
  Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Cc: Paul Mundt <lethal@linux-sh.org>
  Cc: Thomas Gleixner <tglx@linutronix.de>
  Cc: Rob Herring <rob.herring@calxeda.com>

| * | Merge tag 'v3.5-rc6' into irqdomain/next  (Grant Likely, 2012-07-11; 995 files, -5037/+9700)
| |\ \

  Linux 3.5-rc6

| * | | irq_domain: correct a minor wrong comment for linear revmap  (Dong Aisheng, 2012-07-11; 1 file, -1/+1)

  The revmap type should be linear for the irq_domain_add_linear function.

  Signed-off-by: Dong Aisheng <dong.aisheng@linaro.org>
  Signed-off-by: Grant Likely <grant.likely@secretlab.ca>

| * | | irq_domain: Standardise legacy/linear domain selection  (Mark Brown, 2012-07-11; 3 files, -0/+40)

  A large proportion of interrupt controllers that support legacy mappings do so because non-DT systems need to use fixed IRQ numbers when registering devices via buses, but can otherwise use a linear mapping. The interrupt controller itself typically is not affected by the mapping used, and best practice is to use a linear mapping where possible, so drivers frequently select at runtime depending on whether a legacy range has been allocated to them.

  Standardise this behaviour by providing irq_domain_register_simple(), which will allocate a linear mapping unless a positive first_irq is provided, in which case it will fall back to a legacy mapping. This helps make best practice for irq_domain adoption clearer.

  Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
  Signed-off-by: Grant Likely <grant.likely@secretlab.ca>

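  A sketch of the selection logic this helper standardises, written with the pre-existing constructors rather than the new helper itself (the ops and data below are placeholders):

    #include <linux/irqdomain.h>

    static const struct irq_domain_ops example_ops;      /* placeholder ops */

    /* What drivers used to open-code: pick a legacy domain when a fixed IRQ
     * base was handed to us (non-DT platform data), otherwise go linear. */
    static struct irq_domain *example_add_domain(struct device_node *np,
                                                 int size, int first_irq)
    {
        if (first_irq > 0)
            return irq_domain_add_legacy(np, size, first_irq, 0,
                                         &example_ops, NULL);

        return irq_domain_add_linear(np, size, &example_ops, NULL);
    }
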