A process core dump contains the information needed to debug a crash. The dump can be larger than the allowed core file size limit; in that case the data is truncated, and the resulting core file is incomplete, so a debugger (gdb) cannot fully process it.
A core dump is a snapshot of the process's memory, so its actual size depends on how much memory the process has allocated. If the process has allocated a lot of memory, the required core file size can exceed the limit enforced by the OS, and a truncated core dump is generated.
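To see how the two sizes compare, the current limit and a process's memory footprint can both be inspected from the shell. This is a minimal sketch; using the shell's own PID via $$ is just for illustration:

```shell
# Current soft limit on core file size (0 means no core is written)
ulimit -c

# A core dump is roughly bounded by the process's virtual memory
# size, so VmSize from /proc gives an upper estimate. $$ (this
# shell's own PID) is used here only as an example process.
grep VmSize /proc/$$/status
```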
On Linux, the system limit for core dumps can be configured as unlimited. In the C shell, set the maximum allowable core file size with the limit command (see the limit(1) man page); in the Bourne and Korn shells, use the ulimit command (see the ulimit(1) man page). After the change, ulimit -a output looks like the example below (see the "core file size" line).
root@ubox:/home/ec/project1# ulimit -a
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 128077
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 128077
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
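The per-shell commands described above can be summarized as follows (a sketch for a POSIX shell session; the csh form is shown as a comment):

```shell
# Bourne/Korn/bash shells: raise the soft limit for this session
ulimit -c unlimited

# C shell (csh/tcsh) equivalent, shown as a comment since this
# snippet targets a POSIX shell:
#   limit coredumpsize unlimited

# Confirm the new limit; this prints "unlimited" when the hard
# limit permits it
ulimit -c
```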
The commands below enable unlimited core dump generation into /var/crash (writing to core_pattern requires root):
ulimit -c unlimited
echo "/var/crash/core.%e.%p.%s" > /proc/sys/kernel/core_pattern
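In the core_pattern above, %e, %p and %s expand to the executable name, PID and signal number. Writing to /proc only lasts until reboot; a sysctl drop-in file makes the setting persistent. This is a sketch run as root, and the file name 60-core.conf is an assumption (any *.conf file under /etc/sysctl.d/ works):

```shell
# The target directory must exist before the kernel can write
# cores into it.
mkdir -p /var/crash

# /etc/sysctl.d/60-core.conf is an assumed file name.
echo 'kernel.core_pattern=/var/crash/core.%e.%p.%s' > /etc/sysctl.d/60-core.conf

# Apply the setting immediately without rebooting
sysctl -p /etc/sysctl.d/60-core.conf
```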