View Issue Details

ID:               0000144
Project:          mercury
Category:         Bug
View Status:      public
Date Submitted:   2010-04-17 22:10
Last Update:      2011-02-08 14:47
Reporter:         pbone
Assigned To:      pbone
Priority:         normal
Severity:         minor
Reproducibility:  sometimes
Status:           closed
Resolution:       fixed
Product Version:
Target Version:
Fixed in Version:
Summary:          0000144: The --max-contexts-per-thread runtime option is not obeyed in asm_fast.par grades.
Description:      --max-contexts-per-thread was used to limit the number of contexts created by the Mercury runtime. It informed the runtime's decision to schedule a spark locally or globally: scheduling a spark globally requires creating a new context, whereas local sparks for the current parallel conjunction can be executed on the current context. Since work stealing was implemented for sparks, all sparks are scheduled locally, and the --max-contexts-per-thread option is no longer obeyed. In some cases, where the left and right operands of the & operator cannot be swapped or are both deeply recursive (I assume doubly recursive code such as quicksort), this can create an overwhelming number of contexts and use too much memory.
Tags:             No tags attached.
Attached Files:
Issue History

Date Modified    | Username | Field               | Change
-----------------|----------|---------------------|-------------------
2010-04-17 22:10 | pbone    | New Issue           |
2010-04-17 22:11 | pbone    | Status              | new => assigned
2010-04-17 22:11 | pbone    | Assigned To         | => pbone
2011-02-08 14:47 | pbone    | Note Added: 0000312 |
2011-02-08 14:47 | pbone    | Status              | assigned => closed
2011-02-08 14:47 | pbone    | Resolution          | open => fixed