Peter A. Bensch, PhD
Research & Development
CRL Newsletter 5.2 (c) Center for Research in Language, UCSD, March 1991
1. Introduction
2. The Connectionist Revolution
3. Lexical Category Structure
4. Type-Token Distinctions
5. Transformations
6. Grammatical Structure
7. Conclusion, Notes & References
MOL3: 3rd Meeting on Mathematics of Language
MOL3-05: Our Word-Categorization Model
MOL3-09: Model Details
MOL3-13: Word Abstraction
MOL3-15: Ambiguous Words
MOL3-19: Summary
MOL3-Appendix: Zelig Harris
Occurrence-Based Word Categorization (c) Peter Allan Bensch, 1993
0.0 Cover Sheet, Acknowledgements & Vita
0.1 Abstract
1. An Introduction to Occurrence-Based Processing
1.1 The Bootstrapping Problem
1.2 Harris and Structuralist Word Categorization
1.3 Elman’s Model of Word Categorization
1.4 Occurrence-Based Processing
1.5 Summary and Future Directions
2. Exploring Structure in Time
2.1 Introduction
2.2 Elman’s Neural-Net Model of Word Categorization
2.3 Analysis of the Outputs
2.4 A Different Clustering Technique
2.5 "Occurrence-Based" Word Representation
2.6 A Need for Frequency as well
2.7 Conclusion & Notes
3. Occurrence-Based Word Categorization
3.1 Word Contexts
3.2 Grouping Words
3.3 Word Abstraction
3.4 Iterative Clustering
3.5 Conclusion & Notes
4. Transition to a More "Realistic" Corpus
4.1 Issues with a "Realistic" Corpus
4.2 The LOB Corpus
4.3 Analyzing LOB Corpus Tags
4.4 Conclusion & Notes
5. Grouping Words from the LOB Corpus
5.1 The Tagged Corpora
5.2 Iterative Clustering
5.3 Automatically Stopping Iterative Clustering
5.4 Classifying the Remainder of the Lexicon
5.5 How Good is this Classification?
5.6 Expanding the CORE
5.7 How Good is this Extended Classification?
5.8 Word Splitting
5.9 Examining some Actual Word Groupings
5.10 Harris Revisited
5.11 Conclusion
6. Grouping Words from an Untagged Corpus
6.1 The Untagged Corpus
6.2 Grouping the Words of the Untagged Corpus
6.3 How Good is the Word Grouping?
6.4 Examining Actual Word Groups
6.5 Conclusion
7. Summary and Future Research
7.1 What We Have Learned from Occurrence-Based Word Categorization
7.2 Future Research
7.3 Conclusion
9. References
Algorithms
1. Single-Pass Clustering
2. Multi-Pass Clustering (with Abstraction)
3. Iterative Clustering (with Abstraction)
4. Iterative Clustering (for "real" text)
5. Iterative Clustering (used on real text)
6. Expanding the CORE GROUPING
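
As a rough illustration of the first algorithm listed above, here is a minimal single-pass sketch that groups words by the overlap of their (previous word, next word) contexts, in the distributional spirit of Harris that runs through the dissertation. The toy corpus, the Jaccard similarity measure, and the 0.25 threshold are assumptions for illustration only, not the dissertation's actual procedure or parameters.

from collections import defaultdict

def context_profiles(tokens):
    """Map each word to the set of (previous, next) word pairs it occurs between."""
    profiles = defaultdict(set)
    for i in range(1, len(tokens) - 1):
        profiles[tokens[i]].add((tokens[i - 1], tokens[i + 1]))
    return profiles

def similarity(a, b):
    """Jaccard overlap of two context sets (an assumed similarity measure)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def single_pass_cluster(profiles, threshold=0.25):
    """Assign each word to the first cluster whose merged context profile
    is similar enough; otherwise start a new cluster (single pass)."""
    clusters = []  # list of (member_words, merged_context_set)
    for word, ctx in profiles.items():
        for members, merged in clusters:
            if similarity(ctx, merged) >= threshold:
                members.append(word)
                merged |= ctx
                break
        else:
            clusters.append(([word], set(ctx)))
    return [members for members, _ in clusters]

tokens = ("the cat chased the dog . the dog chased the cat . "
          "a cat saw a dog .").split()
for group in single_pass_cluster(context_profiles(tokens)):
    print(group)

On this toy corpus the nouns "cat" and "dog" fall into one group because they share contexts such as (the, chased) and (the, .); the multi-pass and iterative variants listed above refine this basic idea with abstraction over the emerging groups.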
TEPR 2006: Network Monitoring
"Remedies" Summary
Agents "break"
Appendix 1: IPS "False Positives"
Appendix 2: Monitoring: Host Management (Interim Solution)
Appendix 2: Monitoring: Learning "Normal" Patterns
Appendix 2: SHS Network Scans
Appendix 3: Occurrence-Based Processing
Appendix 4: Foreign Hosts
General Observations
How we got started
Introductory Slides
Monitoring Research - Our Goals
Monitoring: Conclusions
Protecting Medical Computer Networks
Protecting Our Medical Computer Networks
Quantifying "common knowledge"
Quantifying Risk
References: Agents & Host Management
SDSU Student Health Services Infrastructure
The Real Problem - False Negatives/Positives
UCSD CSE-264C: Advanced Compiler Projects (c) Peter A. Bensch, 1988
001. Lisp Parser
008. SLR Parser Table Generation
015a. Integrating LISP SLR-GEN with LISP Parser
018. Precedence Parser using Graham-Rhodes Error Recovery: Table Generation
025. Precedence Parser using Graham-Rhodes Error Recovery: LISP Parser
030. Precedence Parser using Graham-Rhodes Error Recovery: Adding Error Recovery
035. Precedence Parser using Graham-Rhodes Error Recovery: Test Run
043. SLR Parser using Graham-Rhodes Error Recovery
046. SLR Parser using Graham-Rhodes Error Recovery: Test Runs
053. Graham-Rhodes Error Recovery Failure
059. Graham-Rhodes Error Recovery Failure: Test Runs
My Web Links