posted by user: ultimatebeaver

L@S 2017: Fourth Annual ACM Conference on Learning at Scale


Link: http://learningatscale.acm.org/las2017/
 
When: Apr 20, 2017 - Apr 21, 2017
Where: Massachusetts Institute of Technology, Cambridge, MA, USA
Submission Deadline: Oct 25, 2016
Notification Due: Dec 14, 2016
Final Version Due: Feb 10, 2017
Categories: MOOCs, educational data mining, data mining, learning analytics
 

Call For Papers

The goal of this conference is to promote scientific exchange of interdisciplinary research at the intersection of the learning sciences and computer science. Inspired by the emergence of Massive Open Online Courses (MOOCs) and the accompanying huge shift in thinking about education, this conference was created by ACM as a new scholarly venue and key focal point for the review and presentation of the highest quality research on how learning and teaching can change and improve when done at scale.

MIT’s Office of Digital Learning (ODL) aims to transform teaching and learning at MIT and around the globe through the innovative use of digital technologies. ODL extends MIT’s mens et manus (mind and hand) approach to digital learning, uniquely combining digital tools with individualized teaching, research-driven methodology, an ethos of open sharing, and the in-person magic of MIT, for students at MIT and for learners around the world. Through its many strategic education initiatives, ODL collaborates closely with international governments and organizations in developing new technologies and systems that allow increased participation and quality in education. We are proud to host the Learning at Scale conference next year. Come join us!

------------------------------------

Learning at Scale investigates large-scale, technology-mediated learning environments with many learners and few experts to guide them. Large-scale learning environments are incredibly diverse: massive open online courses (e.g. from edX or Coursera, or connectivist MOOCs), intelligent tutoring systems (e.g. Dreambox or Cognitive Tutor), open learning courseware (e.g. MIT’s OpenCourseWare), learning games (e.g. DragonBox), citizen science communities (e.g. Vital Signs), collaborative programming communities (such as Scratch), community tutorial systems (e.g. StackOverflow), shared critique communities (such as DeviantArt), and the countless informal communities of learners (e.g. the Explain It Like I’m Five subreddit) are all examples of learning at scale. These systems either depend upon large numbers of learners, or they are enriched by data gathered from many previous learners. They share a common purpose--to increase human potential--and a common infrastructure of data and computation to enable learning at scale.

Investigations of learning at scale naturally bring together two different research communities. Since the purpose of these environments is the advancement of human learning, learning scientists are drawn to study established and emerging forms of knowledge production, transfer, modeling, and co-creation. Since large-scale learning environments depend upon complex infrastructures of data storage, transmission, computation, and interface, computer scientists are drawn to the field as a powerful site for the development and application of advanced computational techniques. At its very best, the Learning at Scale community supports the interdisciplinary investigation of these important sites of learning and human development.

The ultimate aim of the Learning at Scale community is the enhancement of human learning. In emerging education technology genres (such as intelligent tutors in the 1980s or MOOCs circa 2012), researchers often use a variety of proxy measures for learning, including measures of participation, persistence, completion, satisfaction, and activity. In the early stages of investigating a technological genre, it is entirely appropriate to begin lines of research by investigating these proxy outcomes. As lines of research mature, however, it is important for the community of researchers to hold each other to increasingly high standards and expectations for directly investigating thoughtfully constructed measures of learning. In the early days of research on MOOCs, for instance, many researchers documented correlations between measures of activity (videos watched, forum posts, clicks) and other measures of activity, and between measures of activity and outcome proxies such as participation, persistence, and completion. As MOOC research matures, additional studies that document these kinds of correlations should give way to more direct measures of student learning and to evidence that instructional techniques, technological infrastructures, learning habits, and experimental interventions improve learning. As a community, we believe that the very best of our early papers define a foundation to build upon, not an established standard to aspire to.
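To make the distinction concrete, here is a minimal sketch (in Python with pandas; the column names and values are hypothetical, not drawn from any real platform) of the activity-to-proxy correlation analysis described above, the style of early result the community now aims to move beyond:

```python
import pandas as pd

# Hypothetical per-learner activity aggregates from a MOOC clickstream.
# Column names and values are illustrative, not from any real platform.
learners = pd.DataFrame({
    "videos_watched": [12, 3, 40, 25, 0, 18],
    "forum_posts":    [2, 0, 9, 4, 0, 1],
    "clicks":         [340, 80, 1200, 700, 15, 500],
    "completed":      [1, 0, 1, 1, 0, 1],  # proxy outcome, not learning itself
})

# Correlation of each activity measure with a proxy outcome (completion):
# the style of early MOOC finding that should give way to direct,
# validated measures of learning.
activity = learners[["videos_watched", "forum_posts", "clicks"]]
print(activity.corrwith(learners["completed"]))

# Activity-with-activity correlation, another common early result.
print(learners["videos_watched"].corr(learners["clicks"]))
```

Such analyses are easy to produce at scale precisely because activity data is abundant; the harder and more valuable work is validating measures that capture learning itself.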

We encourage topically diverse submissions to our conference; example topics include, but are not limited to, the following. Across all topics, we encourage a particular focus on contexts and populations that have been historically underserved.

1. Novel assessments of learning, drawing on computational techniques for automated, peer, or human-assisted assessment
2. New methods for validating inferences about human learning from established measures, assessments, or proxies
3. Experimental interventions in large-scale learning environments that show evidence of improved learning outcomes
* Evidence of heterogeneous treatment effects in large experiments, pointing the way toward potential personalized or adaptive interventions
* Domain independent interventions inspired by social psychology, behavioral economics, and related fields with the potential to benefit learners in diverse fields and disciplines
* Domain specific interventions inspired by discipline-based educational research that have the potential to advance teaching and learning of specific ideas, misconceptions, and theories within a field
4. Methodological papers that address challenges emerging from the “replication crisis” and “new statistics” in the context of Learning at Scale research:
* Best practices in open science, including pre-planning and pre-registration
* Alternatives to conducting and reporting null hypothesis significance testing
* Best practices in the archiving and reuse of learner data in safe, ethical ways
* Advances in differential privacy and other methods that reconcile the opportunities of open science with the challenges of privacy protection (see the sketch after this list)
5. Tools or techniques for personalization and adaptation, based on log data, user modeling, or choice
6. The blended use of large-scale learning environments in specific residential or small-scale learning communities, or the use of sub-groups or small communities within large-scale learning environments
7. The application of insights from small-scale learning communities to large-scale learning environments
8. Usability studies and effectiveness studies of design elements for students or instructors, including:
* Status indicators of student progress
* Status indicators of instructional effectiveness
* Tools and pedagogy to promote community, support learning, or increase retention in at-scale environments
9. Log analysis of student behavior, e.g.:
* Assessing reasons for student outcomes, as determined by modifying tool design
* Modeling students based on responses to variations in tool design
* Evaluation strategies such as quiz or discussion forum design
* Instrumenting systems and data representations to capture relevant indicators of learning
10. New tools and techniques for learning at scale, including:
* Games for learning at scale
* Automated feedback tools (for essay writing, programming, etc.)
* Automated grading tools
* Tools for interactive tutoring
* Tools for learner modeling
* Tools for representing learner models
* Interfaces for harnessing learning data at scale
* Innovations in platforms for supporting learning at scale
* Tools for capturing and managing learning data
* Tools and techniques for managing privacy of learning data
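
As a concrete illustration of the privacy-preserving methods invited under topic 4, here is a minimal sketch of the Laplace mechanism for differential privacy, applied to a hypothetical aggregate learner statistic. The count, sensitivity, and epsilon below are illustrative assumptions, not recommendations:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Return an epsilon-differentially-private estimate of a numeric query.

    Noise is drawn from Laplace(0, sensitivity / epsilon), the standard
    calibration from Dwork et al. (2006).
    """
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Hypothetical release: how many learners completed a course. A counting
# query changes by at most 1 when one learner's record is added or removed,
# so its sensitivity is 1. The count and epsilon here are illustrative.
true_count = 4213
private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(f"Private completion count: {private_count:.0f}")
```

A smaller epsilon means stronger privacy and a noisier release; choosing it for real learner data is a policy decision as much as a technical one.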
