Blog

  • Project 28

    AI Review


    For this project, I am going to review five AI tools: CoPilot, ChatGPT, Claude, MetaAI, and Perplexity.

    In this exercise, I will research primary sources relevant to the American Revolution.

    Let’s jump in!

    First (A.) on the list is Copilot. Copilot is developed by Microsoft and runs on Azure.

    I used a simple search: Primary sources for the American Revolution.

    Copilot provides limited responses for this search, and I will elaborate on this as we review the other four AI search engines.


    Second (B.) is Claude. Claude is developed by Anthropic.

    Claude provided more search results, specifically citing the Declaration of Independence, a result that was missing from Copilot's response.


    Third (C.) is ChatGPT. ChatGPT is developed by OpenAI.

    Let’s take a closer look at the most widely used AI tool. My search returned government documents, personal correspondence, pamphlets, newspapers, military records, speeches, and Native American accounts. Notably, the search results listed the Declaration of Independence and the Articles of Confederation as the top results.


    Fourth (D.) is Perplexity. Perplexity is developed by Perplexity AI, co-founded by Aravind Srinivas.

    As I examine Perplexity, I see similar results to ChatGPT, with the Declaration of Independence as a top result. Additionally, the Bill of Rights and the Constitution of the United States appear among the top results. Like ChatGPT, Perplexity organizes the information into categories, returning official documents, military records, and personal accounts.


    Fifth (E.) is Meta AI. Meta AI is developed by Meta Platforms (formerly Facebook).

    Meta AI returned similar search results to Perplexity and ChatGPT. The results are categorized, including government documents, personal correspondence, diaries and journals, and pamphlets. Notably, the Declaration of Independence and the Articles of Confederation are also among the top results.


    AI results can vary in how well they align with disciplinary standards in history. While these tools can find primary sources, their effectiveness depends on the quality and range of available data, and they tend to favor the most accessible or popular documents, which can affect historical accuracy and context.

    AI can pull a variety of sources from the internet, including public domain documents, published materials, digital repositories, and news articles. However, it is likely to miss restricted content, unpublished archives, rare documents, and oral histories that aren't easily found online, skewing results toward widely digitized materials. AI may also overlook materials that require a subscription to review.

    For the American Revolution, the primary sources most easily found include government documents like the Declaration of Independence, military records, pamphlets, and speeches. AI generally treats primary sources as original documents that are readily accessible online, so researchers must still evaluate these sources to ensure they meet the standards of historical inquiry.


    So there you have it—a quick overview of five AI search engines!

    Cheers!

  • Project 22 & 23

    Excel built-in visualizations

    Welcome, everyone! In this project, I will explore Excel’s built-in data visualization tools. Excel offers some excellent and useful features that are often overlooked. I’m here to review a few of them and demonstrate their use with a dataset on harassment and bullying based on race.

    The data set I will be reviewing is located here.

    I am also going to go through a checklist of questions for the visualization.

    Let’s get started!


    Checklist for visualizations
    · Assess your data: discrete or continuous?
    · Appropriate scale: too big? Too small? Need a break?
    · How will you label the data? What order? What data is most essential?
    · Use graphic variables carefully: shape, tone, texture, and color convey meanings.
    · Proximity of labels to values is optimal for reducing cognitive load; make it easy for the viewer.
    · Never use changes in area to show a simple increase in value.
    · Review the graph to see if it contains elements that are “incidental” artifacts of production rather than meaningful ones.
    · While illustrations, images, or exaggerated forms may be considered “junk,” they can also help set a theme or tone when used effectively.


    Here is the data set that we are reviewing.

    After reviewing our data set, let's jump right into answering our questions.

    1. This is a discrete data set. Discrete data consists of distinct values that can be counted.
    2. Scale: This scale is appropriate for the data. However, a vertical bar graph could add more depth. Here’s an example using data from the State of Illinois.

    Here is a line graph representing the State of Michigan. You can compare how the data is displayed side by side. Which presentation do you prefer? Moving on to another of our questions: How will you label the data? In what order? What data is most essential? It makes sense to label the data by the state you are focusing on. As shown in the Excel file, the data is organized alphabetically by state. The most important data point here is the percentage of schools reporting harassment and bullying, found in column Y; without schools reporting this data, it would not exist.

    It is best practice to use shape, tone, texture, and color thoughtfully to highlight key insights, and to keep labels close to values for easier interpretation and processing. For example, notice how simple the layout, shapes, and colors are in the two graphs I created. Let's review the graphs. Do you notice any extra lines or shading that seem out of place? Any incidental artifacts? I think both graphs are well designed and clearly represent the data for each state.

    For these two graphs, I selected a clear text font to ensure readability and used blue to represent the data set, which adds a professional and trustworthy feel. This choice helps enhance clarity and makes it easier for viewers to understand the information.
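
    For anyone who prefers to build the same kind of chart in code rather than in Excel, here is a minimal Python/matplotlib sketch of the same design choices: a single blue series, a clear title, and labels kept next to the values. The percentages are placeholders, not the actual figures from the dataset.

    # A minimal sketch of the design choices above: simple layout, one blue series,
    # and labels placed directly on the bars. Values below are placeholders.
    import matplotlib.pyplot as plt

    states = ["Illinois", "Michigan"]
    pct_reporting = [12.0, 9.5]  # placeholder percentages of schools reporting

    fig, ax = plt.subplots()
    bars = ax.bar(states, pct_reporting, color="#1f77b4")  # a clean, professional blue
    ax.set_ylabel("Schools reporting harassment/bullying (%)")
    ax.set_title("Harassment and bullying based on race, by state")
    ax.bar_label(bars, fmt="%.1f%%")  # keep labels next to the values they describe
    plt.show()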

    Just for context, here is a line graph that poorly represents the data from the states of Illinois and Michigan.

    The dark background combined with dark colors for the data makes it hard to read. The overlapping numbers further obscure the information. To maintain clarity and readability, it’s important to present data in a way that is easy to view. I believe using a dark background with dark red and blue colors for the data is not a good choice.


    Here's another example where the same color represents both states, with Michigan in a lighter shade and Illinois in a darker shade. This still maintains a clean look, with bold color creating a dramatic tone for Illinois, which has a higher reported rate of students being harassed.


    As you can see, Excel offers many options for creating data visualizations, but it’s essential to consider your audience. When building visualizations for presentations, it’s best to keep them clean, easy to read, and eye-catching to effectively represent the data.

    Thank you for reviewing Excel visualizations with me!

    Cheers!

  • Project 19

    Network diagrams

    Greetings, welcome back!

    In this post, I am going to review a citation network diagram. I have included the website and diagram for review. There are also questions I want to review:

    1. What is suggested when co-citers reference works within a group of 8-10 authors but no others?

    2. Can I learn anything from this network about gender, race, and seniority in the field?

    3. What sorts of information does this share with a relative newcomer to the field, as opposed to an expert?

    Let’s jump in!


    Starting the review of the diagram feels overwhelming. I can see names, dates, and numbers. It is not until you enlarge the image that the names separate and stop overlapping. This co-citation pattern is based on articles published from 1993 to 2013 across four different journals.

    Below you will see an example of part of the diagram. My laptop couldn’t get a full screenshot.

    The creator of this visual aimed to show what high-prestige, professional, academic philosophy has focused on over the past twenty years.

    Now, let's jump into our first question: What is suggested when co-citers reference works within a group of 8-10 authors but no others? The more a paper is cited, the more important it is likely to be. But when two papers are often cited together, they are likely connected to a broader research question, an ongoing issue, or a key discussion in the field.
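
    To make the idea of co-citation concrete, here is a small Python sketch of how the counts behind a diagram like this could be built: two works are co-cited every time they appear together in the same reference list. The reference lists below are invented placeholders, not data from the actual diagram.

    # Count co-citations: two works are "co-cited" each time they appear together
    # in the same reference list. The titles here are illustrative placeholders.
    from itertools import combinations
    from collections import Counter

    reference_lists = [
        ["Lewis 1973", "Kripke 1980", "Stalnaker 1968"],
        ["Lewis 1973", "Kripke 1980"],
        ["Lewis 1973", "Quine 1960"],
    ]

    co_citations = Counter()
    for refs in reference_lists:
        for a, b in combinations(sorted(set(refs)), 2):
            co_citations[(a, b)] += 1

    # Pairs cited together most often form the strongest edges in the network.
    print(co_citations.most_common(3))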

    Our second question is: Can I learn anything from this network about gender, race, and seniority in the field? I see citations dating back to the 1950s, '60s, '70s, and '80s. Historically, during these decades, white men dominated philosophical publications. While I can't confirm this from the graph alone without reviewing every name, I strongly suspect that white men were the primary contributors to these articles. For example, the designer mentions David Lewis, a very well-known philosopher of the 20th century.

    Our final question is: What sorts of information does this share with a relative newcomer to the field, as opposed to an expert? A newcomer gets a sense of what is being discussed within the field and how disciplinary conversations are connected. I appreciate seeing the different clusters. The two main clusters focus on highly cited items related to metaphysics, while the smaller clusters center on other areas of study, such as the divided mind and ethics.

    Below you will see the smaller clusters.

    This is a unique dataset, limited to two decades and four philosophy journals. The designer created it for personal use, and I think it’s fantastic when someone develops what works for them and shares it with others. It opens up new opportunities for collaboration.

    As always, if you have questions or comments, please leave them below.

    Cheers!

  • Project 25

    Zotero

    Hello everyone! Welcome to another post about data. Today, I’ll be discussing Zotero, a fantastic program for managing bibliography resources.


    If you are interested in this resource, I highly suggest you download the program and add the extension to your favorite web browser.

    Let’s get started!


    After downloading Zotero and adding the Safari extension, I will use the Waldo Library catalog to collect 25 titles on World War II and the Holocaust.

    If you need assistance with downloading Zotero and adding an extension to your favorite web browser, I have included a short YouTube video here.

    Step 1:

    Conduct a search within the Waldo Library catalog.

    Step 2:

    Gather 25 titles. After selecting the articles I want to review, I will go to the top of the page, next to the address bar, and click on the blue folder.

    Step 3:

    Select all. This ensures that all 25 articles will be added to Zotero and placed in the folder I created, HST 5891.

    Step 4:

    Review the collected data (right window). This step is important because data doesn’t always transfer correctly, and key information for references could be missing. We are also checking for elements that may need to be cleaned for future use in a bibliography.

    Step 5:

    Once you have reviewed your data and confirmed its accuracy, you can begin using your resources for projects.

    Fun fact, Zotero also saves PDF files into your folder, giving you access at any time!

    Now that I have walked through selecting multiple items with Zotero, let's take a minute to review. Zotero data doesn't always transfer correctly, so it's important to check that everything is accurate. Errors in citations can affect the credibility and accuracy of your research, so reviewing the bibliographic information now saves cleanup later.

    NOTE: Please do not plagiarize; use this resource instead!

    Zotero simplifies research by helping users collect and organize bibliographic data with its browser extension and API. The browser extension captures bibliographic data from the page you are viewing, making it easy to gather sources like articles on World War II with a single click. Zotero's API lets users interact with their library programmatically, fetch structured bibliographic data, and build citation collections. Together, these features help researchers gather and manage citation information efficiently, reducing errors and streamlining workflows.
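
    As a rough illustration of the API side, here is a minimal Python sketch that pulls the top-level items from a collection with Zotero's Web API (version 3) and flags records with missing fields. The user ID, API key, and collection key are placeholders you would replace with your own values from your Zotero account settings.

    # A minimal sketch: fetch up to 25 top-level items from a Zotero collection
    # via the Web API (v3) and flag records with missing fields. USER_ID, API_KEY,
    # and COLLECTION_KEY are placeholders.
    import requests

    USER_ID = "1234567"
    API_KEY = "your-zotero-api-key"
    COLLECTION_KEY = "ABCD1234"  # e.g., the key for my HST 5891 collection

    url = f"https://api.zotero.org/users/{USER_ID}/collections/{COLLECTION_KEY}/items/top"
    resp = requests.get(
        url,
        headers={"Zotero-API-Version": "3", "Authorization": f"Bearer {API_KEY}"},
        params={"format": "json", "limit": 25},
    )
    resp.raise_for_status()

    for item in resp.json():
        data = item["data"]
        missing = [f for f in ("title", "date", "creators") if not data.get(f)]
        if missing:
            print(f"Check this record, missing {missing}: {data.get('title', '(no title)')}")

    This is essentially a scripted version of Step 4 above: a quick pass over the collection to catch records that did not transfer cleanly.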

    Any questions, please use the comments section below.

    Cheers!

  • Project 18

    Stanford Spatial History project

    Welcome back!

    This week, I will compare data layouts, including color, formatting, and presentation. Additionally, I will address the following questions:

    • What choices were made by the designers as they prepared to display the data?
    • What limiting factors did they have to take into account?
    • What stories are they telling?
    • Who is the audience and how much do they know before looking at the visualization?
    • What further questions might the visualization inspire others to pursue?

    I will be reviewing projects from the Stanford Spatial History Project, a website that is no longer being updated as of 2022 (FYI). You can locate the website here.


    For my first example, I am reviewing the Holocaust Geographies Collaborative.

    Next, I am interested in the new order data from 1938 to 1945. Figure 6, Revisionist Power Land Area Changes, allows us to review territorial changes up to 1945 and observe how the data shifts each year. In this example, I have highlighted the years 1938 and 1944.

    The information layout is clear and easy to follow. I especially appreciate the toggle options that allow for viewing each year and observing how the data changes over time. The color choices are simple yet effective, with black, for example, used to highlight dramatic gains in Germany.

    The designers focused on land area changes during WWII, telling the story of how revisionist powers fared during the war. The intended audience is likely individuals with a general interest in history, such as students or researchers. Before looking at the visualization, they may have a basic understanding of WWII but might not be familiar with the specific territorial changes or the role of revisionist powers.

    I believe they chose a bar graph to represent the data because it clearly illustrates yearly gains and losses. As for limiting factors, based on the reading, cartographers used borders to show prejudice. Propaganda was widespread during this time, and what better way to portray it than by depicting borders, disputed territories, or occupation zones in a specific way? These maps could influence public opinion and reinforce resistance against the occupiers.

    One question the visualization might inspire is: How do the land area changes during WWII compare to territorial shifts in other major conflicts in history?

    Okay, let’s move onto our second data visualization.


  • Project 39

    Photogrammetry

    Welcome!

    In this project, I explored photogrammetry, a topic I’ve been interested in for its applications in museum curation and archive special collections.

    For this project, I reviewed Chris Reilly's Photogrammetry for Product Design and AEC video on LinkedIn Learning, located here.

    Photogrammetry for Museum Curation & Archival Special Collections

    I believe photogrammetry offers valuable applications for museum curation and archival special collections, making it an essential tool for preservation, documentation, and accessibility. After watching Chris Reilly’s LinkedIn Learning video, I gained an understanding of how this technology can be incorporated into my work as a curator and archivist.

    Reilly provided a detailed walkthrough of the photogrammetry process, emphasizing proper lighting, consistent angles, and image overlap to generate accurate 3D models. His breakdown of using software like Agisoft Metashape stressed the importance of precision in producing high-quality digital representations. This level of detail is critical in collections management, where maintaining the integrity of historical materials is a top priority.
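
    For readers curious about what this workflow looks like when scripted, here is a rough Python sketch using Metashape's scripting module (available in Metashape Professional). I kept the default parameters because option names vary between versions, and the photo folder path is a placeholder, so treat this as an outline of the pipeline rather than a ready-to-run recipe.

    # Rough outline of the photogrammetry pipeline driven from Metashape's Python
    # module. Default parameters are used; the photo folder path is a placeholder.
    import glob
    import Metashape

    photos = glob.glob("captures/artifact_01/*.jpg")  # placeholder path to the photo set

    doc = Metashape.Document()
    chunk = doc.addChunk()
    chunk.addPhotos(photos)

    chunk.matchPhotos()      # find overlapping features across the photo set
    chunk.alignCameras()     # recover camera positions and a sparse point cloud
    chunk.buildDepthMaps()   # dense reconstruction from the aligned images
    chunk.buildModel()       # mesh the dense data into a 3D model
    chunk.buildUV()
    chunk.buildTexture()     # project the photos back onto the mesh as a texture

    doc.save("artifact_01.psx")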

    A key takeaway for me was how photogrammetry enhances digital preservation efforts. Museums and archives can create 3D models of fragile or rare objects, reducing the need for physical handling while still providing access for researchers and the public. This is especially beneficial for items that are difficult to display due to their condition or size.

    Integrating photogrammetry into archival special collections improves accessibility by allowing institutions to share interactive digital models online. It also serves as a crucial documentation tool for conservation and inventory management. As someone working in collections, I see photogrammetry as a bridge between preservation and public engagement, ensuring that materials remain accessible for research and education.

    This was my first introductory course in photogrammetry, and I’m excited to apply these techniques in museum and archival settings in the near future!

    Below, I have added a few screenshots taken throughout Reilly's course. If you are interested in photogrammetry and how it might help in your profession, I have included the course link above. Once you complete the course, you will receive a certificate of completion.

    Cheers!




  • Project 17

    data.gov – towed cars in Chicago, Illinois

    Hello everyone! Welcome back.

    In this project I am going to review the website data.gov. I am researching vehicles towed in Chicago, Illinois here. Why, you ask, am I researching towed vehicles in the city of Chicago? Well, for this assignment, of course! I also used to live in Chicago, and wouldn't you know, I have had my vehicle towed! Ha.

    Using data.gov, I am going to narrow down a specific vehicle make and color. I will then review the dataset to see where those vehicles were towed and whether they belong to residents or non-residents of Illinois. I'm also going to answer questions such as: Why is this information important to the City of Chicago? What purpose does it serve, and is there information missing that could improve the data?

    Let’s get started!

    Step 1:

    Search towed vehicles in the city of Chicago.

    Step 2:

    Select data to review.

    NOTE: This data covers the last 90 days.

    Step 3:

    Choose a dataset to review and upload the file to OpenRefine for analysis.

    Step 4:

    In this step, I will refine my search to focus on Honda vehicles that are four-door and red. Using the text facet feature, I have filtered the results to display only Honda, 4D, red vehicles.
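
    For comparison, here is roughly the same filtering done in Python with pandas instead of OpenRefine's text facets. The file name and the column names ("Make," "Style," "Color," "State," "Plate") are my assumptions about the towed-vehicles CSV export and may differ in the actual file.

    # Roughly the same filtering as the OpenRefine text facets, done with pandas.
    # File name and column names are assumptions about the CSV export.
    import pandas as pd

    df = pd.read_csv("towed_vehicles_chicago.csv")  # placeholder filename

    red_honda_4d = df[
        df["Make"].str.contains("HOND", case=False, na=False)
        & df["Style"].str.contains("4D", case=False, na=False)
        & df["Color"].str.contains("RED", case=False, na=False)
    ]

    print(red_honda_4d[["State", "Plate"]])
    print("Missing plate numbers:", red_honda_4d["Plate"].isna().sum())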

    Step 5:

    Reviewing the results.

    Alright, here are my final results. What stands out right away? I notice that all but one of the remaining vehicles are from Illinois; the only exception is a vehicle from Indiana. Second, not all vehicles listed have a registered plate number.

    Step 6:

    Let’s answer our questions:

    1. Why is this data important to the City of Chicago?
      • Tracking towed vehicles helps the City of Chicago enforce parking regulations, reduce congestion, and maintain public safety. It allows vehicle owners to locate and retrieve their cars efficiently while ensuring fines and fees are collected, generating revenue for the city. Additionally, tracking provides data to improve policies and prevent unauthorized towing.
    2. What purpose does the data serve, and is there information missing that could improve it?
      • This dataset serves to assist with parking enforcement, revenue tracking, and public safety by detailing where vehicles are towed in Chicago. However, the missing vehicle models (except for the Indiana-registered car) and the absence of plate numbers for two vehicles reduce its effectiveness; including those details would improve identification and analysis.

    So there you have it! We’ve reviewed a dataset from data.gov, narrowed down a search to analyze the data, and examined its purpose along with the missing information.

    As always, please leave a comment if you have questions.

    Cheers!

  • Project 15

    Proudly Powered by Omeka

    Welcome back! In this week's post I am going to review educational institutions that are using Omeka.

    The first website I am reviewing is by The Sheridan Libraries and Museums. https://exhibits.library.jhu.edu

    The Arthur Friedheim Library holds the Rosa Ponselle collection, which was donated in 2015 by the Lester Dequaine/Frank Chiarenza Foundation.

    Now, let’s take a look…

    Force of Destiny: The Rosa Ponselle Collection at Peabody

    This website tells the story of legendary opera singer Rosa Ponselle.

    Rosa Ponselle was the first American opera star of the 1920s, and this website is a tribute to her stardom.

    The website is laid out chronologically with clear tabs indicating each page.

    The creators traced Rosa’s life from childhood through her retirement and passing, presenting a complete and chronological narrative.

    It is impressive how the creators incorporated various types of metadata into this website. Throughout, you’ll find photographs, newspaper clippings, brochures, postage stamps, and even audio recordings. I especially appreciated the audio recordings, as they allow viewers to hear Rosa sing, adding a unique and immersive element to the experience.

    If you enjoy presidential history, Rosa was invited to perform at the White House for President Franklin Delano Roosevelt in 1921.

    Rosa had a remarkable career at the Metropolitan Opera, performing from 1918 to 1937. She gave over 400 performances, including one in which she wore blackface makeup. Today, this practice is recognized as racist, and the website has removed the photograph of Rosa in her role as Selika in L'Africaine. However, the image is still accessible through a provided link, and I too have provided a link to the image here. It is a stunning portrait, but I agree with the creator's choice to make it available without showcasing it.

    As you explore the site, you’ll come across various objects. Clicking on any item, whether a program or a photograph, reveals the metadata created for each one. Dublin Core is the metadata schema being used. Tags are used throughout the site to help locate and connect the collection. Additionally, plugins provide access to more information, linking related objects, people, and collections. See examples below.

    I encourage you to explore the entire site, navigating through each page to review the objects and their metadata. If you’re feeling nostalgic, you can listen to the audio recordings and step back in time to the Roaring ’20s. Enjoy the tour!

    Cheers!


  • Project 14

    LCSH Subject Heading changes

    Welcome back everyone! I want to highlight this project because of the controversy surrounding the subject. Ha! No pun intended.

    This aspect of Library of Congress Subject Headings (LCSH) will always raise ethical concerns about proper cataloging practices. There has been, and will continue to be, a push for social justice in how the Library of Congress updates subject headings. Nearly every public library in the United States follows this system, and it is thanks to the efforts of public librarians, dedicated university students, and impartial advocates that outdated and harmful language is challenged. The continued use of marginalizing terminology in cataloging inflicts mental and emotional harm on vulnerable communities, reinforcing the need for ongoing advocacy and reform.

    In this project I am going to review two LCSH subject headings that have recently been updated. I will also walk you through how I located those subject headings in the archived Library of Congress Subject Headings PDF files on their website: https://www.loc.gov/aba/publications/FreeLCSH/archivedlcsh.html


  • Omeka Project

    Contributing to Omeka website

    Welcome back everyone!

    In this project, I will guide you through the process of setting up an Omeka account as a contributor on Dr. Hadden's website and contributing two items to that site.

    The two items I will be using are as follows:

    • An image titled David
    • An image titled Thinking Woman

    Step 1:

    I joined Dr. Hadden’s Omeka website as a contributor through an email invitation. After confirming my access, I began contributing two items to the site: an image titled David, which is a photograph of the statue of David, and Thinking Woman, a print on canvas from an original painting.

    Step 2:

    In this step I will begin by adding my first item, the image titled David.

    You will see that when you add an item in Omeka, you are using Dublin Core elements. If you have never worked with Dublin Core, I have posted a reference link here: Dublin Core. I highly recommend reviewing the Dublin Core elements before beginning a project.

    Following the list of Dublin Core elements, I create metadata for each element for the image titled David.

    NOTE: It is important to note that I am creating a record for the statue of David, using an image of the statue as my resource.
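
    To show what the finished record roughly looks like, here is an illustrative Dublin Core record for the David item written out as a plain Python dictionary. The values are examples of what each element might hold, not the exact text I entered on Dr. Hadden's site.

    # An illustrative Dublin Core record for the David item. The values are
    # examples, not the exact text entered on the Omeka site.
    dublin_core_david = {
        "Title": "David",
        "Subject": "Renaissance sculpture",
        "Description": "Record for Michelangelo's statue of David, "
                       "documented here through a photograph of the statue.",
        "Creator": "Michelangelo Buonarroti",
        "Source": "Photograph of the statue at the Galleria dell'Accademia, Florence",
        "Date": "1501-1504",          # date the statue was created
        "Type": "Physical object",    # the record describes the statue, not the photo
        "Format": "Marble sculpture",
        "Language": "en",
        "Rights": "Image used for educational purposes",
    }

    for element, value in dublin_core_david.items():
        print(f"{element}: {value}")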

    Step 3:

    Moving on to the next category, I add the Item Type Metadata for the image titled David.

    Here is where I am creating the metadata for the image of the statue.

    Step 4:

    Moving on to the next category, I am going to upload my image file, Statue of David.

    Step 5:

    Now, moving on to the final category, I will add tags to this item. Tags will make it easier for people to locate the item.

    I selected the most common tags to accurately represent the item David.

    After adding the tags, I selected the Add Item button and there you have it! The item is now live on the website for viewing.

    Final Review!

    You will see the item David has been created.
