Truly capture synonym
12/19/2023

Large language models (LLMs) have only just emerged into mainstream thought, and already they’ve shown themselves to be a powerful tool for interacting with data. While some might classify them as merely a really cool new form of UI, others think that this may be the start of artificial general intelligence. LLMs can create novel solutions to stacking arbitrary objects, improve at drawing unicorns in TikZ, and explain quantum theory in the style of Snoop Dogg. But does that mean these LLMs actually know anything about eggs, unicorns, or Snoop Dogg? They convert words, sentences, and documents into semantic vectors and know the relative meanings of pieces of language based on these embeddings. They know various weight and bias values in the billions (sometimes trillions) of parameters that allow them to reliably produce correct answers to a variety of challenging human-made tests. But whether they truly “know” something is up for debate.

This is a realm where the experts on the underlying question, how does one know anything, sit outside the world of technology. Philosophers have wrestled with the nature of knowledge for thousands of years. Trying to pin philosophers down on a definition of knowledge has likely driven many PhD students out of academia. In 1963, Edmund Gettier tried to put a simple definition on knowledge with the paper “Is Justified True Belief Knowledge?” In short, to have knowledge of something, that thing has to be true, you have to believe that it’s true, and you have to be justified in believing that it’s true: a justified true belief (JTB).

Take the proposition that I’m going to the bank tomorrow to deposit a check. I’ve cleared my schedule, checked the bank hours on their website, and set my alarm. However, because of remodeling, the bank is closed. This isn’t a JTB: I believe it, it was justified by the information I had, but that information wasn’t true. Of course, that just set philosophers a-quibbling about the natures of justification, belief, and truth. It wasn’t the silver bullet Gettier hoped it was, though it does make for a decent framework to use in thinking about knowledge.

Did you really know kung fu, Neo?

Plenty of philosophers have postulated that knowledge comes from perceiving and interacting with the world. George Berkeley, in “A Treatise Concerning the Principles of Human Knowledge,” writes, “As it is impossible for me to see or feel anything without an actual sensation of that thing, so it is impossible for me to conceive in my thoughts any sensible thing or object distinct from the sensation or perception of it.” Of course, that opens us up to scenarios like The Matrix, where the perceptions are false.

What the constructivists say

Constructivists like Jean Piaget built on the notion of perception as knowledge to consider the symbolic concepts that contain those perceptions.
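The idea that models “know the relative meanings of pieces of language based on these embeddings” can be made concrete with a small sketch. Real models produce embeddings with hundreds or thousands of dimensions; the three-dimensional vectors below are hand-made, purely illustrative stand-ins used only to show how relative meaning falls out of vector geometry via cosine similarity.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: closer to 1.0 means more similar."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy, hand-made "embeddings" (hypothetical values, not from any real model).
embeddings = {
    "unicorn": [0.9, 0.1, 0.2],
    "horse":   [0.8, 0.2, 0.3],
    "egg":     [0.1, 0.9, 0.1],
}

# In this toy space, "unicorn" points in nearly the same direction as "horse",
# so its cosine similarity to "horse" is higher than its similarity to "egg".
print(cosine_similarity(embeddings["unicorn"], embeddings["horse"]))
print(cosine_similarity(embeddings["unicorn"], embeddings["egg"]))
```

The geometry is the whole trick: nothing in the vectors labels what a unicorn is, yet relative closeness encodes that unicorns are more horse-like than egg-like, which is the sense in which an embedding space “knows” relative meaning.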